6.1 Introduction

The conceptual structure of continuum physics rests on two fundamental pillars: balance laws (or conservation laws) and constitutive laws. While constitutive laws, which govern the specific properties of the material in which the physical phenomenon occurs (e.g. viscous fluids, elastic solids, elastic dielectrics, etc.), admit a great variety of possible relations (perhaps escaping any attempt at a definitive general theory), conservation laws admit a clear mathematical statement in the form of partial differential equations. In the general multidimensional spatial setting, a homogeneous hyperbolic conservation law takes the form [3, 19, 21]

$$\displaystyle \begin{aligned} \partial_t u+\sum_{\alpha=1}^d\partial_\alpha\,F_\alpha(u)=0, \end{aligned} $$
(6.1)

where the state variable u, taking values in \(\mathbb {R}^m\), depends on the spatial variables (x1, …, xd) and on time t, F1, …, Fd are smooth maps from \(\mathbb {R}^m\) to \(\mathbb {R}^m\), ∂t denotes the partial derivative with respect to t, and ∂α denotes the partial derivative with respect to xα.

In these notes we shall focus on the one-dimensional spatial case, governed by the first order partial differential equation

$$\displaystyle \begin{aligned} \partial_t u+\partial_x f(u)=0, \end{aligned} $$
(6.2)

where \(f\in C^2(\mathbb {R}^N;\mathbb {R}^N)\), \(u:[0,\infty )\times \mathbb {R}\to \mathbb {R}^N\), and N ≥ 1. The function u = u(t, x) is termed the conserved quantity and f = f(u) the flux. If N = 1 we say that (6.2) is a scalar conservation law; if N > 1 we say that (6.2) is a system of conservation laws, and it stands for

$$\displaystyle \begin{aligned} \begin{cases} \partial_t u_1+\partial_x f_1(u_1,\ldots,u_N)=0,&{}\\ \ldots\ldots&{}\\ \partial_t u_N+\partial_x f_N(u_1,\ldots,u_N)=0,&{} \end{cases} \end{aligned}$$

where

$$\displaystyle \begin{aligned} &u=u(t,x)=(u_1(t,x),\ldots,u_N(t,x)),\\ &f=f(u)=(f_1(u_1,\ldots,u_N),\ldots, f_N(u_1,\ldots,u_N)). \end{aligned} $$

In this section we try to answer the following questions:

(Q.1):

Why do we use the terms conservation law, conserved quantity, and flux for (6.2), u, and f, respectively?

(Q.2):

What kind of physical phenomena is (6.2) able to describe?

(Q.3):

What are the mathematical features of the solutions of (6.2)?

Let us answer (Q.1). If u is a smooth solution of (6.2) and a < b, we have (see Fig. 6.1)

$$\displaystyle \begin{aligned} \frac{d}{dt}\int_a^b u(t,x)dx&=\int_a^b \partial_t u(t,x)dx\\ &=-\int_a^b \partial_x f(u(t,x))dx=f(u(t,a))-f(u(t,b))\\ &=\left[\text{inflow at}\ x=a\ \text{and time}\ t\right]\\ &\quad - \left[\text{outflow at}\ x=b\ \text{and time}\ t\right]. \end{aligned} $$

In other words, the conserved quantity u is neither created nor destroyed: the amount of u contained in the interval [a, b] changes only according to the flow through the two end points.

Fig. 6.1
figure 1

Flow through the end points
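
This balance can be checked numerically. The following sketch (in Python; the smooth solution u(t, x) = x∕(1 + t) of the Burgers equation (6.3) below and the interval [a, b] are chosen here only for illustration, they are not from the text) compares d∕dt∫_a^b u dx with the flux balance f(u(t, a)) − f(u(t, b)).

```python
import numpy as np

# Illustrative smooth solution of u_t + (u^2/2)_x = 0 and its flux.
def u(t, x):
    return x / (1.0 + t)

def f(w):
    return 0.5 * w**2

a, b, t0, dt = -1.0, 2.0, 0.5, 1e-6
x = np.linspace(a, b, 20001)

def mass(s):                       # total amount of u in [a, b] at time s
    w = u(s, x)
    return np.sum(0.5 * (w[1:] + w[:-1]) * np.diff(x))

dmass_dt = (mass(t0 + dt) - mass(t0 - dt)) / (2 * dt)   # d/dt of the integral
flux_balance = f(u(t0, a)) - f(u(t0, b))                 # inflow minus outflow

print(dmass_dt, flux_balance)      # both ~ -2/3; they agree up to O(dt^2)
```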

To answer (Q.2) we present some paradigmatic models of continuum mechanics expressed in terms of conservation laws.

Rarefied Gas

The simplest model of gas dynamics in one space dimension considers a material made of non-interacting particles, idealizing a gas of very low density. In the Lagrangian description, we can identify the particles by their initial position y. Let φ(t, y) be the position at time t of the particle that at time t = 0 was in y; its velocity and acceleration are ∂tφ and \(\partial _{tt}^2\varphi \), respectively. Since the particles do not interact with one another, two different particles cannot occupy the same position at the same time; therefore φ(t, ⋅) is increasing and, in particular, invertible. Let ψ(t, ⋅) be the inverse of φ(t, ⋅), i.e.,

$$\displaystyle \begin{aligned} y=\psi(t,\varphi(t,y)) \end{aligned}$$

and

$$\displaystyle \begin{aligned} x=\varphi(t,y)\Longleftrightarrow y=\psi(t,x). \end{aligned}$$

Let u(t, x) be the velocity of the particle that at time t is in x, namely

$$\displaystyle \begin{aligned} x&=\varphi(t,y),\\ u(t,x)&=u(t,\varphi(t,y))=\partial_t\varphi(t,y),\\ u(t,x)&=\partial_t\varphi(t,\psi(t,x)). \end{aligned} $$

The acceleration of the particle that at time t is in x is

$$\displaystyle \begin{aligned} \partial_{tt}^2\varphi(t,y)&=\partial_t\Big(\partial_t\varphi(t,y)\Big)=\partial_t\Big(u(t,\varphi(t,y))\Big)\\ &=\partial_t u(t,\varphi(t,y))+\partial_x u(t,\varphi(t,y))\partial_t\varphi(t,y)\\ &=\partial_t u(t,x)+\partial_x u(t,x)u(t,x). \end{aligned} $$

Since the particles do not interact with one another, there are no forces acting on them. Then, the balance of linear momentum delivers the equation

$$\displaystyle \begin{aligned} \partial_t u+\partial_x \left(\frac{u^2}{2}\right)=0, \end{aligned} $$
(6.3)

which is termed the Burgers equation [5, 6, 18].

Traffic Flow 1

We begin with the fluid-dynamic road traffic model introduced by Lighthill, Whitham, and Richards [15, 17]. We consider a one-way, one-lane infinite road. Let ρ = ρ(t, x) be the density of vehicles at time t at the position x. Assuming that the vehicles behave as fluid particles we have [8, 9]

$$\displaystyle \begin{aligned} \partial_t\rho+\partial_x(\rho v)=0, \end{aligned} $$
(6.4)

where v is the velocity of the vehicles. The key assumption of Lighthill, Whitham, and Richards is that the velocity depends only on the density, namely

$$\displaystyle \begin{aligned} v=v(\rho), \end{aligned} $$
(6.5)

which is somewhat reasonable in the case of highways: the drivers regulate their velocity according to the number of vehicles in front of them. Therefore, writing

$$\displaystyle \begin{aligned} f(\rho)=\rho v(\rho), \end{aligned}$$

(6.4) reads

$$\displaystyle \begin{aligned} \partial_t\rho+\partial_x f(\rho)=0. \end{aligned} $$
(6.6)

Concerning v = v(ρ), it is reasonable to assume that

$$\displaystyle \begin{aligned} v(0)=v_{max},\qquad v(\rho_{max})=0,\qquad v\ \text{is decreasing}. \end{aligned}$$

In particular, Lighthill, Whitham, and Richards proposed

$$\displaystyle \begin{aligned} v(\rho)=v_{max}\left(1-\frac{\rho}{\rho_{max}}\right). \end{aligned}$$
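
As a small illustration, the following sketch evaluates the Lighthill-Whitham-Richards flux f(ρ) = ρ v(ρ) with the affine velocity law above; the numerical values of ρmax and vmax are placeholders chosen only for the example. The flux is concave and the throughput is maximal at ρ = ρmax∕2, with value vmax ρmax∕4.

```python
import numpy as np

rho_max, v_max = 1.0, 2.0          # illustrative values, not from the text

def v(rho):
    return v_max * (1.0 - rho / rho_max)

def f(rho):                        # LWR flux: density times velocity
    return rho * v(rho)

rho = np.linspace(0.0, rho_max, 1001)
i = np.argmax(f(rho))
print(rho[i], f(rho)[i])           # maximum throughput at rho_max/2, value v_max*rho_max/4
```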

Compressible Non-viscous Gas

The Lighthill-Whitham-Richards traffic model and the Burgers equation are models expressed in terms of scalar conservation laws; we continue by showing some models expressed in terms of systems of conservation laws.

The Euler equations for a non-viscous compressible gas in Lagrangian coordinates are

$$\displaystyle \begin{aligned} \begin{cases} \partial_t v-\partial_x u=0,&\text{(conservation of mass)}\\ \partial_t u+\partial_x p=0,&\text{(conservation of momentum)}\\ \displaystyle\partial_t \left(e+\frac{u^2}{2}\right)+\partial_x (up)=0,&\text{(conservation of energy)} \end{cases} \end{aligned} $$
(6.7)

where v is the specific volume (i.e., 1∕v is the density), u is the velocity, e is the internal energy, and p is the pressure of the gas. Since we have three equations in four unknowns, we need a constitutive equation

$$\displaystyle \begin{aligned} p=p(e,v), \end{aligned}$$

which selects the specific gas under consideration.

Nonlinear Elasticity

Let us consider a one-dimensional elastic material body whose configuration in the Lagrangian description is represented by the displacement field w(x, t). Then the strain measure is given by u = ∂xw and, assuming the constitutive equation σ = f(u), which gives the Piola-Kirchhoff stress σ in terms of the strain measure u, the balance of linear momentum delivers the wave equation of motion [6, 10, 16]

$$\displaystyle \begin{aligned} \partial_{tt}^2 w-\partial_x f(u)=0. \end{aligned} $$
(6.8)

Setting v = ∂tw for the velocity field, the previous wave equation takes the form of the following system of conservation laws

$$\displaystyle \begin{aligned} \begin{cases} \partial_t u-\partial_x v=0,&{}\\ \partial_t v-\partial_x f(u) =0.&{} \end{cases} \end{aligned}$$

Shallow Water Equations

Let h(x, t) be the depth and u(x, t) the mean velocity of a fluid moving in a rectangular channel of constant breadth, whose bed has constant inclination α. Let Cf be the friction coefficient of the friction force originating from the interaction of the fluid with the bed, and let g be the gravity acceleration. The equations governing the motion of the fluid are given by

$$\displaystyle \begin{aligned} \begin{cases} \partial_t h+u\partial_x h+h\partial_x u=0,&{}\\ \partial_t u+u\partial_x u+g\cos\alpha\partial_x h=g\sin\alpha-C_f^2(u^2/h).&{} \end{cases} \end{aligned}$$

In the shallow water theory, the height of the water surface above the bottom is assumed to be small with respect to the typical wavelengths, and the terms representing the slope and the friction are neglected, giving rise to the simplified equations [20]

$$\displaystyle \begin{aligned} \begin{cases} \partial_t c+u\partial_x c+(c\partial_x u/2)=0,&{}\\ \partial_t u+u\partial_x u+2c\partial_x c=0,&{} \end{cases} \end{aligned}$$

where \(c(x,t)=\sqrt {gh(x,t)}\).
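
The reduction to the variables (c, u) can be verified symbolically. The following sketch (using sympy; as part of the simplification, slope and friction are dropped and g cos α is replaced by g, which is how the system in (c, u) above is obtained) checks that the (c, u)-system is equivalent to the simplified system in (h, u).

```python
import sympy as sp

t, x, g = sp.symbols('t x g', positive=True)
h = sp.Function('h')(t, x)
u = sp.Function('u')(t, x)
c = sp.sqrt(g * h)

# residuals of the simplified (h, u)-system (slope, friction dropped; g*cos(alpha) ~ g)
mass = sp.diff(h, t) + u * sp.diff(h, x) + h * sp.diff(u, x)
mom  = sp.diff(u, t) + u * sp.diff(u, x) + g * sp.diff(h, x)

# residuals of the (c, u)-system stated in the text
eq1 = sp.diff(c, t) + u * sp.diff(c, x) + c * sp.diff(u, x) / 2
eq2 = sp.diff(u, t) + u * sp.diff(u, x) + 2 * c * sp.diff(c, x)

print(sp.simplify(2 * c / g * eq1 - mass))   # 0: first equations are equivalent
print(sp.simplify(eq2 - mom))                # 0: second equations coincide
```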

Traffic Flow 2

Finally, we have the traffic model proposed by Aw and Rascle [1]

$$\displaystyle \begin{aligned} \begin{cases} \partial_t\rho+\partial_x\left(y+\rho^{\gamma+1}\right)=0,&{}\\ \displaystyle\partial_t y+\partial_x\left(\frac{y^2}{2}-y\rho^\gamma\right)=0,&{} \end{cases} \end{aligned} $$
(6.9)

where ρ is the density, y is the generalized momentum of the vehicles, and γ is a positive constant.

Regarding (Q.3), one of the main features exhibited by hyperbolic conservation laws is the possible creation of discontinuities. Indeed, even scalar problems with analytic flux and initial condition, like

$$\displaystyle \begin{aligned} \begin{cases} \displaystyle\partial_t u+\partial_x\left(\frac{u^2}{2}\right)=0,&t>0,\,x\in\mathbb{R},\\ u(0,x)=\displaystyle\frac{1}{1+x^2},&x\in\mathbb{R}, \end{cases} \end{aligned} $$
(6.10)

experience the creation of discontinuities in finite time [5, 6, 18], see Fig. 6.2.

Fig. 6.2
figure 2

Spontaneous creation of discontinuity in finite time
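
For (6.10) the blow-up can be made quantitative: along the characteristics x = x0 + u0(x0)t the smooth solution is constant, and these lines first cross at T∗ = −1∕min u0′ (a classical computation via characteristics, not carried out in these notes). A minimal numerical evaluation:

```python
import numpy as np

# u0(x) = 1/(1+x^2); the first crossing time of the characteristics of (6.10)
# is T* = -1/min u0', evaluated here on a fine grid.
u0_prime = lambda x: -2.0 * x / (1.0 + x**2)**2

x = np.linspace(-10.0, 10.0, 200001)
T_star = -1.0 / u0_prime(x).min()
print(T_star)        # ~1.5396, i.e. 8/(3*sqrt(3))
```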

The next sections are organized as follows. In Sect. 6.2 we introduce weak and entropy solutions and prove the classical uniqueness result of Kružkov. In Sect. 6.3 we introduce and solve the Riemann problem. In Sect. 6.4 we present one of the many different approaches to the existence issue: the vanishing viscosity. Finally, some elementary facts on BV functions are collected in the Appendix.

6.2 Entropy Solutions

We pointed out in Sect. 6.1 that even a Cauchy problem of the type

$$\displaystyle \begin{aligned} \partial_t u+\partial_x\left(\frac{u^2}{2}\right)=0, \qquad u(0,x)=\frac{1}{1+x^2}, \end{aligned}$$

with analytic flux (u↦u2∕2) and analytic initial condition (x↦1∕(1 + x2)) may develop discontinuities in finite time. It is therefore evident that additional physical and mathematical conditions must be imposed in order to reach a meaningful concept of solution. As a consequence we develop a well-posedness theory for conservation laws in the framework of entropy solutions, which are special distributional solutions satisfying suitable additional inequalities (or E-conditions). The definition is inspired by the Second Law of Thermodynamics: we consider only the distributional solutions along which the entropies decrease. Note that the physical entropies are all concave maps, while in the mathematical community the entropies are assumed to be convex; this explains the discrepancy between the usual Second Law of Thermodynamics and the conditions considered here.

6.2.1 Weak Solutions

Consider the scalar conservation law

$$\displaystyle \begin{aligned} \partial_t u+\partial_x f(u)=0,\qquad t>0,\,x\in\mathbb{R}, \end{aligned} $$
(6.11)

endowed with the initial condition

$$\displaystyle \begin{aligned} u(0,x)=u_0(x),\qquad x\in\mathbb{R}, \end{aligned} $$
(6.12)

and assume

$$\displaystyle \begin{aligned} f\in C^2(\mathbb{R}),\qquad u_0\in L^\infty_{loc}(\mathbb{R}). \end{aligned} $$
(6.13)

Definition 6.2.1

A function \(u:[0,\infty )\times \mathbb {R}\to \mathbb {R}\) is a weak solution of the Cauchy problem (6.11) and (6.12), if

  1. (i)

    \(u\in L^\infty _{loc}((0,\infty )\times \mathbb {R})\);

  2. (ii)

    u satisfies (6.11) and (6.12) in the sense of distributions in \([0,\infty )\times \mathbb {R}\), namely for every test function \(\varphi \in C^\infty (\mathbb {R}^2)\) with compact support we have

    $$\displaystyle \begin{aligned} \int_0^\infty\!\!\!\int_{\mathbb{R}}\left(u\partial_t\varphi+f(u)\partial_x\varphi\right)dtdx+\int_{\mathbb{R}} u_0(x)\varphi(0,x)dx=0. \end{aligned}$$

We say that u is a weak solution of the conservation law (6.11) if (i) holds and

  1. (iii)

    u satisfies (6.11) in the sense of distributions in \((0,\infty )\times \mathbb {R}\), namely for every test function \(\varphi \in C^\infty ((0,\infty )\times \mathbb {R})\) with compact support we have

    $$\displaystyle \begin{aligned} \int_0^\infty\!\!\!\int_{\mathbb{R}}\left(u\partial_t\varphi+f(u)\partial_x\varphi\right)dtdx=0. \end{aligned}$$

A direct consequence of the Dominated Convergence Theorem is the following.

Theorem 6.2.1

Let {uε}ε>0 and u be functions defined on \([0,\infty )\times \mathbb {R}\) with values in \(\mathbb {R}\). If

  1. (i)

    there exists M > 0 such that\(\left \|u_\varepsilon \right \|{ }_{L^\infty ((0,\infty )\times \mathbb {R})}\le M\)for every ε > 0;

  2. (ii)

    \(u\in L^\infty ((0,\infty )\times \mathbb {R});\)

  3. (iii)

    uε → u in\(L^1_{loc}((0,\infty )\times \mathbb {R})\)as ε → 0;

  4. (iv)

    every uεis a weak solution of (6.11);

then

$$\displaystyle \begin{aligned}u\ \mathit{\mbox{is a weak solution of (6.11).}}\end{aligned}$$

6.2.2 Rankine-Hugoniot Condition

The introduction of the notion of weak solution opens the possibility of dealing with discontinuous functions which, as remarked above, naturally occur in the mathematics of conservation laws. In this section we analyze shocks, which are the simplest discontinuous weak solutions of (6.11).

Let \(u_-,\,u_+,\,\lambda \in \mathbb {R}\) be given and consider the function

$$\displaystyle \begin{aligned} U:[0,\infty)\times\mathbb{R}\longrightarrow \mathbb{R},\qquad U(t,x)= \begin{cases} u_-,&\quad \text{if}\ x<\lambda t,\\ u_+,&\quad \text{if}\ x\ge\lambda t. \end{cases} \end{aligned} $$
(6.14)

Since we are not interested in the trivial case u+ = u−, in the following we always assume

$$\displaystyle \begin{aligned} u_+\neq u_-. \end{aligned}$$

Theorem 6.2.2 (Rankine-Hugoniot Condition)

The following statements are equivalent:

  1. (i)

    the function U defined in (6.14) is a weak solution of (6.11);

  2. (ii)

    the following condition named Rankine-Hugoniot condition holds true, i.e.,

    $$\displaystyle \begin{aligned} f(u_+)-f(u_-)=\lambda (u_+-u_-). \end{aligned} $$
    (6.15)

Proof

Let \(\varphi \in C^\infty ((0,\infty )\times \mathbb {R})\) be a test function with compact support. Consider the vector field

$$\displaystyle \begin{aligned} F=(U\varphi,f(U)\varphi) \end{aligned}$$

and the domains

$$\displaystyle \begin{aligned} \Omega_+=\{x>\lambda t\},\qquad \Omega_-=\{x<\lambda t\}. \end{aligned}$$

The definition of U gives

$$\displaystyle \begin{aligned} (t,x)\in\Omega_+&\Longrightarrow \begin{cases} F(t,x)=(u_+\varphi,f(u_+)\varphi),&{}\\ \text{div}_{(t,x)}(F)(t,x)=u_+\partial_t\varphi+f(u_+)\partial_x\varphi,&{} \end{cases}\\ (t,x)\in\Omega_-&\Longrightarrow \begin{cases} F(t,x)=(u_-\varphi,f(u_-)\varphi),&{}\\ \text{div}_{(t,x)}(F)(t,x)=u_-\partial_t\varphi+f(u_-)\partial_x\varphi.&{} \end{cases} \end{aligned} $$

Since

$$\displaystyle \begin{aligned} \partial\Omega_+=\partial\Omega_-=\{x=\lambda t\}, \end{aligned}$$

and the outer normals to Ω+ and Ω− are (λ, −1) and (−λ, 1), we have

$$\displaystyle \begin{aligned} \int_0^\infty\!\!\!\int_{\mathbb{R}} &(U\partial_t\varphi+f(U)\partial_x\varphi)dtdx\\ &=\int\!\!\!\int_{\Omega_+}(u_+\partial_t\varphi+f(u_+)\partial_x\varphi)dtdx+\int\!\!\!\int_{\Omega_-}(u_-\partial_t\varphi+f(u_-)\partial_x\varphi)dtdx\\ &=\int\!\!\!\int_{\Omega_+}\text{div}(F)dtdx+\int\!\!\!\int_{\Omega_-}\text{div}(F)dtdx\\ &=\int_0^\infty(u_+,f(u_+))\cdot(\lambda,-1)\varphi(t,\lambda t)dt{+}\int_0^\infty(u_-,f(u_-))\cdot(-\lambda,1)\varphi(t,\lambda t)dt\\ &=\left[\lambda (u_+-u_-)-\left(f(u_+)-f(u_-)\right)\right]\int_0^\infty\!\!\!\varphi(t,\lambda t)dt. \end{aligned} $$

Therefore

$$\displaystyle \begin{aligned} \int_0^\infty\!\!\!\int_{\mathbb{R}} (U\partial_t\varphi+f(U)&\partial_x\varphi)dtdx=0,\quad \forall \varphi\\ &\Updownarrow\\ f(u_+)-f(u_-)&=\lambda (u_+-u_-), \end{aligned} $$

that concludes the proof. □

Remark 6.2.1

The Rankine-Hugoniot condition (6.15) is a scalar equation that links the right and left states u+, u− and the speed λ of the shock. In particular, if f is Lipschitz continuous with Lipschitz constant L, (6.15) gives

$$\displaystyle \begin{aligned} |\lambda|=\frac{| f(u_+)-f(u_-)|}{|u_+-u_-|}\le L. \end{aligned}$$

In other terms, the speed of propagation of the singularities is finite and varies between − L and L.
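
As a small illustration of the remark, here is a helper computing the speed prescribed by (6.15); for the Burgers flux the Rankine-Hugoniot speed is the arithmetic mean of the two states. This is only an illustrative sketch.

```python
def rh_speed(f, u_minus, u_plus):
    """Shock speed prescribed by the Rankine-Hugoniot condition (6.15)."""
    return (f(u_plus) - f(u_minus)) / (u_plus - u_minus)

burgers = lambda u: 0.5 * u**2
print(rh_speed(burgers, 0.0, 1.0))    # 0.5
print(rh_speed(burgers, 2.0, -1.0))   # 0.5: the mean of the two states
```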

Theorem 6.2.3

Let \(u:[0,\infty )\times \mathbb {R}\to \mathbb {R}\), τ > 0, \(\xi \in \mathbb {R}\), and let \(U:[0,\infty )\times \mathbb {R}\longrightarrow \mathbb {R}\) be as defined in (6.14). If

  1. (i)

    \(u\in L^\infty _{loc}((0,\infty )\times \mathbb {R});\)

  2. (ii)

    u is a weak solution of (6.11);

  3. (iii)

    \(\displaystyle \lim \limits _{\varepsilon \to 0}\frac {1}{\varepsilon ^2}\int _{-\varepsilon }^\varepsilon \int _{-\varepsilon }^\varepsilon |u(t+\tau ,x+\xi )-U(t,x)|dtdx=0;\)

then (6.15) holds.

Proof

For every μ > 0 define

$$\displaystyle \begin{aligned} u_\mu(t,x)=u(\mu t+\tau,\mu x +\xi),\qquad t\ge-\frac{\tau}{\mu},\,x\in\mathbb{R}. \end{aligned}$$

Since u is a weak solution of (6.11), so is uμ. We claim that

$$\displaystyle \begin{aligned} u_\mu\longrightarrow U,\qquad f(u_\mu)\longrightarrow f(U),\qquad \text{in}\ L^1_{loc}((0,\infty)\times\mathbb{R}),\ \text{as}\ \mu\to0. \end{aligned} $$
(6.16)

Let R > 0 and \(\mu <\frac {\tau }{R}\). Since

$$\displaystyle \begin{aligned} U(\mu t,\mu x)=U(t,x),\qquad t>0,\,x\in\mathbb{R}, \end{aligned}$$

we get

$$\displaystyle \begin{aligned} \int_{-R}^R\int_{-R}^R& |u_\mu(t,x)-U(t,x)|dtdx\\ &=\frac{1}{\mu^2}\int_{-R\mu}^{R\mu}\int_{-R\mu}^{R\mu} |u(t+\tau,x+\xi)-U(t,x)|dtdx\longrightarrow 0, \end{aligned} $$

namely

$$\displaystyle \begin{aligned} u_\mu\longrightarrow U,\qquad \text{in}\ L^1((-R,R)\times(-R,R)),\ \text{as}\ \mu\to0. \end{aligned}$$

Therefore the Dominated Convergence Theorem gives (6.16). Finally, Theorem 6.2.1 and (6.16) imply that U is a weak solution of (6.11). Then, the claim follows from Theorem 6.2.2. □

6.2.3 Nonuniqueness of Weak Solutions

In this section we show with a simple example that the Cauchy problem (6.11)–(6.12) may admit more than one weak solution.

Let us consider the Riemann problem for the Burgers equation

$$\displaystyle \begin{aligned} \partial_t u+\partial_x\left(\frac{u^2}{2}\right)=0,\qquad u(0,x)=\begin{cases}0,&\text{if}\ x< 0,\\ 1,&\text{if}\ x\ge 0.\end{cases} \end{aligned} $$
(6.17)

Thanks to Theorem 6.2.2 we know that the function

$$\displaystyle \begin{aligned} U(t,x)= \begin{cases} 0,&\quad \text{if}\ x<t/2,\\ 1,&\quad \text{if}\ x\ge t/2, \end{cases} \end{aligned}$$

is a weak solution of (6.17).

Consider the function

$$\displaystyle \begin{aligned} v(t,x)=\begin{cases}0,&\text{if}\ x< 0,\\ x/t,&\text{if}\ 0\le x< t,\\ 1,&\text{if}\ x\ge t.\end{cases} \end{aligned}$$

Since for every test function \(\varphi \in C^\infty (\mathbb {R}^2)\) with compact support

$$\displaystyle \begin{aligned} \int_0^\infty\!\!\!\int_{\mathbb{R}}&\left(v\partial_t\varphi+\frac{v^2}{2}\partial_x\varphi\right)dtdx+\int_0^\infty\!\!\! \varphi(0,x)dx\\ &=\int_0^\infty\!\!\left(\int_x^\infty\frac{x}{t}\partial_t\varphi dt\right)dx+\int_0^\infty\!\!\left(\int_0^t\frac{x^2}{2t^2}\partial_x\varphi dx\right)dt\\ &+\int_0^\infty\!\!\left(\int_0^x\partial_t\varphi dt\right)dx+\int_0^\infty\!\!\left(\int_t^\infty\partial_x\varphi dx\right)dt+\int_0^\infty\!\!\! \varphi(0,x)dx=0, \end{aligned} $$

then v is also a weak solution of (6.17).

6.2.4 Entropy Conditions

We showed in the previous section that the Cauchy problem (6.11)–(6.12) may admit more than one weak solution. In this section we introduce some additional conditions that will select the unique “physically meaningful” solution within the family of the weak solutions. Those conditions are inspired by the Second Law of Thermodynamics.

Definition 6.2.2

Let \(\eta ,q:\mathbb {R}\to \mathbb {R}\) be functions. We say that η is an entropy associated to (6.11) with flux q if

$$\displaystyle \begin{aligned} \eta,q\in C^2(\mathbb{R}),\qquad \eta''\ge0,\qquad \eta' f'=q'. \end{aligned}$$

Remark 6.2.2

If u is a smooth solution of (6.11) and η is an entropy with flux q we have

$$\displaystyle \begin{aligned} \partial_t \eta(u)+\partial_x q(u)=0. \end{aligned}$$

Indeed

$$\displaystyle \begin{aligned} \partial_t \eta(u)+\partial_x q(u)&=\eta'(u)\partial_t u+q'(u)\partial_x u\\ &=\eta'(u)\left(\partial_t u+f'(u)\partial_x u\right)\\ &=\eta'(u)\left(\partial_t u+\partial_x f(u)\right)=0. \end{aligned} $$

Definition 6.2.3

A function \(u:[0,\infty )\times \mathbb {R}\to \mathbb {R}\) is an entropy solution of the Cauchy problem (6.11) and (6.12), if

  1. (i)

    \(u\in L^\infty _{loc}((0,\infty )\times \mathbb {R})\);

  2. (ii)

    for every entropy η with flux q, u satisfies

    $$\displaystyle \begin{aligned} \partial_t \eta(u)+\partial_x q(u)\le 0,\qquad \eta(u(0,\cdot))=\eta(u_0), \end{aligned} $$
    (6.18)

    in the sense of distributions in \([0,\infty )\times \mathbb {R}\), namely for every nonnegative test function \(\varphi \in C^\infty (\mathbb {R}^2)\) with compact support we have

    $$\displaystyle \begin{aligned} \int_0^\infty\!\!\!\int_{\mathbb{R}}\left(\eta(u)\partial_t\varphi+q(u)\partial_x\varphi\right)dtdx+\int_{\mathbb{R}} \eta(u_0(x))\varphi(0,x)dx\ge0. \end{aligned} $$
    (6.19)

We say that u is an entropy solution of the conservation law (6.11) if (i) holds and

  1. (iii)

    for every entropy η with flux q, u satisfies

    $$\displaystyle \begin{aligned} \partial_t \eta(u)+\partial_x q(u)\le 0 \end{aligned} $$
    (6.20)

    in the sense of distributions in \((0,\infty )\times \mathbb {R}\), namely for every nonnegative test function \(\varphi \in C^\infty ((0,\infty )\times \mathbb {R})\) with compact support we have

    $$\displaystyle \begin{aligned} \int_0^\infty\!\!\!\int_{\mathbb{R}}\left(\eta(u)\partial_t\varphi+q(u)\partial_x\varphi\right)dtdx\ge0. \end{aligned}$$

The apparent contradiction of the above definitions with the Second Law of Thermodynamics is resolved by noticing that the physical entropies are concave functions, while the ones we are using here are convex.

As a direct consequence of the Dominated Convergence Theorem we can state the following result.

Theorem 6.2.4

Let {uε}ε>0 and u be functions defined on \([0,\infty )\times \mathbb {R}\) with values in \(\mathbb {R}\). If

  1. (i)

    there exists M > 0 such that\(\left \|u_\varepsilon \right \|{ }_{L^\infty ((0,\infty )\times \mathbb {R})}\le M\)for every ε > 0;

  2. (ii)

    \(u\in L^\infty ((0,\infty )\times \mathbb {R});\)

  3. (iii)

    uε → u in\(L^1_{loc}((0,\infty )\times \mathbb {R})\)as ε → 0;

  4. (iv)

every uε is an entropy solution of (6.11);

then

$$\displaystyle \begin{aligned} u\ \mathit{\mbox{is an entropy solution of (6.11).}} \end{aligned}$$

A fundamental class of entropies are the ones introduced by Kružkov [12]

$$\displaystyle \begin{aligned} \eta(\xi)=|\xi-c|,\qquad q(\xi)=\mathrm{sign}\left(\xi-c\right)(f(\xi)-f(c)),\qquad \xi\in\mathbb{R}, \end{aligned} $$
(6.21)

for every constant \(c\in \mathbb {R}\).

Since the Kružkov entropies are not C2 the following theorem is needed.

Theorem 6.2.5

Let \(u:[0,\infty )\times \mathbb {R}\to \mathbb {R}\) be a function. If

$$\displaystyle \begin{aligned} u\in L^\infty_{loc}((0,\infty)\times\mathbb{R}), \end{aligned}$$

then the following statements are equivalent

  1. (i)

    u is an entropy solution of (6.11)(6.12);

  2. (ii)

    for every \(c\in \mathbb {R}\) and every nonnegative test function \(\varphi \in C^\infty (\mathbb {R}^2)\) with compact support

    $$\displaystyle \begin{aligned} \int_0^\infty\!\!\!\int_{\mathbb{R}}\left(|u-c|\partial_t\varphi+\mathrm{sign}\left(u-c\right)(f(u)-f(c))\partial_x\varphi\right)dtdx&\\ +\int_{\mathbb{R}} |u_0(x)-c|\varphi(0,x)dx&\ge0. \end{aligned} $$
    (6.22)

Remark 6.2.3

The set of the entropies

$$\displaystyle \begin{aligned} \{\eta\in C^2(\mathbb{R});\eta \,\text{convex}\} \end{aligned}$$

is an infinite dimensional manifold. On the other hand the set of the Kružkov entropies

$$\displaystyle \begin{aligned} \{|\cdot\,-c|;c \in\mathbb{R}\} \end{aligned}$$

is a one-dimensional manifold. Therefore the previous theorem says that, in order to verify that a function is an entropy solution of (6.11), we can use just the Kružkov entropies, and the “amount” of inequalities to verify is “much lower” than the one required in Definition 6.2.3.

Proof (of Theorem 6.2.5)

Let us start by proving (i) ⇒ (ii). Let \(c\in \mathbb {R}\) and \(\varphi \in C^\infty (\mathbb {R}^2)\) be a nonnegative test function with compact support. For every \(n\in \mathbb {N}\setminus \{0\}\), consider the functions

$$\displaystyle \begin{aligned} \eta_n(\xi)=\sqrt{(\xi-c)^2+\frac 1n},\qquad q_n(\xi)=\int_c^\xi\frac{\sigma-c}{\sqrt{(\sigma-c)^2+\frac 1n}} f'(\sigma)d\sigma,\qquad \xi\in\mathbb{R}. \end{aligned}$$

Since

$$\displaystyle \begin{aligned} \eta_n\in& C^2(\mathbb{R}),\\ \eta_n^{\prime}(\xi)&=\frac{\xi-c}{\sqrt{(\xi-c)^2+\frac 1n}},\\ \eta_n^{\prime\prime}(\xi)&=\frac{1}{n\left((\xi-c)^2+\frac 1n\right)^{\frac 32}}\ge0,\\ q_n^{\prime}&=\eta_n^{\prime}f', \end{aligned} $$

we have

$$\displaystyle \begin{aligned} \int_0^\infty\!\!\!\int_{\mathbb{R}}\left(\eta_n(u)\partial_t\varphi+q_n(u)\partial_x\varphi\right)dtdx+\int_{\mathbb{R}} \eta_n(u_0(x))\varphi(0,x)dx\ge0. \end{aligned}$$

As n →∞, thanks to the Dominated Convergence Theorem, we get (6.22).

Let us prove (ii) ⇒ (i). Let η be an entropy with flux q and \(\varphi \in C^\infty (\mathbb {R}^2)\) be a nonnegative test function with compact support. Define

$$\displaystyle \begin{aligned} M=\sup_{\text{supp}(\varphi)}|u|. \end{aligned}$$

We approximate η′ with piecewise constant functions in [−M, M]. For every \(n\in \mathbb {N}\setminus \{0\}\) consider

$$\displaystyle \begin{aligned} \eta_n(\xi)&=\int_{-M}^\xi k_n(\sigma)d\sigma+\eta(-M),\\ k_n(\sigma)&=\sum_{j=0}^{2n-1}\eta'\left(\frac{M}{n}j-M\right)\chi_{\left[\frac{M}{n}j-M,\frac{M}{n}(j+1)-M\right)}(\sigma),\\ q_n(\xi)&=\int_{-M}^\xi f'(\sigma) k_n(\sigma)d\sigma. \end{aligned} $$

We have

$$\displaystyle \begin{aligned} k_n(\sigma)=\sum_{j=0}^{n-1}a_j\left[\mathrm{sign}\left(\sigma-b_j\right)+c_j\right]\chi_{\left[2\frac{M}{n}j-M,2\frac{M}{n}(j+1)-M\right]}(\sigma), \end{aligned}$$

where

$$\displaystyle \begin{aligned} a_j&=\frac{1}{2}\left(\eta'\left(\frac{M}{n}(2j+1)-M\right)-\eta'\left(\frac{M}{n}2j-M\right)\right),\\ b_j&=\frac{M}{n}(2j+1)-M,\\ c_j&=\frac{1}{2}\left(\eta'\left(\frac{M}{n}(2j+1)-M\right)+\eta'\left(\frac{M}{n}2j-M\right)\right). \end{aligned} $$

Since η″ ≥ 0 we have aj ≥ 0 and then

$$\displaystyle \begin{aligned} \int_0^\infty\!\!\!\int_{\mathbb{R}}\left(\eta_n(u)\partial_t\varphi+q_n(u)\partial_x\varphi\right)dtdx+\int_{\mathbb{R}} \eta_n(u_0(x))\varphi(0,x)dx\ge0. \end{aligned}$$

As n →∞, thanks to the Dominated Convergence Theorem, we get (6.19). □

It is clear that a smooth solution is both an entropy and a weak solution (see Remark 6.2.2). We conclude this section by proving that entropy solutions are weak solutions. In the next section we will show that there are weak solutions that are not entropy solutions.

Theorem 6.2.6

Let \(u:[0,\infty )\times \mathbb {R}\to \mathbb {R}\) be a function. If

$$\displaystyle \begin{aligned} u\in L^\infty_{loc}((0,\infty)\times\mathbb{R}) \end{aligned}$$

and u is an entropy solution of (6.11)(6.12), then u is a weak solution of (6.11)(6.12).

Proof

Let \(\varphi \in C^2(\mathbb {R}^2)\) be a test function with compact support. Define

$$\displaystyle \begin{aligned} \varphi_+=\max\{\varphi,0\},\qquad \varphi_-=\max\{-\varphi,0\}, \end{aligned}$$

clearly

$$\displaystyle \begin{aligned} \varphi=\varphi_+-\varphi_-,\qquad \varphi_+,\,\varphi_-\ge0. \end{aligned}$$

Using a smooth approximation of φ± and then passing to the limit we get

$$\displaystyle \begin{aligned} \int_0^\infty\!\!\!\int_{\mathbb{R}}\left(|u-c|\partial_t\varphi_\pm+\mathrm{sign}\left(u-c\right)(f(u)-f(c))\partial_x\varphi_\pm\right)dtdx&\\ +\int_{\mathbb{R}} |u_0(x)-c|\varphi_\pm(0,x)dx&\ge0, \end{aligned} $$
(6.23)

for every \(c\in \mathbb {R}\).

Define

$$\displaystyle \begin{aligned} M=\sup_{\text{supp}(\varphi)}|u|. \end{aligned}$$

Choosing c = M + 1 in (6.23) we get

$$\displaystyle \begin{aligned} \int_0^\infty\!\!\!\int_{\mathbb{R}}\left((M+1-u)\partial_t\varphi_\pm+(f(M+1)-f(u))\partial_x\varphi_\pm\right)dtdx&\\ +\int_{\mathbb{R}} (M+1-u_0(x))\varphi_\pm(0,x)dx&\ge0, \end{aligned} $$

and integrating by parts (since M + 1 is a classical solution of (6.11)) we get

$$\displaystyle \begin{aligned} \int_0^\infty\!\!\!\int_{\mathbb{R}}\left(u\partial_t\varphi_\pm+f(u)\partial_x\varphi_\pm\right)dtdx+\int_{\mathbb{R}} u_0(x)\varphi_\pm(0,x)dx\le0. \end{aligned} $$
(6.24)

On the other hand, if we choose c = −M − 1 in (6.23) we get

$$\displaystyle \begin{aligned} \int_0^\infty\!\!\!\int_{\mathbb{R}}\left((u+M+1)\partial_t\varphi_\pm+(f(u)-f(-M-1))\partial_x\varphi_\pm\right)dtdx&\\ +\int_{\mathbb{R}} (u_0(x)+M+1)\varphi_\pm(0,x)dx\ge& 0, \end{aligned} $$

and integrating by parts (since − M − 1 is a classical solution of (6.11)) we get

$$\displaystyle \begin{aligned} \int_0^\infty\!\!\!\int_{\mathbb{R}}\left(u\partial_t\varphi_\pm+f(u)\partial_x\varphi_\pm\right)dtdx+\int_{\mathbb{R}} u_0(x)\varphi_\pm(0,x)dx\ge0. \end{aligned} $$
(6.25)

Adding (6.24) and (6.25) we get (6.19). □

6.2.5 Entropic Shocks

In Sect. 6.2.2 we introduced the shock U (see (6.14)) and proved that it is a weak solution of (6.11) if and only if the Rankine-Hugoniot Condition (6.15) holds. In this section we prove a similar result giving a necessary and sufficient condition for the shock to be an entropy solution.

Theorem 6.2.7

The following statements are equivalent:

  1. (i)

    the function U defined in (6.14) is an entropy solution of (6.11);

  2. (ii)

    the Rankine-Hugoniot Condition holds true, i.e.,

    $$\displaystyle \begin{aligned} f(u_+)-f(u_-)=\lambda (u_+-u_-), \end{aligned} $$
    (6.26)

    and

    $$\displaystyle \begin{aligned} \begin{cases} f(\theta u_++(1-\theta)u_-)\ge \theta f(u_+)+(1-\theta)f(u_-),&\mathit{\text{if}}\ u_-<u_+,\\ f(\theta u_++(1-\theta)u_-)\le \theta f(u_+)+(1-\theta)f(u_-),&\mathit{\text{if}}\ u_->u_+, \end{cases} \end{aligned} $$
    (6.27)

    for every 0 < θ < 1.

The inequalities in (6.27) have a simple geometric interpretation. If u− < u+ the graph of f has to lie above the segment connecting (u−, f(u−)) and (u+, f(u+)), which is always true if f is concave. On the other hand, if u− > u+ the graph of f has to lie below the segment connecting (u+, f(u+)) and (u−, f(u−)), which is always true if f is convex. In particular, if f is concave the entropic shocks are upward and if f is convex they are downward.

Moreover, we can rewrite (6.27) in the following way

$$\displaystyle \begin{aligned} \frac{f(u_*)-f(u_-)}{u_*-u_-}\ge\frac{f(u_+)-f(u_*)}{u_+-u_*}, \end{aligned} $$
(6.28)

for every \(\min \{u_+,u_-\}<u_*<\max \{u_+,u_-\}.\)

Indeed, if u− < u+ (in the case u− > u+ the same argument works) and u∗ = θu+ + (1 − θ)u− for some 0 < θ < 1 we have

$$\displaystyle \begin{aligned} &\frac{f(u_*)-f(u_-)}{u_*-u_-}-\frac{f(u_+)-f(u_*)}{u_+-u_*}\\ &\quad =\frac{f(u_*)(u_+-u_-)}{(u_*-u_-)(u_+-u_*)}-\frac{f(u_-)}{u_*-u_-}-\frac{f(u_+)}{u_+-u_*}\\ &\quad \ge \frac{(\theta f(u_+)+(1-\theta)f(u_-))(u_+-u_-)}{(u_*-u_-)(u_+-u_*)}-\frac{f(u_-)}{u_*-u_-}-\frac{f(u_+)}{u_+-u_*}\\ &\quad =f(u_+)\frac{\theta(u_+-u_-)-(u_*-u_-)}{(u_*-u_-)(u_+-u_*)}+f(u_-)\frac{(1-\theta)(u_+-u_-)-(u_+-u_*)}{(u_*-u_-)(u_+-u_*)}=0. \end{aligned} $$

Let us observe that (6.28) represents a stability condition. Indeed, if u− < u∗ < u+ we can perturb the shock (u−, u+) and split it into the two shocks (u−, u∗), (u∗, u+). The two quantities in (6.28) give the speeds of these two shocks: the one on the left is faster than the one on the right. Then the two waves interact in finite time and generate again the initial shock (u−, u+) (see Fig. 6.3).

Fig. 6.3
figure 3

Shock wave (u−, u+) with u− < u+

Lemma 6.2.1

The following statements are equivalent:

  1. (i)

    the function U defined in (6.14) is an entropy solution of (6.11);

  2. (ii)

for every entropy η with flux q the following inequality holds

    $$\displaystyle \begin{aligned} \lambda (\eta(u_+)-\eta(u_-))\ge q(u_+)-q(u_-); \end{aligned} $$
    (6.29)
  3. (iii)

    for every constant \(c\in \mathbb {R}\)

    $$\displaystyle \begin{aligned} \lambda (|u_+-c|&-|u_--c|)\\ \ge& \mathrm{sign}\left(u_+-c\right)(f(u_+)-f(c))\\ &-\mathrm{sign}\left(u_--c\right)(f(u_-)-f(c)). \end{aligned} $$
    (6.30)

Proof

Let \(\varphi \in C^\infty ((0,\infty )\times \mathbb {R})\) be a nonnegative test function with compact support and η be an entropy with flux q. Consider the vector field

$$\displaystyle \begin{aligned} G=(\eta(U)\varphi,q(U)\varphi) \end{aligned}$$

and the domains

$$\displaystyle \begin{aligned} \Omega_+=\{x>\lambda t\},\qquad \Omega_-=\{x<\lambda t\}.\end{aligned} $$

The definition of U gives

$$\displaystyle \begin{aligned} (t,x)\in\Omega_+&\Longrightarrow \begin{cases} G(t,x)=(\eta(u_+)\varphi,q(u_+)\varphi),&{}\\ \text{div}_{(t,x)}(G)(t,x)=\eta(u_+)\partial_t\varphi+q(u_+)\partial_x\varphi,&{} \end{cases}\\ (t,x)\in\Omega_-&\Longrightarrow \begin{cases} G(t,x)=(\eta(u_-)\varphi,q(u_-)\varphi),&{}\\ \text{div}_{(t,x)}(G)(t,x)=\eta(u_-)\partial_t\varphi+q(u_-)\partial_x\varphi.&{} \end{cases} \end{aligned} $$

Since

$$\displaystyle \begin{aligned} \partial\Omega_+=\partial\Omega_-=\{x=\lambda t\},\end{aligned} $$

and the outer normals to Ω+ and Ω− are (λ, −1) and (−λ, 1), we have

$$\displaystyle \begin{aligned} \int_0^\infty\!\!\!\int_{\mathbb{R}} &(\eta(U)\partial_t\varphi+q(U)\partial_x\varphi)dtdx\\ &=\int\!\!\!\int_{\Omega_+}(\eta(u_+)\partial_t\varphi+q(u_+)\partial_x\varphi)dtdx+\int\!\!\!\int_{\Omega_-}(\eta(u_-)\partial_t\varphi+q(u_-)\partial_x\varphi)dtdx\\ &=\int\!\!\!\int_{\Omega_+}\text{div}(G)dtdx+\int\!\!\!\int_{\Omega_-}\text{div}(G)dtdx\\ &=\int_0^\infty(\eta(u_+),q(u_+))\cdot(\lambda,-1)\varphi(t,\lambda t)dt+\int_0^\infty(\eta(u_-),q(u_-))\cdot(-\lambda,1)\varphi(t,\lambda t)dt\\ &=\left[\lambda (\eta(u_+)-\eta(u_-))-\left(q(u_+)-q(u_-)\right)\right]\int_0^\infty\!\!\!\varphi(t,\lambda t)dt. \end{aligned} $$

Therefore

$$\displaystyle \begin{aligned} \int_0^\infty\!\!\!\int_{\mathbb{R}} (\eta(U)\partial_t\varphi+q(U)&\partial_x\varphi)dtdx\ge0,\quad \forall \varphi\\ &\Updownarrow\\ \lambda (\eta(u_+)-\eta(u_-))&\ge q(u_+)-q(u_-). \end{aligned} $$

Therefore we have proved that (i) ⇔ (ii). The same argument works for (i) ⇔ (iii). □

Proof (of Theorem 6.2.7)

We begin by proving that (i) ⇒ (ii). Since U is an entropy solution of (6.11), Theorem 6.2.2 gives (6.26). We have to prove (6.27). We distinguish two cases. Assume first u− < u+. Let 0 < θ < 1 be fixed. We choose

$$\displaystyle \begin{aligned} c=\theta u_++(1-\theta)u_-. \end{aligned}$$

Since

$$\displaystyle \begin{aligned} u_-<c<u_+, \end{aligned}$$

(6.30) gives

$$\displaystyle \begin{aligned} f(u_+)+f(u_-)-2f(c)\le\lambda (u_++u_--2c). \end{aligned} $$
(6.31)

Using (6.26) and (6.31)

$$\displaystyle \begin{aligned} 2&f(\theta u_++(1-\theta)u_-)=2f(c)\\ &\ge f(u_+)+f(u_-)-\lambda (u_++u_--2c)\\ &=f(u_+)+f(u_-)-\lambda (u_++u_--2(\theta u_++(1-\theta)u_-))\\ &=f(u_+)+f(u_-)-\lambda(1-2\theta) (u_+-u_-)\\ &=f(u_+)+f(u_-)-(1-2\theta) (f(u_+)-f(u_-))\\ &=2(\theta f(u_+)+(1-\theta)f(u_-)). \end{aligned} $$

Since the case u+ < u− is analogous, (6.27) is proved.

We now prove that (ii) ⇒ (i). It is enough to verify that (6.30) holds for every \(c\in \mathbb {R}\). We distinguish four cases.

If

$$\displaystyle \begin{aligned} c\le\min\{u_+,u_-\}, \end{aligned}$$

(6.26) gives

$$\displaystyle \begin{aligned} \lambda &(|u_+-c|-|u_--c|)=\lambda (u_+-u_-)\\ &=f(u_+)-f(u_-)=(f(u_+)-f(c))-(f(u_-)-f(c))\\ &= \mathrm{sign}\left(u_+-c\right)(f(u_+)-f(c))-\mathrm{sign}\left(u_--c\right)(f(u_-)-f(c)). \end{aligned} $$

If

$$\displaystyle \begin{aligned} c\ge\max\{u_+,u_-\}, \end{aligned}$$

the same argument applies.

If

$$\displaystyle \begin{aligned} u_-<c<u_+, \end{aligned}$$

there exists 0 < θ < 1 such that

$$\displaystyle \begin{aligned} c=\theta u_++(1-\theta)u_-. \end{aligned}$$

(6.27) guarantees

$$\displaystyle \begin{aligned} f(c)\ge \theta f(u_+)+(1-\theta)f(u_-), \end{aligned}$$

then using (6.26)

$$\displaystyle \begin{aligned} \lambda &(|u_+-c|-|u_--c|)=\lambda (u_++u_--2c)\\ &=\lambda(1-2\theta)(u_+-u_-)=(1-2\theta)(f(u_+)-f(u_-))\\ &=f(u_+)+f(u_-)-2(\theta f(u_+)+(1-\theta)f(u_-))\\ &\ge f(u_+)+f(u_-)-2f(c)\\ &= \mathrm{sign}\left(u_+-c\right)(f(u_+)-f(c))-\mathrm{sign}\left(u_--c\right)(f(u_-)-f(c)). \end{aligned} $$

Finally, if

$$\displaystyle \begin{aligned} u_+<c<u_-, \end{aligned}$$

the same argument works. Then (6.30) holds for every \(c\in \mathbb {R}\). □
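
Condition (6.27) is straightforward to test numerically for a given flux and pair of states. The following sketch samples the chord and the graph of f (the sampling resolution is arbitrary); applied to the Burgers flux it accepts the decreasing jump and rejects the increasing one, consistently with the discussion above.

```python
import numpy as np

def is_entropic_shock(f, um, up, n=1000):
    """Test (6.27): the chord joining (um, f(um)) and (up, f(up)) must lie
    below the graph of f when um < up, and above it when um > up."""
    theta = np.linspace(0.0, 1.0, n + 2)[1:-1]          # interior points only
    chord = theta * f(up) + (1 - theta) * f(um)
    graph = f(theta * up + (1 - theta) * um)
    return bool(np.all(graph >= chord)) if um < up else bool(np.all(graph <= chord))

burgers = lambda u: 0.5 * u**2
print(is_entropic_shock(burgers, 1.0, 0.0))   # True: downward jump, convex flux
print(is_entropic_shock(burgers, 0.0, 1.0))   # False: the upward jump is not entropic
```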

Theorem 6.2.8

Let \(u:[0,\infty )\times \mathbb {R}\to \mathbb {R},\,\tau >0,\,\xi \in \mathbb {R}.\) If

  1. (i)

    \(u\in L^\infty _{loc}((0,\infty )\times \mathbb {R});\)

  2. (ii)

    u is an entropy solution of (6.11);

  3. (iii)

    \(\displaystyle \lim \limits _{\varepsilon \to 0}\frac {1}{\varepsilon ^2}\int _{-\varepsilon }^\varepsilon \int _{-\varepsilon }^\varepsilon |u(t+\tau ,x+\xi )-U(t,x)|dtdx=0;\)

then (6.26) and (6.27) hold.

Proof

For every μ > 0 define

$$\displaystyle \begin{aligned} u_\mu(t,x)=u(\mu t+\tau,\mu x +\xi),\qquad t\ge-\frac{\tau}{\mu},\,x\in\mathbb{R}. \end{aligned}$$

Since u is an entropy solution of (6.11), so is uμ. We claim that

$$\displaystyle \begin{aligned} u_\mu\longrightarrow U,\qquad f(u_\mu)\longrightarrow f(U),\qquad \text{in}\ L^1_{loc}((0,\infty)\times\mathbb{R}),\ \text{as}\ \mu\to0. \end{aligned} $$
(6.32)

Let R > 0 and μ < τ∕R. Since

$$\displaystyle \begin{aligned} U(\mu t,\mu x)=U(t,x),\qquad t>0,\,x\in\mathbb{R}, \end{aligned}$$

we get

$$\displaystyle \begin{aligned} \int_{-R}^R\int_{-R}^R& |u_\mu(t,x)-U(t,x)|dtdx\\ &=\frac{1}{\mu^2}\int_{-R\mu}^{R\mu}\int_{-R\mu}^{R\mu} |u(t+\tau,x+\xi)-U(t,x)|dtdx\longrightarrow 0, \end{aligned} $$

namely

$$\displaystyle \begin{aligned} u_\mu\longrightarrow U,\qquad \text{in}\ L^1((-R,R)\times(-R,R)),\ \text{as}\ \mu\to0. \end{aligned}$$

Therefore the Dominated Convergence Theorem gives (6.32). Finally, Theorem 6.2.4 and (6.32) imply that U is an entropy solution of (6.11). Then, the claim follows from Theorem 6.2.7. □

Example 6.2.1

The function

$$\displaystyle \begin{aligned} u(t,x)= \begin{cases} -\frac 23\left(t+\sqrt{3x+t^2}\right)& \quad \text{if}\ 4x+t^2>0,\\ 0& \quad \text{if}\ 4x+t^2<0 \end{cases} \end{aligned} $$
(6.33)

is an entropy solution of the Cauchy problem

$$\displaystyle \begin{aligned} \begin{cases} u_t+\left(\frac{u^2}{2}\right)_x=0,& \quad t>0,\>x\in\mathbb{R},\\ {} u(0,x)= \begin{cases} -\frac{2}{\sqrt{3}}\sqrt{x}& \quad \text{if}\ x>0,\\ 0& \quad \text{if}\ x<0. \end{cases} &{} \end{cases} \end{aligned} $$
(6.34)

Introduce the notation

$$\displaystyle \begin{aligned} u_-(t,x)=0,\quad u_+(t,x)=-\frac 23\left(t+\sqrt{3x+t^2}\right),\quad \lambda(t)=-\frac{t^2}{4},\quad f(\xi)=\frac{\xi^2}{2}. \end{aligned}$$

Since

$$\displaystyle \begin{aligned} \partial_x u_+(t,x)&=-\frac{1}{\sqrt{3x+t^2}},\\ \partial_t u_+(t,x)&=-\frac{2}{3}\left(1+\frac{t}{\sqrt{3x+t^2}}\right),\\ u_+(t,x)\partial_x u_+(t,x)&=\frac{2}{3}\left(\frac{t}{\sqrt{3x+t^2}}+1\right) \end{aligned} $$

u− and u+ are classical solutions of the Burgers equation.

We have only to verify that (6.26) and (6.27) hold along the curve x = λ(t). Since

$$\displaystyle \begin{aligned} &u_-(t,\lambda(t))=0,\\ &u_+(t,\lambda(t))=-t\le0,\\ &f(u_+(t,\lambda(t)))-f(u_-(t,\lambda(t)))-\lambda'(t)(u_+(t,\lambda(t))-u_-(t,\lambda(t)))=0, \end{aligned} $$

the Rankine-Hugoniot Condition is satisfied and the jump is downward (note that f is convex).
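
A numerical cross-check of Example 6.2.1 is straightforward (the sample points and the finite-difference step are our choices, not from the text):

```python
import numpy as np

u_plus = lambda t, x: -2.0/3.0 * (t + np.sqrt(3*x + t**2))
f = lambda u: 0.5 * u**2
eps = 1e-6

# u_plus solves the Burgers equation classically where 4x + t^2 > 0
for t, x in [(1.0, 0.5), (2.0, 1.0), (3.0, 2.0)]:
    ut = (u_plus(t + eps, x) - u_plus(t - eps, x)) / (2 * eps)
    ux = (u_plus(t, x + eps) - u_plus(t, x - eps)) / (2 * eps)
    print(ut + u_plus(t, x) * ux)                    # ~0

# Rankine-Hugoniot along the shock curve x = -t^2/4, whose speed is -t/2
for t in [1.0, 2.0, 3.0]:
    lam = -t**2 / 4
    up, um = u_plus(t, lam), 0.0
    print(f(up) - f(um) - (-t / 2) * (up - um))      # ~0
```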

6.2.6 Change of Coordinates

One of the features of weak and entropy solutions is that they are not invariant under changes of coordinates (here, a change of the dependent variable). Such changes transform smooth solutions into smooth solutions, but in general they do not transform weak/entropy solutions into weak/entropy solutions. Let us consider the following simple example based on the Burgers equation. We know that the shock

$$\displaystyle \begin{aligned} u(t,x)=\begin{cases}1,&\text{if}\ x<t/2,\\0&\text{if}\ x\ge t/2\end{cases} \end{aligned} $$
(6.35)

provides an entropy solution of the Riemann problem

$$\displaystyle \begin{aligned} \partial_t u+\partial_x \left(\frac{u^2}{2}\right)=0,\qquad u(0,x)=\begin{cases}1,&\text{if}\ x<0,\\0&\text{if}\ x\ge 0.\end{cases} \end{aligned} $$
(6.36)

Consider the new unknown

$$\displaystyle \begin{aligned} v=u^3. \end{aligned}$$

(6.35) and (6.36) become

$$\displaystyle \begin{aligned} v(t,x)=\begin{cases}1,&\text{if}\ x<t/2,\\0&\text{if}\ x\ge t/2\end{cases} \end{aligned} $$
(6.37)

and

$$\displaystyle \begin{aligned} \partial_t v+\partial_x \left(\frac{3}{4}v^{4/3}\right)=0,\qquad v(0,x)=\begin{cases}1,&\text{if}\ x<0,\\0&\text{if}\ x\ge 0.\end{cases} \end{aligned} $$
(6.38)

respectively. Since v does not satisfy the Rankine-Hugoniot condition, it does not provide a weak solution of (6.38).
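
A minimal check of the last statement: the Rankine-Hugoniot speeds of the jump (1, 0) in the two formulations differ, so the same discontinuity cannot satisfy both.

```python
# Rankine-Hugoniot speeds of the jump (1, 0) in the two formulations.
f_u = lambda u: 0.5 * u**2            # flux of the Burgers equation for u
f_v = lambda v: 0.75 * v**(4.0/3.0)   # flux of equation (6.38) for v = u^3

print((f_u(1.0) - f_u(0.0)) / (1.0 - 0.0))   # 0.5
print((f_v(1.0) - f_v(0.0)) / (1.0 - 0.0))   # 0.75: a different speed
```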

6.2.7 Uniqueness and Stability of Entropy Solutions

In this section we prove the classical Kružkov theorem [12]. It has three main consequences: the uniqueness of entropy solutions, the L1-Lipschitz continuous dependence of entropy solutions on the initial condition, and the finite speed of propagation of the waves generated by conservation laws.

Theorem 6.2.9 (Kružkov [12])

Let \(u,v:[0,\infty )\times \mathbb {R}\to \mathbb {R}\) be two entropy solutions of (6.11). If

$$\displaystyle \begin{aligned} u,v\in L^\infty((0,\infty)\times\mathbb{R}), \end{aligned}$$

then

$$\displaystyle \begin{aligned} \int_{-R}^R|u(t_2,x)-v(t_2,x)|dx\le\int_{-R-L(t_2-t_1)}^{R+L(t_2-t_1)}|u(t_1,x)-v(t_1,x)|dx, \end{aligned} $$
(6.39)

for every R > 0 and almost every 0 ≤ t1 ≤ t2, where

$$\displaystyle \begin{aligned} L=\sup_{(0,\infty)\times\mathbb{R}}(|f'(u)|+|f'(v)|). \end{aligned}$$

A fundamental consequence of the Kružkov theorem is the following.

Corollary 1 (Uniqueness and Stability of Entropy Solutions)

Let \(u,v:[0,\infty )\times \mathbb {R}\to \mathbb {R}\) be two entropy solutions of (6.11). If

$$\displaystyle \begin{aligned} u,v&\in L^\infty((0,\infty)\times\mathbb{R}),\\ u(0,\cdot)-v(0,\cdot)&\in L^1(\mathbb{R})\>(\mathit{\text{or}}\ u(0,\cdot),v(0,\cdot)\in L^1(\mathbb{R})), \end{aligned} $$

then

$$\displaystyle \begin{aligned} &u(t,\cdot)-v(t,\cdot)\in L^1(\mathbb{R})\>(\mathit{\text{or}}\ u(t,\cdot),v(t,\cdot)\in L^1(\mathbb{R})),\\ &\left\|u(t,\cdot)-v(t,\cdot)\right\|{}_{L^1(\mathbb{R})}\le\left\|u(0,\cdot)-v(0,\cdot)\right\|{}_{L^1(\mathbb{R})}, \end{aligned} $$
(6.40)

for almost every t ≥ 0. In particular

$$\displaystyle \begin{aligned} u(0,\cdot)=v(0,\cdot)\Longrightarrow u=v. \end{aligned}$$

The proof of the Kružkov theorem is based on the following lemma.

Lemma 6.2.2 (Doubling of Variables)

Let \(u,v:[0,\infty )\times \mathbb {R}\to \mathbb {R}\) be two entropy solutions of (6.11). If

$$\displaystyle \begin{aligned} u,v\in L^\infty((0,\infty)\times\mathbb{R}), \end{aligned}$$

then

$$\displaystyle \begin{aligned} \partial_t|u-v|+\partial_x\left(\mathrm{sign}\left(u-v\right)(f(u)-f(v))\right)\le0 \end{aligned} $$
(6.41)

holds in the sense of distributions on\((0,\infty )\times \mathbb {R}\).

Proof

Let φ = φ(t, s, x, y) be a C∞ nonnegative test function defined on \((0,\infty )\times (0,\infty )\times \mathbb {R}\times \mathbb {R}.\) Since u and v are entropy solutions of (6.11) we have

$$\displaystyle \begin{aligned} \int_0^\infty\!\!\!\int_{\mathbb{R}}&\Big(|u(t,x)-v(s,y)|\partial_t \varphi(t,s,x,y) \\ &+\mathrm{sign}\left(u(t,x)-v(s,y)\right)(f(u(t,x))-f(v(s,y)))\partial_x \varphi(t,s,x,y)\Big)dtdx\ge0,\\ \int_0^\infty\!\!\!\int_{\mathbb{R}}&\Big(|v(s,y)-u(t,x))|\partial_s \varphi(t,s,x,y) \\ &+\mathrm{sign}\left(v(s,y)-u(t,x)\right)(f(v(s,y))-f(u(t,x)))\partial_y \varphi(t,s,x,y)\Big)dsdy\ge0, \end{aligned} $$

and then

$$\displaystyle \begin{aligned} \int_0^\infty\!\!\!&\int_0^\infty\!\!\!\int_{\mathbb{R}}\int_{\mathbb{R}}\Big(|u(t,x)-v(s,y)|(\partial_t \varphi+\partial_s \varphi) \\ &+\mathrm{sign}\left(u(t,x)-v(s,y)\right)\times\\ &\qquad \times(f(u(t,x))-f(v(s,y)))(\partial_x \varphi+\partial_y \varphi)\Big)dtdsdxdy\ge0. \end{aligned} $$
(6.42)

Let \(\psi \in C^\infty ((0,\infty )\times \mathbb {R})\) be a nonnegative test function and \(\delta \in C^\infty (\mathbb {R})\) be such that

$$\displaystyle \begin{aligned} \delta\ge 0,\qquad \left\|\delta\right\|{}_{L^1(\mathbb{R})}=1,\qquad \text{supp}(\delta)\subset[-1,1]. \end{aligned}$$

Define

$$\displaystyle \begin{aligned} \delta_n(x)&=n\delta(nx),\\ \varphi_n(t,s,x,y)&=\psi\left(\frac{t+s}{2},\frac{x+y}{2}\right)\delta_n\left(\frac{s-t}{2}\right)\delta_n\left(\frac{y-x}{2}\right). \end{aligned} $$
(6.43)

We use φn as a test function in (6.42)

$$\displaystyle \begin{aligned} \int_0^\infty\!\!\!&\int_0^\infty\!\!\!\int_{\mathbb{R}}\int_{\mathbb{R}}\delta_n\left(\frac{s-t}{2}\right)\delta_n\left(\frac{y-x}{2}\right)\left((|u(t,x)-v(s,y)|\partial_t \psi\left(\frac{t+s}{2},\frac{x+y}{2}\right) \right.\\ &+\mathrm{sign}\left(u(t,x)-v(s,y)\right)\times\\ &\qquad \times\left.(f(u(t,x))-f(v(s,y)))\partial_x \psi\left(\frac{t+s}{2},\frac{x+y}{2}\right)\right)dtdsdxdy\ge0. \end{aligned} $$

As n →∞ we get

$$\displaystyle \begin{aligned} \int_0^\infty\!\!\!\int_{\mathbb{R}}\left(|u-v|\partial_t \psi+\mathrm{sign}\left(u-v\right)(f(u)-f(v))\partial_x \psi\right)dtdx\ge0, \end{aligned}$$

that gives the claim. □

Proof (of Theorem 6.2.9)

Let R > 0 and 0 ≤ t1 ≤ t2. Define

$$\displaystyle \begin{aligned} \alpha_n(x)=\int_{-\infty}^x \delta_n(y)dy,\qquad x\in\mathbb{R}, \end{aligned} $$

where δn is defined in (6.43). Consider the test function

$$\displaystyle \begin{aligned} \varphi_n(t,x)=\left(\alpha_n(t-t_1)-\alpha_n(t-t_2)\right)\left(1-\alpha_n\left(\sqrt{x^2+\frac{1}{n}}-R-L(t_2-t)\right)\right), \end{aligned}$$

that is a smooth approximant of the characteristic function of the set

$$\displaystyle \begin{aligned} \Big\{(t,x)\in [0,\infty)\times\mathbb{R};t_1\le t\le t_2, |x|\le R+L(t_2-t)\Big\}. \end{aligned}$$

Testing (6.41) with φn we get

$$\displaystyle \begin{aligned} &\int_0^\infty\!\!\!\int_{\mathbb{R}}|u-v|\left(\delta_n(t-t_1)-\delta_n(t-t_2)\right)\left(1-\alpha_n\left(\sqrt{x^2+\frac{1}{n}}-R-L(t_2-t)\right)\right)dtdx\\ &-L\int_0^\infty\!\!\!\int_{\mathbb{R}}|u-v|\left(\alpha_n(t-t_1)-\alpha_n(t-t_2)\right)\delta_n\left(\sqrt{x^2+\frac{1}{n}}-R-L(t_2-t)\right)dtdx\\ &+\int_0^\infty\!\!\!\int_{\mathbb{R}}\mathrm{sign}\left(u-v\right)(f(u)-f(v))\left(\alpha_n(t-t_1)-\alpha_n(t-t_2)\right)\cdot\\ &\qquad \qquad \qquad \qquad \qquad \qquad \cdot\frac{x}{\sqrt{x^2+\frac{1}{n}}}\delta_n\left(\sqrt{x^2+\frac{1}{n}}-R-L(t_2-t)\right)dtdx\ge 0. \end{aligned} $$

Since

$$\displaystyle \begin{aligned} |f(u)-f(v)|\le L|u-v|,\qquad \left|\frac{x}{\sqrt{x^2+\frac{1}{n}}}\right|\le 1 \end{aligned}$$

we have

$$\displaystyle \begin{aligned} &\int_0^\infty\!\!\!\int_{\mathbb{R}}|u-v|\left(\delta_n(t-t_1)-\delta_n(t-t_2)\right)\left(1-\alpha_n\left(\sqrt{x^2+\frac{1}{n}}-R-L(t_2-t)\right)\right)dtdx\\ &\ge\int_0^\infty\!\!\!\int_{\mathbb{R}}\left(L|u-v|-\mathrm{sign}\left(u-v\right)(f(u)-f(v))\frac{x}{\sqrt{x^2+\frac{1}{n}}}\right)\cdot\\ &\qquad \qquad \cdot \left(\alpha_n(t-t_1)-\alpha_n(t-t_2)\right)\delta_n\left(\sqrt{x^2+\frac{1}{n}}-R-L(t_2-t)\right)dtdx\ge 0. \end{aligned} $$

As n →∞, using the fact that, due to the Lusin Theorem, the map \(t\ge 0\mapsto u(t,\cdot )-v(t,\cdot )\in L^1_{loc}(\mathbb {R})\) is almost everywhere continuous, we get (6.39). □

6.3 Riemann Problem

In Sect. 6.2.7 we proved the uniqueness and stability of entropy solutions of Cauchy problems. Here we focus on the existence of entropy solutions. We analyze the simplest cases, the Riemann problems, which are Cauchy problems with Heaviside-type initial condition

$$\displaystyle \begin{aligned} \begin{cases} \partial_t u+\partial_x f(u)=0,&\quad t>0,\,x\in\mathbb{R},\\ u(0,x)=\begin{cases}u_+,&\text{if}\ x\ge0,\\u_-,&\text{if}\ x<0,\end{cases}&{} \end{cases} \end{aligned} $$
(6.44)

where \(f\in C^2(\mathbb {R})\) and u−≠ u+ are constants.

In the following sections we first consider the case in which f is convex. Indeed the solutions obtained under that assumption are the building blocks of the solutions of the general case [5, 6, 11].

6.3.1 Strictly Convex Fluxes

We assume that f is a convex function; the concave case is analogous.

We distinguish two cases. If (see Fig. 6.4)

$$\displaystyle \begin{aligned} u_+<u_- \end{aligned}$$
Fig. 6.4
figure 4

Convex flux f

then the entropy solution of (6.44) is the shock wave (see Fig. 6.5)

$$\displaystyle \begin{aligned} u(t,x)=\begin{cases} u_+,&\text{if}\> x\ge\displaystyle\frac{f(u_+)-f(u_-)}{u_+-u_-}t,\\ {} u_-,&\text{if}\> x<\displaystyle\frac{f(u_+)-f(u_-)}{u_+-u_-}t. \end{cases} \end{aligned}$$
Fig. 6.5
figure 5

Shock wave (u−, u+) with u+ < u−

If (see Fig. 6.6)

$$\displaystyle \begin{aligned} u_+>u_- \end{aligned}$$

then the entropy solution of (6.44) is the rarefaction wave (see Fig. 6.7)

$$\displaystyle \begin{aligned} u(t,x)=\begin{cases} u_+,&\text{if}\> x\ge f'(u_+)t,\\ \sigma,&\text{if}\> x= f'(\sigma)t,\>u_-< \sigma<u_+,\\ u_-,&\text{if}\>x<f'(u_-)t. \end{cases} \end{aligned} $$
(6.45)
Fig. 6.6
figure 6

Shock wave (u−, u+) with u+ > u−

Fig. 6.7
figure 7

Rarefaction wave

Observe that the definition makes sense because f is convex, hence f′ is increasing.

We claim that

$$\displaystyle \begin{aligned} \partial_t\eta(u)+\partial_x q(u)=0, \end{aligned} $$
(6.46)

for every entropy η with flux q, where u is the rarefaction wave defined in (6.45).

Consider the sets

$$\displaystyle \begin{aligned} \Omega_1&=\{(t,x)\in(0,\infty)\times\mathbb{R};x<f'(u_-)t\},\\ \Omega_2&=\{(t,x)\in(0,\infty)\times\mathbb{R};f'(u_-)t<x<f'(u_+)t\},\\ \Omega_3&=\{(t,x)\in(0,\infty)\times\mathbb{R};x>f'(u_+)t\}, \end{aligned} $$

with outer normals n1, n2, n3, and a nonnegative test function \(\varphi \in C^\infty ((0,\infty )\times \mathbb {R})\). Arguing as in the proof of Theorem 6.2.2, we apply the divergence theorem to the field (η(u)φ, q(u)φ) in each Ωi: the boundary contributions along the lines x = f′(u−)t and x = f′(u+)t cancel because u is continuous there, while inside each Ωi the function u is a classical solution, so by Remark 6.2.2 the divergence reduces to η(u)∂tφ + q(u)∂xφ.

Therefore (6.46) holds and then (6.45) is the entropy solution of (6.44).

When f is concave we have a completely symmetric situation: a shock when u− < u+ and a rarefaction when u− > u+.

Example 6.3.1

The entropy solution of the Riemann problem

$$\displaystyle \begin{aligned} \begin{cases} u_t+\left(\frac{u^2}{2}\right)_x=0,& \quad t>0,\>x\in\mathbb{R},\\ {} u(0,x)= \begin{cases} -1& \quad \text{if}\ x<0,\\ 1& \quad \text{if}\ x\ge0, \end{cases} &{} \end{cases} \end{aligned}$$

is the rarefaction wave

$$\displaystyle \begin{aligned} u(t,x)= \begin{cases} -1& \quad \text{if}\ x<-t,\\ \sigma& \quad \text{if}\ x=\sigma t,\,-1<\sigma\le 1,\\ 1& \quad \text{if}\ x>t. \end{cases} \end{aligned}$$

Example 6.3.2

The entropy solution of the Riemann problem

$$\displaystyle \begin{aligned} \begin{cases} u_t+(u^3)_x=0,& \quad t>0,\>x\in\mathbb{R},\\ {} u(0,x)= \begin{cases} 1& \quad \text{if}\ x<2,\\ 0& \quad \text{if}\ x\ge2, \end{cases} &{} \end{cases} \end{aligned}$$

is the shock

$$\displaystyle \begin{aligned} u(t,x)= \begin{cases} 1& \quad \text{if}\ x<t+2,\\ 0& \quad \text{if}\ x\ge t+2. \end{cases} \end{aligned}$$

Example 6.3.3

The entropy solution of the Riemann problem

$$\displaystyle \begin{aligned} \begin{cases} u_t+(u^3)_x=0,& \quad t>0,\>x\in\mathbb{R},\\ {} u(0,x)= \begin{cases} 0& \quad \text{if}\ x<2,\\ 1& \quad \text{if}\ x\ge2, \end{cases} &{} \end{cases} \end{aligned}$$

is the rarefaction wave

$$\displaystyle \begin{aligned} u(t,x)= \begin{cases} 0& \quad \text{if}\ x<2,\\ \sigma& \quad \text{if }\ x=3\sigma^2 t+2,\,0<\sigma\le 1,\\ 1& \quad \text{if}\ x>3t+2. \end{cases} \end{aligned}$$

Example 6.3.4

The entropy solution of the Riemann problem

$$\displaystyle \begin{aligned} \begin{cases} u_t+(e^u)_x=0,& \quad t>0,\>x\in\mathbb{R},\\ {} u(0,x)= \begin{cases} 2& \quad \text{if}\ x<0,\\ 0& \quad \text{if}\ x\ge0, \end{cases} &{} \end{cases} \end{aligned}$$

is the shock

$$\displaystyle \begin{aligned} u(t,x)= \begin{cases} 2& \quad \text{if}\ x<\frac{e^2-1}{2}t,\\ 0& \quad \text{if}\ x\ge \frac{e^2-1}{2}t. \end{cases} \end{aligned}$$
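The construction for convex fluxes can be summarized in a small self-similar Riemann solver, sketched below: writing u(t, x) = w(x∕t), the function w equals u− or u+ outside the wave, it jumps with the Rankine-Hugoniot speed when u+ < u−, and it equals (f′)−1(x∕t) inside the fan when u− < u+ (here the inverse of f′ is computed by bisection; the sketch is ours and only reproduces Examples 6.3.3 and 6.3.4, up to the shift of the initial jump from x = 2 to x = 0).

```python
import numpy as np

def riemann_convex(f, df, um, up):
    """Self-similar entropy solution xi = x/t -> u for a convex flux f with derivative df."""
    if up < um:                                   # shock
        s = (f(up) - f(um)) / (up - um)           # Rankine-Hugoniot speed
        return lambda xi: um if xi < s else up
    def w(xi):                                    # rarefaction
        if xi <= df(um):
            return um
        if xi >= df(up):
            return up
        lo, hi = um, up                           # invert f' by bisection
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if df(mid) < xi else (lo, mid)
        return 0.5 * (lo + hi)
    return w

# Example 6.3.4: f(u) = e^u, states (2, 0): a shock with speed (e^2-1)/2 ~ 3.1945
w = riemann_convex(np.exp, np.exp, 2.0, 0.0)
print(w(3.19), w(3.20))                  # jump across the shock speed

# Example 6.3.3 (with the jump shifted to x0 = 0): f(u) = u^3, states (0, 1): a rarefaction
w = riemann_convex(lambda u: u**3, lambda u: 3*u**2, 0.0, 1.0)
print(w(0.75))                           # ~0.5, since 3*sigma^2 = 0.75
```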

6.3.2 General Fluxes

In the case of convex or concave fluxes the solution of the Riemann problem (6.44) consists of a single wave, a shock or a rarefaction wave. In the case of fluxes that are neither convex nor concave we can have several waves of both types. Moreover, the waves may also be glued together.

We have to distinguish again two cases. If

$$\displaystyle \begin{aligned} u_-<u_+ \end{aligned}$$

we consider the convex hull \(f_*\) of f in the interval [u−, u+], i.e., \(f_*\) is the largest convex map such that

$$\displaystyle \begin{aligned} f_*(\xi)\le f(\xi),\qquad u_-\le \xi \le u_+. \end{aligned}$$

Let us consider the points w0, …, wn such that (see Fig. 6.8)

$$\displaystyle \begin{aligned} &u_-=w_0<w_1<\ldots<w_n=u_+,\\ &f(w_i)=f_*(w_i),\quad i=0,\ldots,n,\\ &\text{either}\ f_*<f\ \text{on}\ (w_i,w_{i+1})\ \text{or}\ f_*=f\ \text{on}\ (w_i,w_{i+1}),\quad i=0,\ldots,n-1. \end{aligned} $$
Fig. 6.8
figure 8

Nonconvex flux f

We solve separately the n Riemann problems obtained in correspondence with the pairs of values (wi, wi+1), i = 0, …, n − 1. If \(f_*<f\) in (wi, wi+1) we have a shock, otherwise a rarefaction (see Fig. 6.9). This algorithm clearly provides the entropy solution of (6.44), because we are gluing entropy solutions.

Fig. 6.9
figure 9

Nonconvex flux with shock (u−, u+), u− < u+

If

$$\displaystyle \begin{aligned} u_->u_+ \end{aligned}$$

we consider the concave hull \(f^*\) of f in the interval [u+, u−], i.e., \(f^*\) is the smallest concave map such that

$$\displaystyle \begin{aligned} f(\xi)\le f^*(\xi),\qquad u_+\le \xi \le u_-, \end{aligned}$$

and we argue in the same way.
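
The construction can also be carried out numerically: sample f on a grid of [u−, u+], compute the lower convex envelope (here with a monotone-chain hull; the discretization is ours, not from the text), and read off the wave pattern, a shock where the envelope is a chord strictly below f and a rarefaction where it coincides with f. The sketch below treats the case u− < u+ and reproduces the wave pattern of Example 6.3.5 below.

```python
import numpy as np

def lower_convex_envelope(f, um, up, n=2000):
    """Vertices of the lower convex envelope of f on [um, up] (um < up),
    computed on a uniform grid by a monotone-chain lower hull."""
    xi = np.linspace(um, up, n + 1)
    y = f(xi)
    hull = [0]
    for i in range(1, len(xi)):
        # pop the last vertex while it lies on or above the chord to point i
        while len(hull) >= 2:
            i0, i1 = hull[-2], hull[-1]
            if (y[i1] - y[i0]) * (xi[i] - xi[i0]) >= (y[i] - y[i0]) * (xi[i1] - xi[i0]):
                hull.pop()
            else:
                break
        hull.append(i)
    return xi, hull

def wave_pattern(f, um, up):
    """Shocks (envelope jumps over several grid cells) and rarefactions
    (envelope follows f cell by cell), read off the lower hull."""
    xi, hull = lower_convex_envelope(f, um, up)
    waves = []
    for a, b in zip(hull[:-1], hull[1:]):
        kind = "rarefaction" if b == a + 1 else "shock"
        if waves and waves[-1][0] == kind:
            waves[-1] = (kind, waves[-1][1], xi[b])   # merge adjacent pieces
        else:
            waves.append((kind, xi[a], xi[b]))
    return waves

# Example 6.3.5 below: f(u) = u^3 - 3u, states (-2, 2) give a shock from
# -2 to 1 followed by a rarefaction from 1 to 2.
for wave in wave_pattern(lambda u: u**3 - 3*u, -2.0, 2.0):
    print(wave)
```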

Example 6.3.5

Consider the Riemann problem (see Fig. 6.10)

$$\displaystyle \begin{aligned} \begin{cases} \partial_t u+\partial_x (u^3-3u)=0,&\quad t>0,\,x\in\mathbb{R},\\ u(0,x)=\begin{cases}2,&\text{if}\ x\ge0,\\-2,&\text{if}\ x<0,\end{cases}&{} \end{cases} \end{aligned} $$
(6.47)
Fig. 6.10
figure 10

f(u) = (u3 − 3u)

The solution of (6.47) is (see Fig. 6.11)

$$\displaystyle \begin{aligned} u(t,x)=\begin{cases} 2,&\text{if}\ x\ge 9t,\\ \sigma,&\text{if}\ x=(3\sigma^2-3)t,\,1\le\sigma<2,\\ -2,&\text{if}\ x<0, \end{cases} \end{aligned}$$

where the shock connecting −2 and 1 is attached to the rarefaction from 1 to 2.

Fig. 6.11
figure 11

Solution of (6.47)

The same feature can be found in

$$\displaystyle \begin{aligned} \begin{cases} \partial_t u+\partial_x (u^3-3u)=0,&\quad t>0,\,x\in\mathbb{R},\\ u(0,x)=\begin{cases}-2,&\text{if}\ x\ge0,\\2,&\text{if}\ x<0.\end{cases}&{} \end{cases} \end{aligned}$$

Example 6.3.6

Let us solve the Cauchy problem

$$\displaystyle \begin{aligned} \begin{cases} u_t+\left(\frac{u^2}{2}\right)_x=0,& \quad t>0,\>x\in\mathbb{R},\\ {} u(0,x)= \begin{cases} 1& \quad \text{if}\ 0<x<1\\ 0& \quad \text{otherwise}. \end{cases} &{} \end{cases} \end{aligned} $$
(6.48)

The wave generated at x = 0 is a rarefaction wave with speeds between 0 and 1, the one generated at x = 1 is a shock with speed 1/2; they interact at t = 2, and we have (see Fig. 6.12)

$$\displaystyle \begin{aligned} u(t,x)=\begin{cases} 0,&\text{if}\ x\le 0,\\ \sigma,&\text{if}\ x=\sigma t,\,0\le\sigma\le 1,\\ 1,&\text{if}\ t<x\le \frac{t}{2}+1 ,\\ 0,&\text{if}\ x>\frac{t}{2}+1, \end{cases} \qquad 0\le t\le 2. \end{aligned} $$
(6.49)

For t ≥ 2 we have a structure of the type (see Fig. 6.13)

$$\displaystyle \begin{aligned} u(t,x)=\begin{cases} 0,&\text{if}\ x\le 0,\\ \sigma,&\text{if}\ x=\sigma t,\,0\le\sigma\le\lambda(t)/t,\\ 0,&\text{if}\ x>\lambda(t), \end{cases} \qquad t\ge 2. \end{aligned} $$
(6.50)

We have to determine λ(t). We know that

$$\displaystyle \begin{aligned} \lambda(2)=2. \end{aligned} $$
(6.51)

The Rankine-Hugoniot condition gives

$$\displaystyle \begin{aligned} \lambda'(t)=\frac{u(t,\lambda(t)^-)}{2}. \end{aligned} $$
(6.52)

Finally, from (6.50) we know

$$\displaystyle \begin{aligned} u(t,\lambda(t)^-)=\frac{\lambda(t)}{t}. \end{aligned} $$
(6.53)

Therefore, (6.51), (6.52), and (6.53) imply that λ(t) is the unique solution of the ordinary differential problem

$$\displaystyle \begin{aligned} \lambda'(t)=\frac{\lambda(t)}{2t},\qquad \lambda(2)=2, \end{aligned}$$

namely

$$\displaystyle \begin{aligned} \lambda(t)=\sqrt{2t},\qquad t\ge2. \end{aligned}$$
Fig. 6.12
figure 12

Solution of (6.48)

Fig. 6.13
figure 13

Solution of (6.48)
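A quick consistency check on (6.50): the total mass of the solution is conserved and equals 1, while for t ≥ 2 the profile is a triangle of height \(\lambda (t)/t\) and base λ(t), hence

$$\displaystyle \begin{aligned} \int_{\mathbb{R}}u(t,x)\,dx=\int_0^{\lambda(t)}\frac{x}{t}\,dx=\frac{\lambda(t)^2}{2t}=1\quad\Longleftrightarrow\quad\lambda(t)=\sqrt{2t}, \end{aligned}$$

in agreement with the ordinary differential problem solved above. In particular the shock strength \(u(t,\lambda (t)^-)=\sqrt {2/t}\) decays like \(t^{-1/2}\).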

Example 6.3.7

Let us solve the Cauchy problem

$$\displaystyle \begin{aligned} \begin{cases} u_t+\left(\frac{u^2}{2}\right)_x=0,& \quad t>0,\>x\in\mathbb{R},\\ {} u(0,x)= \begin{cases} 1& \quad \text{if}\ x<-1,\\ 0& \quad \text{if}\ -1<x<0,\\ 2& \quad \text{if}\ 0<x<1,\\ 0& \quad \text{if}\ x>1. \end{cases} &{} \end{cases} \end{aligned} $$
(6.54)

The wave generated at x = −1 is a shock with speed 1/2, the one generated at x = 0 is a rarefaction wave with speeds between 0 and 2, and the one generated at x = 1 is a shock with speed 1. The first interaction is between the second and the third wave at t = 1, and for 0 ≤ t ≤ 1 we have (see Fig. 6.14).

$$\displaystyle \begin{aligned} u(t,x)=\begin{cases} 1,&\text{if}\ x\le \frac{t}{2}-1,\\ 0,&\text{if}\ \frac{t}{2}-1\le x\le 0,\\ \sigma,&\text{if}\ x=\sigma t,\,0\le\sigma\le 2,\\ 2,&\text{if}\ 2t<x\le t+1 ,\\ 0,&\text{if}\ x>t+1, \end{cases} \qquad 0\le t\le 1. \end{aligned} $$
(6.55)

The second interaction is between the first and the second wave at t = 2, and for 1 ≤ t ≤ 2 and t ≥ 2 we have a structure of the type (see Figs. 6.15 and 6.16)

$$\displaystyle \begin{aligned} &u(t,x)=\begin{cases} 1,&\text{if}\ x\le \frac{t}{2}-1,\\ 0,&\text{if}\ \frac{t}{2}-1< x\le 0,\\ \sigma,&\text{if}\ x=\sigma t,\,0\le\sigma\le\frac{\lambda(t)}{t},\\ 0,&\text{if}\ x>\lambda(t), \end{cases} \qquad 1\le t \le 2, \end{aligned} $$
(6.56)
$$\displaystyle \begin{aligned} &u(t,x)=\begin{cases} 1,&\text{if}\ x\le \gamma(t),\\ \sigma,&\text{if}\ x=\sigma t,\,\frac{\gamma(t)}{t}\le\sigma\le\frac{\lambda(t)}{t},\\ 0,&\text{if}\ x>\lambda(t), \end{cases} \qquad t\ge 2. \end{aligned} $$
(6.57)
Fig. 6.14
figure 14

Solution of (6.54)

Fig. 6.15
figure 15

Solution of (6.54)

Fig. 6.16
figure 16

Solution of (6.54)

We have to determine γ(t) and λ(t). We know that

$$\displaystyle \begin{aligned} \gamma(2)=0,\qquad \lambda(1)=2. \end{aligned} $$
(6.58)

The Rankine-Hugoniot condition gives

$$\displaystyle \begin{aligned} \gamma'(t)=\frac{1+u(t,\gamma(t)^+)}{2},\qquad \lambda'(t)=\frac{u(t,\lambda(t)^-)}{2}. \end{aligned} $$
(6.59)

Finally, from (6.56) we know

$$\displaystyle \begin{aligned} u(t,\gamma(t)^+)=\frac{\gamma(t)}{t},\qquad u(t,\lambda(t)^-)=\frac{\lambda(t)}{t}. \end{aligned} $$
(6.60)

Therefore, (6.58), (6.59), and (6.60) imply that γ(t) and λ(t) are the unique solutions of the ordinary differential problems

$$\displaystyle \begin{aligned} \begin{cases} \displaystyle\gamma'(t)=\frac{1}{2}\left(1+\frac{\gamma(t)}{t}\right),&{}\\ \gamma(2)=0,&{} \end{cases} \qquad \begin{cases} \displaystyle\lambda'(t)=\frac{\lambda(t)}{2t}&{}\\ \lambda(1)=2,&{} \end{cases} \end{aligned}$$

namely

$$\displaystyle \begin{aligned} \gamma(t)=t-\sqrt{2t},\qquad \lambda(t)=2\sqrt{t}. \end{aligned}$$

Since γ and λ interact at \(t=6+4\sqrt {2}\), (6.57) holds only for \(2\le t\le 6+4\sqrt {2}\). For \(t\ge 6+4\sqrt {2}\) we are left with a single shock connecting 1 and 0 with speed \(\frac {1}{2}\)

$$\displaystyle \begin{aligned} u(t,x)=\begin{cases} 1,&\text{if}\ x\le \frac{t}{2}+1,\\ 0,&\text{if}\ x>\frac{t}{2}+1, \end{cases} \qquad t\ge 6+4\sqrt{2}. \end{aligned}$$
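Indeed, both the interaction time and the position of the surviving shock can be checked directly from the expressions of γ and λ:

$$\displaystyle \begin{aligned} t-\sqrt{2t}=2\sqrt{t}\;\Longleftrightarrow\;\sqrt{t}=2+\sqrt{2}\;\Longleftrightarrow\;t=6+4\sqrt{2},\qquad \lambda(6+4\sqrt{2})=4+2\sqrt{2}=\frac{6+4\sqrt{2}}{2}+1, \end{aligned}$$

so the surviving shock starts at the point where γ and λ meet and travels with the Rankine-Hugoniot speed (1 + 0)/2 = 1/2, i.e., along the line x = t/2 + 1: it is the prolongation of the shock originally issued from x = 1.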

6.4 Vanishing Viscosity

In this section we discuss the parabolic approximation

$$\displaystyle \begin{aligned} \begin{cases} \partial_t u_\varepsilon+\partial_x f(u_\varepsilon)=\varepsilon\partial_{xx}^2u_\varepsilon,&\qquad t>0,\>x\in\mathbb{R},\\ u_\varepsilon(0,x)=u_{0,\varepsilon}(x),&\qquad x\in\mathbb{R}, \end{cases} \end{aligned} $$
(6.61)

of the scalar hyperbolic conservation law

$$\displaystyle \begin{aligned} \begin{cases} \partial_t u+\partial_x f(u)=0,&\qquad t>0,\>x\in\mathbb{R},\\ u(0,x)=u_0(x),&\qquad x\in\mathbb{R}. \end{cases} \end{aligned} $$
(6.62)

The main feature of this approximation lies in the regularity of its solutions: due to the parabolic structure of (6.61), its solutions are smooth and do not develop shocks.
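The smoothing, and the convergence stated below, can be observed in a very crude numerical experiment. The sketch below is an illustration of ours (the scheme, the Burgers flux f(u) = u²/2, and all parameters are our assumptions, not prescriptions of the text): it integrates (6.61) with an explicit upwind/centered finite-difference discretization for a Riemann initial datum and measures the L¹ distance from the corresponding entropy solution, a single shock travelling with speed 1/2.

```python
# Crude explicit finite-difference sketch for u_t + (u^2/2)_x = eps * u_xx
# (our illustration; discretization choices are assumptions, not from the text).
import numpy as np

def viscous_burgers(eps, T=1.0, L=4.0, nx=801, u_left=1.0, u_right=0.0):
    x = np.linspace(-L, L, nx)
    dx = x[1] - x[0]
    u = np.where(x < 0.0, u_left, u_right).astype(float)   # Riemann initial datum
    t = 0.0
    while t < T - 1e-12:
        # explicit stability: hyperbolic CFL and parabolic constraint
        dt = 0.4 * min(dx / max(np.max(np.abs(u)), 1e-12), dx * dx / (2.0 * eps))
        dt = min(dt, T - t)
        f = 0.5 * u ** 2
        unew = u.copy()
        # upwind convection (u >= 0 here, so information travels to the right)
        # and centered diffusion; endpoints kept at the far-field values
        unew[1:-1] = (u[1:-1]
                      - dt / dx * (f[1:-1] - f[:-2])
                      + eps * dt / dx ** 2 * (u[2:] - 2.0 * u[1:-1] + u[:-2]))
        u = unew
        t += dt
    return x, u

if __name__ == "__main__":
    for eps in (0.2, 0.05, 0.0125):
        x, u = viscous_burgers(eps)
        exact = np.where(x < 0.5, 1.0, 0.0)                 # entropy solution at T = 1
        err = np.sum(np.abs(u - exact)) * (x[1] - x[0])
        print(f"eps = {eps:7.4f}   L1 distance from the entropy solution ~ {err:.3f}")
```

The printed distances decrease with ε, consistently with the estimate (6.65), until the first-order scheme's own numerical viscosity (of order Δx) becomes comparable with ε.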

For the initial data of (6.62) we assume

$$\displaystyle \begin{aligned} u_0\in L^1(\mathbb{R})\cap BV(\mathbb{R}). \end{aligned}$$

On the other hand, for every ε > 0, u0,ε is a smooth approximation to u0 such that

$$\displaystyle \begin{aligned} &u_{0,\varepsilon}\in C^\infty(\mathbb{R})\cap W^{2,1}(\mathbb{R}),\qquad \varepsilon>0,\\ &u_{0,\varepsilon}\longrightarrow u_0,\qquad \text{in}\ L^p(\mathbb{R}),\,1\le p< \infty,\ \text{as}\ \varepsilon\to0,\\ &\left\|u_{0,\varepsilon}\right\|{}_{L^\infty(\mathbb{R})}\le\left\|u_{0}\right\|{}_{L^\infty(\mathbb{R})},\quad \left\|u_{0,\varepsilon}\right\|{}_{L^1(\mathbb{R})}\le\left\|u_{0}\right\|{}_{L^1(\mathbb{R})},\qquad \varepsilon>0,\\ &\left\|\partial_x u_{0,\varepsilon}\right\|{}_{L^1(\mathbb{R})}\le TV(u_{0}),\quad \varepsilon\left\|\partial_{xx}^2 u_{0,\varepsilon}\right\|{}_{L^1(\mathbb{R})}\le C,\qquad \varepsilon>0, \end{aligned} $$
(6.63)

for some constant C > 0 independent of ε. Under these assumptions (6.61) admits a unique solution uε such that [7, 14]

$$\displaystyle \begin{aligned} u_\varepsilon\in C^\infty([0,\infty)\times\mathbb{R})\cap W^{2,p}((0,\infty);W^{1,p}(\mathbb{R})),\qquad 1\le p<\infty. \end{aligned}$$

The main result of this Section is the following [6, 11, 18].

Theorem 6.4.1

If

$$\displaystyle \begin{aligned} u_0\in L^1(\mathbb{R})\cap BV(\mathbb{R}), \end{aligned}$$

then

$$\displaystyle \begin{aligned} u_\varepsilon\longrightarrow u\qquad \mathit{\text{in}}\ L^p_{loc}((0,\infty)\times\mathbb{R}),\,1\le p<\infty,\ \mathit{\text{and a.e. in }}\ (0,\infty)\times\mathbb{R}, \end{aligned} $$
(6.64)

where u is the entropy weak solution of (6.62) and uε is the solution of (6.61). Moreover, the following estimate holds

$$\displaystyle \begin{aligned} \left\|u_\varepsilon(t,\cdot)-u(t,\cdot)\right\|{}_{L^1(\mathbb{R})}\le c\sqrt{\varepsilon t}\, TV(u_0)+\left\|u_{0,\varepsilon}-u_0\right\|{}_{L^1(\mathbb{R})}, \end{aligned} $$
(6.65)

for every ε > 0 and t ≥ 0, where c is a positive constant independent of ε and t.

The convergence part of this result was proved in [12] for scalar equations and in [4] for systems of conservation laws. The error estimate was proved in [13].

Let us conclude this introduction with the following observation. In our statement the whole family {uε}ε>0 converges to u, not just a subsequence; this is due to the uniqueness of the entropy solution of (6.62) and to the following equivalence

$$\displaystyle \begin{aligned} u_\varepsilon &\longrightarrow u\\ &\Updownarrow\\ \forall\> \{u_{\varepsilon_k}\}_{k\in\mathbb{N}}\>\>\text{subsequence}\>\>\exists\>&\{u_{\varepsilon_{k_h}}\}_{h\in\mathbb{N}}\>\>\text{subsequence s.t.}\>\>u_{\varepsilon_{k_h}} \longrightarrow u. \end{aligned} $$
(6.66)

6.4.1 A Priori Estimates, Compactness, and Convergence

The aim of this section is essentially the proof of (6.64). Let us start with a technical lemma that will play a key role in the a priori estimates below.

Lemma 6.4.1 ([2, Lemma 2])

Let \(v:\mathbb {R}\to \mathbb {R}\) be a function. If

$$\displaystyle \begin{aligned} v\in C^1(\mathbb{R}),\qquad v'\in L^1(\mathbb{R}), \end{aligned}$$

then

$$\displaystyle \begin{aligned} \lim_{\delta\to0+}\int_{|v|<\delta}|v'|dx=0. \end{aligned}$$

Proof

We write

$$\displaystyle \begin{aligned} v_\delta=|v'|\chi_{\{|v|<\delta\}},\qquad \delta>0 \end{aligned}$$

and observe that

$$\displaystyle \begin{aligned} |v_\delta|\le |v'|,\qquad v_\delta\longrightarrow 0\quad \text{a.e. in}\ \mathbb{R}. \end{aligned}$$

Indeed, χ{|v|<δ}→ 0 as δ → 0 at every point where v ≠ 0, while v′ = 0 at almost every point of the set {v = 0}. Therefore the claim follows from the Dominated Convergence Theorem. □
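A quick numerical illustration of the lemma (the test function below is our own choice, not from the text): for a smooth v with integrable derivative and many zeros, the integral of |v′| over the set {|v| < δ} is seen to shrink as δ decreases.

```python
# Numerical illustration of Lemma 6.4.1 (the test function is our own choice).
import numpy as np

x = np.linspace(-20.0, 20.0, 800_001)
dx = x[1] - x[0]
v = np.sin(3.0 * x) * np.exp(-x ** 2 / 8.0)   # smooth, with v' integrable and many zeros
dv = np.gradient(v, dx)                       # numerical approximation of v'

for delta in (1e-1, 1e-2, 1e-3):
    mask = np.abs(v) < delta                  # the set {|v| < delta}
    value = np.sum(np.abs(dv)[mask]) * dx     # approximation of the integral in the lemma
    print(f"delta = {delta:.0e}   integral of |v'| over {{|v| < delta}} ~ {value:.4f}")
```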

Remark 6.4.1

Since the solutions of (6.61) are smooth, the previous lemma allows us to use the identity

$$\displaystyle \begin{aligned} \mathrm{sign}\left(v\right)'=\delta_{\{v=0\}}v' \end{aligned} $$
(6.67)

in our computations, where δ{v=0} is the Dirac delta concentrated on the set {v = 0}. In particular, if \(v\in C^2(\mathbb {R})\cap L^\infty (\mathbb {R})\cap W^{2,1}(\mathbb {R})\),

$$\displaystyle \begin{aligned} \int_{\mathbb{R}} f(v)''\mathrm{sign}\left(v'\right)dx=0,\qquad \int_{\mathbb{R}} v''\mathrm{sign}\left(v\right)dx\le 0, \end{aligned} $$
(6.68)

which follow by integrating by parts and using (6.67).

Let us give a rigorous proof of them. We have

$$\displaystyle \begin{aligned} \lim_{\alpha\to 0}\int_{\mathbb{R}} f(v)''\eta_\alpha^{\prime}(v')dx&=\int_{\mathbb{R}} f(v)''\mathrm{sign}\left(v'\right)dx,\\ \lim_{\alpha\to 0}\int_{\mathbb{R}} v''\eta_\alpha^{\prime}(v)dx&=\int_{\mathbb{R}} v''\mathrm{sign}\left(v\right)dx, \end{aligned} $$
(6.69)

where

$$\displaystyle \begin{aligned} \eta_\alpha(\xi)=\sqrt{\xi^2+\alpha^2},\qquad \alpha\in\mathbb{R}. \end{aligned}$$

For every α ≠ 0

$$\displaystyle \begin{aligned} \eta_\alpha\in C^2(\mathbb{R}),\qquad \eta_\alpha^{\prime}(\xi)=\frac{\xi}{\sqrt{\xi^2+\alpha^2}},\qquad \eta_\alpha^{\prime\prime}(\xi)=\frac{\alpha^2}{(\xi^2+\alpha^2)^{3/2}}\ge0. \end{aligned}$$

We have

where \(L=\sup \limits _{|\xi |\le \left \|v\right \|{ }_{L^\infty (\mathbb {R})}}|f'(\xi )|\). Therefore, (6.68) follows from (6.69).

Let us continue with some a priori estimates on uε that are independent of ε.

Lemma 6.4.2 (L Estimate)

We have that

$$\displaystyle \begin{aligned} \left\|u_\varepsilon\right\|{}_{L^\infty((0,\infty)\times\mathbb{R})}\le \left\|u_0\right\|{}_{L^\infty(\mathbb{R})},\qquad \varepsilon>0. \end{aligned}$$

Proof

The constant maps \( \left \|u_0\right \|{ }_{L^\infty (\mathbb {R})}\) and \(- \left \|u_0\right \|{ }_{L^\infty (\mathbb {R})}\) are exact solutions of the equation in (6.61) and, thanks to the bound on u0,ε in (6.63), they provide a supersolution and a subsolution of (6.61), respectively. Therefore, the claim follows from the comparison principle for parabolic equations. □

Lemma 6.4.3 (L1 Estimate)

The function

$$\displaystyle \begin{aligned} t\ge0\longmapsto \left\|u_\varepsilon(t,\cdot)\right\|{}_{L^1(\mathbb{R})} \end{aligned}$$

is nonincreasing. In particular,

$$\displaystyle \begin{aligned} \left\|u_\varepsilon(t,\cdot)\right\|{}_{L^1(\mathbb{R})}\le \left\|u_0\right\|{}_{L^1(\mathbb{R})},\qquad \varepsilon>0,\quad t\ge0. \end{aligned}$$

Proof

Due to the regularity of uε, we have

where \(\delta _{\{u_\varepsilon = 0\}}\) is the Dirac’s delta concentrated on the set {uε = 0}. Finally, an integration on (0, t) gives (see (6.63))

$$\displaystyle \begin{aligned} \left\|u_\varepsilon(t,\cdot)\right\|{}_{L^1(\mathbb{R})}\le \left\|u_{0,\varepsilon}\right\|{}_{L^1(\mathbb{R})}\le \left\|u_0\right\|{}_{L^1(\mathbb{R})}. \end{aligned}$$

Lemma 6.4.4 (BV  Estimate in x)

The function

$$\displaystyle \begin{aligned} t\ge0\longmapsto \left\|\partial_xu_\varepsilon(t,\cdot)\right\|{}_{L^1(\mathbb{R})} \end{aligned}$$

is nonincreasing. In particular,

$$\displaystyle \begin{aligned} \left\|\partial_xu_\varepsilon(t,\cdot)\right\|{}_{L^1(\mathbb{R})}\le TV(u_0),\qquad \varepsilon>0,\quad t\ge0. \end{aligned}$$

Proof

Due to the regularity of uε, we have

$$\displaystyle \begin{aligned} \partial_{tx}^2 u_\varepsilon+ \partial_x\left(f'(u_\varepsilon)\partial_x u_\varepsilon\right)=\varepsilon\partial_{xxx}^3u_\varepsilon \end{aligned}$$

and then

where \(\delta _{\{\partial _xu_\varepsilon =\, 0\}}\) is the Dirac’s delta concentrated on the set {xuε = 0}. Finally, an integration on (0, t) gives (see (6.63))

$$\displaystyle \begin{aligned} \left\|\partial_xu_\varepsilon(t,\cdot)\right\|{}_{L^1(\mathbb{R})}\le \left\|\partial_x u_{0,\varepsilon}\right\|{}_{L^1(\mathbb{R})}\le TV(u_0). \end{aligned}$$

Lemma 6.4.5 (BV  Estimate in t)

The function

$$\displaystyle \begin{aligned} t\ge0\longmapsto \left\|\partial_tu_\varepsilon(t,\cdot)\right\|{}_{L^1(\mathbb{R})} \end{aligned}$$

is nonincreasing. In particular,

$$\displaystyle \begin{aligned} \left\|\partial_tu_\varepsilon(t,\cdot)\right\|{}_{L^1(\mathbb{R})}\le TV(u_0)L+C,\qquad \varepsilon>0,\quad t\ge0, \end{aligned}$$

where C is the constant that appears in (6.63) and

$$\displaystyle \begin{aligned} L=\left\|f'\right\|{}_{L^\infty(-\left\|u_0\right\|{}_{L^\infty(\mathbb{R})}, \left\|u_0\right\|{}_{L^\infty(\mathbb{R})})}.\end{aligned} $$

Proof

Due to the regularity of uε, we have

$$\displaystyle \begin{aligned} \partial_{tt}^2 u_\varepsilon+ \partial_x\left(f'(u_\varepsilon)\partial_t u_\varepsilon\right)=\varepsilon\partial_{txx}^3u_\varepsilon\end{aligned} $$

and then

where \(\delta _{\{\partial _tu_\varepsilon =0\}}\) is the Dirac’s delta concentrated on the set {tuε = 0}. Finally, an integration on (0, t), (6.61), (6.63), and Lemma 6.4.2 give

$$\displaystyle \begin{aligned} \left\|\partial_tu_\varepsilon(t,\cdot)\right\|{}_{L^1(\mathbb{R})}\le&\left\|\partial_tu_\varepsilon(0,\cdot)\right\|{}_{L^1(\mathbb{R})}\\ &=\left\|\varepsilon\partial_{xx}^2 u_{0,\varepsilon}-f'(u_{0,\varepsilon})\partial_x u_{0,\varepsilon}\right\|{}_{L^1(\mathbb{R})}\\ \le&\varepsilon\left\|\partial_{xx}^2 u_{0,\varepsilon}\right\|{}_{L^1(\mathbb{R})}+\left\|f'(u_{0,\varepsilon})\right\|{}_{L^\infty(\mathbb{R})}\left\|\partial_x u_{0,\varepsilon}\right\|{}_{L^1(\mathbb{R})}\\ \le&C+TV(u_0)L. \end{aligned} $$

Proof (of (6.64))

Let \(\{u_{\varepsilon _k}\}_{k\in \mathbb {N}}\) be a subsequence of {uε}ε>0. Since \(\{u_{\varepsilon _k}\}_{k\in \mathbb {N}}\) is bounded in \(L^\infty ((0,\infty )\times \mathbb {R})\cap BV((0,T)\times \mathbb {R}),\,T>0,\) (see Lemmas 6.4.2, 6.4.3, 6.4.4, and 6.4.5), there exists a function \(u\in L^\infty ((0,\infty )\times \mathbb {R})\cap BV((0,T)\times \mathbb {R}),\,T>0,\) and a subsequence \(\{u_{\varepsilon _{k_h}}\}_{h\in \mathbb {N}}\) such that

$$\displaystyle \begin{aligned} u_{\varepsilon_{k_h}} \longrightarrow u\qquad \text{in}\ L^p_{loc}((0,\infty)\times\mathbb{R})\ \text{and a.e. in }\ (0,\infty)\times\mathbb{R}. \end{aligned}$$

We claim that u is the unique entropy solution of (6.62). Let \(\eta \in C^2(\mathbb {R})\) be a convex entropy with flux q defined by q′ = η′f′. Multiplying (6.61) by \(\eta '(u_{\varepsilon _{k_h}})\) we get

$$\displaystyle \begin{aligned} \partial_t \eta(u_{\varepsilon_{k_h}})+\partial_x q(u_{\varepsilon_{k_h}})=\varepsilon_{k_h}\partial_{xx}^2\eta(u_{\varepsilon_{k_h}})-\varepsilon_{k_h}\eta''(u_{\varepsilon_{k_h}})\left(\partial_x u_{\varepsilon_{k_h}}\right)^2\le\varepsilon_{k_h}\partial_{xx}^2\eta(u_{\varepsilon_{k_h}}). \end{aligned} $$
For every nonnegative test function \(\varphi \in C^\infty (\mathbb {R}^2)\) with compact support we have that

$$\displaystyle \begin{aligned} \int_0^\infty\!\!\!\int_{\mathbb{R}}\left(\eta(u_{\varepsilon_{k_h}})\partial_t\varphi +q(u_{\varepsilon_{k_h}})\partial_x \varphi\right)dtdx+&\int_{\mathbb{R}} \eta(u_{0,\varepsilon_{k_h}}(x))\varphi(0,x)dx\\ \ge -&\varepsilon_{k_h}\int_0^\infty\!\!\!\int_{\mathbb{R}} \eta(u_{\varepsilon_{k_h}})\partial_{xx}^2 \varphi dtdx.\end{aligned} $$

As h → ∞, the Dominated Convergence Theorem gives

$$\displaystyle \begin{aligned} \int_0^\infty\!\!\!\int_{\mathbb{R}}\left(\eta(u)\partial_t\varphi+q(u)\partial_x \varphi\right)dtdx+\int_{\mathbb{R}} \eta(u_0(x))\varphi(0,x)dx\ge 0,\end{aligned} $$

proving that u is the unique entropy solution of (6.62).

Finally, thanks to (6.66), (6.64) is proved. □

6.4.2 Error Estimate

In this section we complete the proof of Theorem 6.4.1 showing (6.65).

Let t, ε > 0. We “double the variables”, using (τ, x) for (6.62) and (s, y) for (6.61). We have

$$\displaystyle \begin{aligned} \partial_\tau|&u(\tau,x)-u_\varepsilon(s,y)|\\&+\partial_x[\mathrm{sign}\left(u(\tau,x)-u_\varepsilon(s,y)\right)(f(u(\tau,x))-f(u_\varepsilon(s,y)))]\le 0, \end{aligned} $$
(6.70)

and

$$\displaystyle \begin{aligned} \partial_s|&u(\tau,x)-u_\varepsilon(s,y)|\\&+\partial_y[\mathrm{sign}\left(u(\tau,x)-u_\varepsilon(s,y)\right)(f(u(\tau,x))-f(u_\varepsilon(s,y)))]\\ \le& \varepsilon\partial_{yy}^2|u(\tau,x)-u_\varepsilon(s,y)|, \end{aligned} $$
(6.71)

in the sense of distributions. Let \(w\in C^\infty (\mathbb {R})\) be a nonnegative function with compact support such that

$$\displaystyle \begin{aligned} \left\|w\right\|{}_{L^1(\mathbb{R})}=1. \end{aligned}$$

We define

$$\displaystyle \begin{aligned} w_\alpha(\xi)=\frac{1}{\alpha}w\left(\frac{\xi}{\alpha}\right),\qquad \xi\in\mathbb{R},\>\>\alpha>0. \end{aligned}$$
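For later use we record the elementary scaling identities, which follow from the change of variables ξ = αη and will be used repeatedly below:

$$\displaystyle \begin{aligned} \left\|w_\alpha\right\|{}_{L^1(\mathbb{R})}=1,\qquad \int_{\mathbb{R}}|\xi|\,w_\alpha(\xi)\,d\xi=\alpha\int_{\mathbb{R}}|\xi|\,w(\xi)\,d\xi,\qquad \left\|w_\alpha^{\prime}\right\|{}_{L^1(\mathbb{R})}=\frac{1}{\alpha}\left\|w'\right\|{}_{L^1(\mathbb{R})}. \end{aligned}$$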

By testing (6.70) with the function

$$\displaystyle \begin{aligned} (\tau,x)\longmapsto w_\beta(\tau-s)w_\alpha(x-y),\qquad \alpha,\,\beta>0, \end{aligned}$$

we get

$$\displaystyle \begin{aligned} \int_0^t\int_{\mathbb{R}}&\Big[|u(\tau,x)-u_\varepsilon(s,y)|w_\beta'(\tau-s)w_\alpha(x-y)\\ &+\mathrm{sign}\left(u(\tau,x)-u_\varepsilon(s,y)\right)(f(u(\tau,x))-f(u_\varepsilon(s,y)))\times\\ &\qquad \qquad \qquad \qquad \qquad \qquad \times w_\beta(\tau-s)w_\alpha^{\prime}(x-y)\Big]d\tau dx\\ -\int_{\mathbb{R}}& |u(t,x)-u_\varepsilon(s,y)|w_\beta(t-s)w_\alpha(x-y) dx\\ +\int_{\mathbb{R}}& |u_0(x)-u_\varepsilon(s,y)|w_\beta(-s)w_\alpha(x-y) dx\ge 0, \end{aligned} $$

that is, after integrating with respect to s over (0, t) and with respect to y over the real line, and rearranging,

$$\displaystyle \begin{aligned} &\int_0^t\int_{\mathbb{R}}\int_{\mathbb{R}} |u(t,x)-u_\varepsilon(s,y)|w_\beta(t-s)w_\alpha(x-y) dsdxdy\\ &\le\int_0^t\int_{\mathbb{R}}\int_{\mathbb{R}} |u_0(x)-u_\varepsilon(s,y)|w_\beta(-s)w_\alpha(x-y) dsdxdy\\ &\>\>+\int_0^t\int_0^t\int_{\mathbb{R}}\int_{\mathbb{R}}\Big[|u(\tau,x)-u_\varepsilon(s,y)|w_\beta'(\tau-s)w_\alpha(x-y)\\ &\>\>\>\>\>+\mathrm{sign}\left(u(\tau,x)-u_\varepsilon(s,y)\right)(f(u(\tau,x))-f(u_\varepsilon(s,y)))\times\\ &\qquad \qquad \qquad \qquad \qquad \qquad \times w_\beta(\tau-s)w_\alpha^{\prime}(x-y)\Big]dsd\tau dxdy. \end{aligned} $$
(6.72)

By testing (6.71) with the function

$$\displaystyle \begin{aligned} (s,y)\longmapsto w_\beta(\tau-s)w_\alpha(x-y),\qquad \alpha,\,\beta>0,\end{aligned} $$

we get

$$\displaystyle \begin{aligned} -\int_0^t\int_{\mathbb{R}}&\Big[|u(\tau,x)-u_\varepsilon(s,y)|w_\beta'(\tau-s)w_\alpha(x-y)\\ &+\mathrm{sign}\left(u(\tau,x)-u_\varepsilon(s,y)\right)(f(u(\tau,x))-f(u_\varepsilon(s,y)))\times\\ &\qquad \qquad \qquad \qquad \qquad \qquad \times w_\beta(\tau-s)w_\alpha^{\prime}(x-y)\Big]dsdy\\ -\int_{\mathbb{R}}& |u(\tau,x)-u_\varepsilon(t,y)|w_\beta(\tau-t)w_\alpha(x-y) dy\\ +\int_{\mathbb{R}}& |u(\tau,x)-u_{0,\varepsilon}(y)|w_\beta(\tau)w_\alpha(x-y) dy\\ \ge&-\varepsilon\int_0^t\int_{\mathbb{R}} |u(\tau,x)-u_\varepsilon(s,y)|w_\beta(\tau-s)w_\alpha^{\prime\prime}(x-y)dsdy,\end{aligned} $$

that is, after integrating with respect to τ over (0, t) and with respect to x over the real line, and rearranging,

$$\displaystyle \begin{aligned} &\int_0^t\int_{\mathbb{R}}\int_{\mathbb{R}} |u(\tau,x)-u_\varepsilon(t,y)|w_\beta(\tau-t)w_\alpha(x-y) d\tau dxdy\\ &\le\int_0^t\int_{\mathbb{R}}\int_{\mathbb{R}} |u(\tau,x)-u_{0,\varepsilon}(y)|w_\beta(\tau)w_\alpha(x-y)d\tau dxdy\\ &\>\>-\int_0^t\int_0^t\int_{\mathbb{R}}\int_{\mathbb{R}}\Big[|u(\tau,x)-u_\varepsilon(s,y)|w_\beta'(\tau-s)w_\alpha(x-y)\\ &\>\>\>\>\>\>+\mathrm{sign}\left(u(\tau,x)-u_\varepsilon(s,y)\right)(f(u(\tau,x))-f(u_\varepsilon(s,y)))\times\\ &\qquad \qquad \qquad \qquad \qquad \qquad \times w_\beta(\tau-s)w_\alpha^{\prime}(x-y)\Big]dsd\tau dxdy\\ &\>\>+\varepsilon\int_0^t\int_0^t\int_{\mathbb{R}}\int_{\mathbb{R}} |u(\tau,x)-u_\varepsilon(s,y)|w_\beta(\tau-s)w_\alpha^{\prime\prime}(x-y)dsd\tau dxdy. \end{aligned} $$
(6.73)

We add (6.72) and (6.73)

$$\displaystyle \begin{aligned} &\int_0^t\int_{\mathbb{R}}\int_{\mathbb{R}} |u(t,x)-u_\varepsilon(s,y)|w_\beta(t-s)w_\alpha(x-y) dsdxdy\\ &\>\>+\int_0^t\int_{\mathbb{R}}\int_{\mathbb{R}} |u(\tau,x)-u_\varepsilon(t,y)|w_\beta(\tau-t)w_\alpha(x-y) d\tau dxdy\\ &\le\int_0^t\int_{\mathbb{R}}\int_{\mathbb{R}} |u_0(x)-u_\varepsilon(s,y)|w_\beta(-s)w_\alpha(x-y) dsdxdy\\ &\>\>+\int_0^t\int_{\mathbb{R}}\int_{\mathbb{R}} |u(\tau,x)-u_{0,\varepsilon}(y)|w_\beta(\tau)w_\alpha(x-y)d\tau dxdy\\ &\>\>+\varepsilon\int_0^t\int_0^t\int_{\mathbb{R}}\int_{\mathbb{R}} |u(\tau,x)-u_\varepsilon(s,y)|w_\beta(\tau-s)w_\alpha^{\prime\prime}(x-y)dsd\tau dxdy \end{aligned} $$

and send β → 0, obtaining

$$\displaystyle \begin{aligned} I_1\le I_2+I_3, \end{aligned} $$
(6.74)

where

$$\displaystyle \begin{aligned} I_1&=\int_{\mathbb{R}}\int_{\mathbb{R}} |u(t,x)-u_\varepsilon(t,y)|w_\alpha(x-y)\, dxdy,\\ I_2&=\int_{\mathbb{R}}\int_{\mathbb{R}} |u_0(x)-u_{0,\varepsilon}(y)|w_\alpha(x-y)\, dxdy,\\ I_3&=\frac{\varepsilon}{2}\int_0^t\int_{\mathbb{R}}\int_{\mathbb{R}} |u(\tau,x)-u_\varepsilon(\tau,y)|w_\alpha^{\prime\prime}(x-y)\, d\tau dxdy. \end{aligned} $$

We estimate I1 and I2 in the following way (see (6.63) and Lemma 6.4.4)

$$\displaystyle \begin{aligned} I_1\ge&\int_{\mathbb{R}}\int_{\mathbb{R}}\Big( |u(t,x)-u_\varepsilon(t,x)|-|u_\varepsilon(t,x)-u_\varepsilon(t,y)|\Big)w_\alpha(x-y) dxdy\\ &=\int_{\mathbb{R}} |u(t,x)-u_\varepsilon(t,x)|dx-\int_{\mathbb{R}}\int_{\mathbb{R}} |u_\varepsilon(t,y+\xi)-u_\varepsilon(t,y)|w_\alpha(\xi) d\xi dy\\ \ge&\left\|u(t,\cdot)-u_\varepsilon(t,\cdot)\right\|{}_{L^1(\mathbb{R})}-\int_{\mathbb{R}}\left|\int_0^\xi\int_{\mathbb{R}}|\partial_x u_\varepsilon (t, y+\sigma)|dyd\sigma\right|w_\alpha(\xi)d\xi\\ &=\left\|u(t,\cdot)-u_\varepsilon(t,\cdot)\right\|{}_{L^1(\mathbb{R})}- \left\|\partial_xu_\varepsilon(t,\cdot)\right\|{}_{L^1(\mathbb{R})} \int_{\mathbb{R}}|\xi|w_\alpha(\xi)d\xi\\ \ge&\left\|u(t,\cdot)-u_\varepsilon(t,\cdot)\right\|{}_{L^1(\mathbb{R})}-\alpha TV(u_0)\int_{\mathbb{R}} |\xi|w(\xi)d\xi,\\ I_2\le&\int_{\mathbb{R}}\int_{\mathbb{R}}\Big(|u_0(x)-u_{0,\varepsilon}(x)|+ |u_{0,\varepsilon}(x)-u_{0,\varepsilon}(y)|\Big)w_\alpha(x-y) dxdy\\ &=\int_{\mathbb{R}} |u_0(x)-u_{0,\varepsilon}(x)|dx+\int_{\mathbb{R}}\int_{\mathbb{R}} |u_{0,\varepsilon}(y+\xi)-u_{0,\varepsilon}(y)|w_\alpha(\xi) d\xi dy\\ \le&\left\|u_0-u_{0,\varepsilon}\right\|{}_{L^1(\mathbb{R})}+\int_{\mathbb{R}}\left|\int_0^\xi\int_{\mathbb{R}}|\partial_x u_{0,\varepsilon} (y+\sigma)|dyd\sigma\right|w_\alpha(\xi)d\xi\\ &=\left\|u_0-u_{0,\varepsilon}\right\|{}_{L^1(\mathbb{R})}+ \left\|\partial_x u_{0,\varepsilon}\right\|{}_{L^1(\mathbb{R})} \int_{\mathbb{R}}|\xi|w_\alpha(\xi)d\xi\\ \le&\left\|u_0-u_{0,\varepsilon}\right\|{}_{L^1(\mathbb{R})}+\alpha TV(u_0)\int_{\mathbb{R}} |\xi|w(\xi)d\xi. \end{aligned} $$

We have to estimate I3. Thanks to (6.64) we know

$$\displaystyle \begin{aligned} I_3=\lim_{\mu\to0}I_{3,\mu},\end{aligned} $$

where

$$\displaystyle \begin{aligned} I_{3,\mu}=\frac{\varepsilon}{2}\int_0^t\int_{\mathbb{R}}\int_{\mathbb{R}} |u_\mu(s,x)-u_\varepsilon(s,y)|w_\alpha^{\prime\prime}(x-y)dsdxdy,\qquad \mu>0.\end{aligned} $$

Since (see Lemma 6.4.4)

$$\displaystyle \begin{aligned} I_{3,\mu}\le&\frac{\varepsilon}{2}\int_0^t\int_{\mathbb{R}}\int_{\mathbb{R}}\big( |\partial_x u_\mu(s,x)|+|\partial_yu_\varepsilon(s,y)|\big)|w_\alpha^{\prime}(x-y)|dsdxdy\\ &=\frac{\varepsilon}{2}\int_0^t\int_{\mathbb{R}}\int_{\mathbb{R}}\big( |\partial_x u_\mu(s,y+\xi)|+|\partial_yu_\varepsilon(s,y)|\big)|w_\alpha^{\prime}(\xi)|dsd\xi dy\\ &=\frac{\varepsilon}{2}\int_0^t\int_{\mathbb{R}}\left( \left\|\partial_x u_\mu(s,\cdot)\right\|{}_{L^1(\mathbb{R})}+\left\|\partial_yu_\varepsilon(s,\cdot)\right\|{}_{L^1(\mathbb{R})}\right)|w_\alpha^{\prime}(\xi)|dsd\xi \\ \le &\varepsilon t TV(u_{0})\left\|w_\alpha^{\prime}\right\|{}_{L^1(\mathbb{R})}= \frac{\varepsilon t}{\alpha} TV(u_0)\left\|w'\right\|{}_{L^1(\mathbb{R})},\end{aligned} $$

we have

$$\displaystyle \begin{aligned} I_3\le \frac{\varepsilon t}{\alpha} TV(u_0)\left\|w'\right\|{}_{L^1(\mathbb{R})}.\end{aligned} $$

Using the estimates on I1, I2, and I3 in (6.74) we have

$$\displaystyle \begin{aligned} \left\|u(t,\cdot)-u_\varepsilon(t,\cdot)\right\|{}_{L^1(\mathbb{R})}\le &\left\|u_0-u_{0,\varepsilon}\right\|{}_{L^1(\mathbb{R})}\\ &+\left(\alpha+\frac{\varepsilon t}{\alpha}\right)TV(u_0)\left(2\int_{\mathbb{R}} |\xi|w(\xi)d\xi+\left\|w'\right\|{}_{L^1(\mathbb{R})}\right).\end{aligned} $$

Since the minimum of the map

$$\displaystyle \begin{aligned} \alpha\longmapsto \alpha+\frac{\varepsilon t}{\alpha}\end{aligned} $$

is attained at \(\sqrt {\varepsilon t}\), where the map takes the value \(2\sqrt {\varepsilon t}\), the choice \(\alpha =\sqrt {\varepsilon t}\) gives (6.65) with \(c=2\left (2\int _{\mathbb {R}} |\xi |w(\xi )d\xi +\left \|w'\right \|{ }_{L^1(\mathbb {R})}\right )\). □