1 Introduction

In this article we prove existence of an adapted entropy solution in the sense of Audusse and Perthame [6], via a convergence proof for a Godunov type finite difference scheme, to the following Cauchy problem:

$$\begin{aligned} \partial _t u+\partial _x A(u,x)=0\qquad \quad \text{ for } (x,t)\in \mathbb {R}\times (0,T)=: \Pi _T,\end{aligned}$$
(1)
$$\begin{aligned} u(x,0)=u_0(x)\qquad \quad \text{ for } x\in \mathbb {R}. \end{aligned}$$
(2)

The partial differential equation appearing above is a scalar one-dimensional conservation law whose flux \(A(u,x)\) has a spatial dependence that may have infinitely many spatial discontinuities. In contrast to all but a few previous papers on conservation laws with discontinuous flux that address the uniqueness question, we make no assumption about the existence of traces, and so the set of spatial flux discontinuities could have accumulation points.

Scalar conservation laws with discontinuous flux have a number of applications including vehicle traffic flow with rapid transitions in road conditions [11], sedimentation in clarifier-thickeners [8, 10], oil recovery simulation [20], two phase flow in porous media [4], and granular flow [16].

Even in the absence of spatial flux discontinuities, solutions of conservation laws develop discontinuities (shocks). Thus we seek weak solutions, which are bounded measurable functions u satisfying (1) in the sense of distributions. Closely related to the presence of shocks is the problem of nonuniqueness. Weak solutions are not generally unique without an additional condition or conditions, so-called entropy conditions. For the classical case of a conservation law with a spatially independent flux

$$\begin{aligned} u_t + f(u)_x = 0, \end{aligned}$$
(3)

one requires that the Kružkov entropy inequalities hold in the sense of distributions:

$$\begin{aligned} \partial _t \left| u-k\right| + \partial _x \{sgn(u-k)(f(u)-f(k)) \} \le 0, \quad \forall k \in \mathbb {R}, \end{aligned}$$
(4)

and then uniqueness follows from (4).

There are two main difficulties that arise which are not present in the classical case (3). The first problem is existence, the new difficulty being that a TV bound for the solution may not be available even for BV initial data, due to the counterexamples given in [1, 12]. Interestingly, a TV bound for the solution is possible near the interface for non-uniformly convex fluxes (see Ghoshal [13]). Several methods have been used to deal with the lack of a spatial variation bound, the main ones being the so-called singular mapping, compensated compactness, and a local variation bound. In this paper we employ the singular mapping approach, applied to approximations generated by a Godunov type difference scheme. The singular mapping technique yields a TV bound for a transformed (via the singular mapping) quantity. Once the TV bound of the transformed quantity is established we can pass to the limit and obtain a solution satisfying the adapted entropy inequality. Showing that the limit of the numerical approximations satisfies the adapted entropy inequalities is not straightforward, due to the presence of infinitely many flux discontinuities.

The second problem is uniqueness. The usual Kružkov entropy inequalities do not apply to the discontinuous case. Also, it turns out that there are many reasonable notions of entropy solution [3, 5]. One must consider the application in order to decide on which definition of entropy solution to use.

There have been many papers on the subject of scalar conservation laws with spatially discontinuous flux over the past several decades. Most papers on this subject that have addressed the uniqueness question have assumed a finite number of flux discontinuities. Often the case of a single flux discontinuity is addressed, with the understanding that the results are readily extended to the case of any finite number of flux discontinuities. The admissibility condition has usually boiled down to a so-called interface condition (in addition to the Rankine-Hugoniot condition) that involves the traces of the solution across the spatial flux discontinuity. Often the interface condition consists of one or more inequalities, and is typically derived from some modified version of the classical Kružkov entropy inequality.

When there are only finitely many flux discontinuities, existence of the required traces is guaranteed, assuming that \(u \mapsto A(u,x)\) is genuinely nonlinear [17, 23]. However if there are infinitely many flux discontinuities, and the subset of \(\mathbb {R}\) where they occur has one or more accumulation points, these existence results for traces do not apply. Thus a definition of entropy solution which does not refer to traces is of great interest.

A method using so-called adapted entropy inequalities, which provides a notion of entropy solution that does not require the existence of traces, was first developed in [7] and then extended in [6]. For the conservation law \(u_t + A(u,x)_x = 0\) with \(x \mapsto A(u,x)\) smooth, the classical Kružkov inequality (4) becomes

$$\begin{aligned}&\partial _t \left| u-k\right| + \partial _x \{sgn(u-k)(A(u,x)-A(k,x)) \} \nonumber \\&\qquad + sgn(u-k)\partial _x A(k,x) \le 0, \quad \forall k \in \mathbb {R}. \end{aligned}$$
(5)

Due to the term \(sgn(u-k)\partial _x A(k,x)\), this definition does not make sense without modification when one tries to extend it to the case of the discontinuous flux \(A(u,x)\) considered here.

The adapted entropy approach consists of replacing the constants \(k \in \mathbb {R}\) by functions \(k_\alpha \) defined by the equations

$$\begin{aligned} A(k_\alpha (x),x) = \alpha , \quad x\in \mathbb {R}. \end{aligned}$$

With this approach the troublesome term \(sgn(u-k)\partial _x A(k,x)\) is not present, and the definition of adapted entropy solution is

$$\begin{aligned} \partial _t \left| u-k_{\alpha }\right| + \partial _x \{sgn(u-k_{\alpha })(A(u,x)-\alpha ) \} \le 0. \end{aligned}$$
(6)

Baiti and Jenssen [7] used this approach for the closely related problem where \(u\mapsto A(u,x)\) is strictly increasing. They proved both existence and uniqueness, with the additional assumption that the flux has the form \(A(u,x) = {\tilde{A}}(u,v(x))\). Audusse and Perthame [6] proved uniqueness both for the unimodal case considered in this paper and for the case where \(u\mapsto A(u,x)\) is strictly increasing. The existence question was left open.

Recently there has been renewed interest in the existence question for problems where the Audusse–Perthame uniqueness theory applies. Piccoli and Tournus [19] proved existence for the problem where \(u\mapsto A(u,x)\) is strictly increasing, and without assuming the special functional form \(A(u,x) = {\tilde{A}}(u,v(x))\). This was accomplished under the simplifying assumption that \(u \mapsto A(u,x)\) is concave. Towers [22] extended the result of [19] to the case where \(u \mapsto A(u,x)\) is not required to be concave. Panov [18] proved existence of an adapted entropy solution, under assumptions that include our setup, by a measure-valued solution approach. The approach of [18] is quite general but more abstract than ours, and is not associated with a numerical algorithm.

The Godunov type scheme of this paper is a generalization of the scheme developed in [2] for the case where the flux has the form

$$\begin{aligned} A(u,x) = g(u) (1-H(x)) + f(u) H(x), \end{aligned}$$
(7)

where each of g, f is unimodal and \(H(\cdot )\) denotes the Heaviside function. This is a so-called two-flux problem, where there is a single spatial flux discontinuity. The authors of [2] proposed a very simple interface flux that extends the classical Godunov flux so that it properly handles a single flux discontinuity. The singular mapping technique is used to prove that the Godunov approximations converge to a weak solution of the conservation law. With an additional regularity assumption about the limit solution (the solution is assumed to be continuous except for finitely many Lipschitz curves in \(\mathbb {R}\times \mathbb {R}_+\)), they also prove uniqueness.

The scheme and results of the present paper improve and extend those of Adimurthi et al. [2]. By adopting the Audusse–Perthame definition of entropy solution [6], and then invoking the uniqueness result of [6], we are able to remove the regularity assumption employed in [2], and also the restriction to finitely many flux discontinuities. Moreover, the scheme of [2] is defined on a nonstandard spatial grid that is specific to the case of a single flux discontinuity, and would be inconvenient from a programming viewpoint for the case of multiple flux discontinuities. Our scheme uses a standard spatial grid, and in fact our algorithm does not require that flux discontinuities be specifically located, identified, or processed in any special way. Our approach is based on the observation that it is possible to simply apply the Godunov interface flux at every grid cell boundary. At cell boundaries where there is no flux discontinuity, the interface flux automatically reverts to the classical Godunov flux, as desired. This not only makes it possible to use a standard spatial grid, but also simplifies the analysis of the scheme.

The remainder of the paper is organized as follows. In Sect. 2 we specify the assumptions on the data of the problem, give the definition of adapted entropy solution, and state our main theorem, Theorem 2.5. In Sect. 3 we give the details of the Godunov numerical scheme, and prove convergence (along a subsequence) of the resulting approximations. In Sect. 4 we show that a (subsequential) limit solution guaranteed by our convergence theorem is an adapted entropy solution in the sense of Definition 2.1, completing the proof of the main theorem.

2 Main theorem

We assume that the flux function \(A:\mathbb {R}\times \mathbb {R}\rightarrow \mathbb {R}_+\) satisfies the following conditions:

H-1:

For every \(r>0\)

$$\begin{aligned} |A(u_1,x)-A(u_2,x)|\le C|u_1-u_2| \text{ for } u_1,u_2\in [-r,r] \end{aligned}$$

where the constant \(C=C(r)\) is independent of x.

H-2:

There is a BV function \(a:\mathbb {R}\rightarrow \mathbb {R}\) and a continuous function \(R:\mathbb {R}\rightarrow \mathbb {R}^+\) such that

$$\begin{aligned} |A(u,x)-A(u,y)|\le R(u)|a(x)-a(y)|. \end{aligned}$$
H-3:

For each \(x\in \mathbb {R}\) the function \(u \mapsto A(u,x)\) is unimodal, meaning that there is \(u_{M}(x)\in \mathbb {R}\) such that \(A(u_M(x),x)=0\) and \(A(\cdot ,x)\) is decreasing on \((-\infty ,u_M(x)]\) and increasing on \([u_M(x),\infty )\). We further assume that there is a continuous function \(\gamma : [0,\infty ) \rightarrow [0,\infty )\), which is strictly increasing with \(\gamma (0)=0\), \(\gamma (+\infty ) = +\infty \), and such that

$$\begin{aligned} \begin{aligned}&A(u,x) \ge \gamma (u-u_M(x)) \text { for all } x\in \mathbb {R}\text { and } u \in [u_M(x),\infty ),\\&A(u,x) \ge \gamma (-(u-u_M(x))) \text { for all } x\in \mathbb {R}\text { and } u \in (-\infty ,u_M(x)]. \end{aligned} \end{aligned}$$
(8)
H-4:

\(u_M \in BV(\mathbb {R})\).

Above we have used the notation \(\text {BV}(\mathbb {R})\) to denote the set of functions of bounded variation on \(\mathbb {R}\), i.e., those functions \(\rho :\mathbb {R}\rightarrow \mathbb {R}\) for which

$$\begin{aligned} {{\,\mathrm{TV}\,}}(\rho ) := \sup \left\{ \sum _{k=1}^K \left| \rho (\xi _k) - \rho (\xi _{k-1})\right| \right\} < \infty , \end{aligned}$$

where the \(\sup \) extends over all \(K\ge 1\) and all partitions \(\{\xi _0< \xi _1< \ldots < \xi _K \}\) of \(\mathbb {R}\).
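To illustrate the definition, the inner sum is easy to evaluate on any fixed finite partition, and every such sum is a lower bound for \({{\,\mathrm{TV}\,}}(\rho )\). A minimal Python sketch (not part of the analysis; the function name and sample data are our own choices):

```python
def tv_on_partition(rho, xs):
    # Inner sum of the TV definition on the partition xs = [xi_0 < ... < xi_K].
    # Any such sum is a lower bound for TV(rho); the supremum over all
    # partitions equals the total variation.
    vals = [rho(x) for x in xs]
    return sum(abs(b - a) for a, b in zip(vals, vals[1:]))

# For a monotone function, the sum telescopes to |rho(xi_K) - rho(xi_0)|:
assert tv_on_partition(lambda x: x ** 3, [-1.0, -0.5, 0.0, 0.5, 1.0]) == 2.0
```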

By Assumption H-3, for each \(\alpha \ge 0\) there exist two functions \(k_\alpha ^+(x)\in [u_M(x),\infty )\) and \(k_\alpha ^-(x)\in (-\infty ,u_M(x)]\) uniquely determined from the following equations:

$$\begin{aligned} A(k_\alpha ^+(x),x)= A(k_\alpha ^-(x),x)=\alpha . \end{aligned}$$
(9)

Related to the flux \(A(\cdot ,\cdot )\) is the so-called singular mapping:

$$\begin{aligned} \Psi (u,x):=\int \limits _{u_{M}(x)}^{u}\left| \frac{\partial }{\partial u}A(\theta ,x)\right| d\theta . \end{aligned}$$
(10)

It is clear that for each \(x\in \mathbb {R}\) the mapping \(u \mapsto \Psi (u,x)\) is strictly increasing. Therefore for each \(x\in \mathbb {R}\) the map \(u\mapsto \Psi (u,x)\) is invertible and we denote the inverse map by \(\alpha (u,x)\). Notice that \(\alpha (\cdot ,\cdot )\) and \(\Psi (\cdot ,\cdot )\) satisfy the following relation

$$\begin{aligned} \Psi (\alpha (u,x),x)=u=\alpha (\Psi (u,x),x) \text{ for } \text{ all } x\in \mathbb {R}. \end{aligned}$$
(11)

Also, due to Assumption H-3, (10) is equivalent to \(\Psi (u,x)=sgn(u-u_M(x))A(u,x)\).
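The identity \(\Psi (u,x)=sgn(u-u_M(x))A(u,x)\) makes the singular mapping easy to evaluate for a concrete unimodal flux. The following sketch uses a flux in the style of Example 2.4 below with a hypothetical jump in \(u_M\) at \(x=0\) (the specific choices are ours, for illustration only), and checks the strict monotonicity of \(u\mapsto \Psi (u,x)\):

```python
import math

def u_M(x):
    # hypothetical BV minimizer with a jump at x = 0, for illustration
    return 1.0 if x >= 0 else 0.0

def A(u, x):
    # unimodal flux in the style of Example 2.4: A(u_M(x), x) = 0
    return (u - u_M(x)) ** 2

def Psi(u, x):
    # singular mapping (10); for unimodal A it equals sgn(u - u_M(x)) * A(u, x)
    if u == u_M(x):
        return 0.0
    return math.copysign(A(u, x), u - u_M(x))

# u -> Psi(u, x) is strictly increasing for each fixed x:
us = [-2.0, -1.0, 0.0, 0.5, 1.0, 1.5, 2.0]
vals = [Psi(u, 0.5) for u in us]
assert all(a < b for a, b in zip(vals, vals[1:]))
```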

Definition 2.1

A function \(u \in L^{\infty }(\Pi _T) \cap C([0,T]:L^1_{{{\,\mathrm{loc}\,}}}(\mathbb {R}))\) is an adapted entropy solution of the Cauchy problem (1)–(2) if it satisfies the following adapted entropy inequality in the sense of distributions:

$$\begin{aligned} \partial _t|u-k_\alpha ^\pm (x)|+\partial _x \left[ sgn(u-k_\alpha ^\pm (x))(A(u,x)-\alpha )\right] \le 0 \end{aligned}$$
(12)

for all \(\alpha \ge 0\) or equivalently,

$$\begin{aligned} \begin{aligned}&\int \limits _{\mathbb {R}_+}\int \limits _{\mathbb {R}}\frac{\partial \phi }{\partial t}|u(x,t)-k^{\pm }_{\alpha }(x)|\,dxdt\\&\quad +\int \limits _{\mathbb {R}_+}\int \limits _{\mathbb {R}}\frac{\partial \phi }{\partial x}sgn(u(x,t)-k^{\pm }_{\alpha }(x))(A(u(x,t),x)-\alpha )\,dxdt \\&\quad + \int \limits _{\mathbb {R}} \left| u_0(x) -k^{\pm }_{\alpha }(x)\right| \phi (x,0) \,dx \ge 0 \end{aligned} \end{aligned}$$
(13)

for any \(0\le \phi \in C_c^{\infty }(\mathbb {R}\times [0,\infty ))\).

For uniqueness and stability we will rely on the following result by Panov.

Theorem 2.2

(Uniqueness Theorem [18]) Let u, v be adapted entropy solutions in the sense of Definition 2.1, with corresponding initial data \(u_0\), \(v_0\), and assume that Assumptions (H-1)–(H-4) hold. Then for a.e. \(t\in [0,T]\) and any \(r>0\) we have

$$\begin{aligned} \int \limits _{\left| x\right| \le r}\left| \alpha (u(x,t),x)-\alpha (v(x,t),x)\right| \,dx\le \int \limits _{\left| x\right| \le r+L_1t}\left| \alpha (u_0(x),x)-\alpha (v_0(x),x)\right| \,dx \end{aligned}$$
(14)

where \(L_1:=\sup \{{\left| \partial _u A(u,x)\right| };\,x\in \mathbb {R},\left| u\right| \le \max (\Vert u_0\Vert _{L^{\infty }},\Vert v_0\Vert _{L^{\infty }})\}\) and \(\alpha \) is as in (11).

Although Theorem 2.2 is not stated in [18], it essentially follows from the techniques used in [18, Theorem 2] and Kružkov’s uniqueness proof [15] for scalar conservation laws. For the sake of completeness we give a sketch of the proof of Theorem 2.2 in “Appendix”. The main reason to rely on Theorem 2.2 instead of the uniqueness result in [6] is to avoid the following assumption [6, Hypothesis (H1); page 5] on the flux:

  • A(u,x) is continuous at all points of \(\mathbb {R}\times \mathbb {R}\setminus \mathcal {N}\) where \(\mathcal {N}\) is a closed set of zero measure.

Audusse and Perthame [6] present the following two examples to which their uniqueness theorem applies.

Example 2.3

$$\begin{aligned} A(u,x) = S(x)u^2, \quad S(x) >0. \end{aligned}$$

In this example \(u_M(x) = 0\) for all \(x \in \mathbb {R}\). Assumptions (H-1)–(H-4) are satisfied if \(S \in BV(\mathbb {R})\), and \(S(x) \ge \epsilon \) for some \(\epsilon >0\).
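For Example 2.3 the functions \(k_\alpha ^{\pm }\) of (9) are explicit: solving \(S(x)k^2=\alpha \) gives \(k_\alpha ^{\pm }(x)=\pm \sqrt{\alpha /S(x)}\). A quick sketch verifying the defining equation, with a hypothetical piecewise-constant choice of S (ours, for illustration):

```python
import math

def S(x):
    # hypothetical coefficient: BV and bounded below by epsilon = 1
    return 2.0 if x >= 0 else 1.0

def A(u, x):
    return S(x) * u ** 2  # Example 2.3 flux; u_M(x) = 0

def k_alpha(alpha, x, sign):
    # the two branches of A(., x)^{-1}: sign = +1 gives k_alpha^+, -1 gives k_alpha^-
    return sign * math.sqrt(alpha / S(x))

# the defining equation (9): A(k_alpha^{pm}(x), x) = alpha
for x in (-1.0, 1.0):
    for sign in (1.0, -1.0):
        assert abs(A(k_alpha(4.0, x, sign), x) - 4.0) < 1e-12
```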

Example 2.4

$$\begin{aligned} A(u,x) = (u-u_M(x))^2. \end{aligned}$$

Assumptions (H-1)–(H-4) are satisfied for this example also if we assume that \(u_M \in BV(\mathbb {R}).\)

Our main theorem is

Theorem 2.5

Assume that the flux function A satisfies (H-1)–(H-4), and that \(u_0\in L^{\infty }(\mathbb {R})\). Then as the mesh size \(\Delta \rightarrow 0\), the approximations \(u^{\Delta }\) generated by the Godunov scheme described in Sect. 3 converge in \(L^1_{{{\,\mathrm{loc}\,}}}(\Pi _T)\) and pointwise a.e. in \(\Pi _T\) to the unique adapted entropy solution \(u \in L^{\infty }(\Pi _T) \cap C([0,T]:L^1_{{{\,\mathrm{loc}\,}}}(\mathbb {R}))\) of the Cauchy problem (1)–(2) with initial data \(u_0\).

3 Godunov scheme and compactness

For \(\Delta x>0\) and \(\Delta t>0\) consider equidistant spatial grid points \(x_j:=j\Delta x\) for \(j\in \mathbb {Z}\) and temporal grid points \(t^n:=n\Delta t\) for integers \(0 \le n\le N\). Here N is the integer such that \(T \in [t^N,t^{N+1})\). Let \(\lambda :=\Delta t/\Delta x\). We fix the notation \(\chi _j(x)\) for the indicator function of \(I_j:=[x_j - \Delta x /2, x_j + \Delta x /2)\), and \(\chi ^n(t)\) for the indicator function of \(I^n:=[t^n,t^{n+1})\). Next we approximate the initial data \(u_0\in BV(\mathbb {R})\) by a piecewise constant function \(u^{\Delta }_0\) defined as follows:

$$\begin{aligned} u^{\Delta }_0:=\sum _{j \in \mathbb {Z}}\chi _j(x)u^0_j\quad \text{ where } u^0_j=u_0(x_j) \text{ for } j\in \mathbb {Z}. \end{aligned}$$
(15)

Let \({\overline{m}}^0_j:=\max \{u_0(x);\,x\in I_j\}\) and \({\underline{m}}^0_j:=\min \{u_0(x);\,x\in I_j\}\). Then, for any \(r>0\) we have

$$\begin{aligned} \int \limits _{[-r,r]}\left| u_0(x)-u^{\Delta }_0(x)\right| \,dx\le \sum \limits _{j\in \mathbb {Z}}\int \limits _{I_j}\left| {\overline{m}}^0_j -{\underline{m}}^0_j\right| \,dx\le \Delta x\,{{\,\mathrm{TV}\,}}(u_0)\rightarrow 0 \text{ as } \Delta x\rightarrow 0. \end{aligned}$$

Therefore, \(u_0^\Delta \rightarrow u_0\) in \(L^1_{loc}(\mathbb {R})\). This argument is used again later, in Lemma 3.14. The approximations generated by the scheme are denoted by \(u_j^n\), where \(u_j^n \approx u(x_j,t^n)\). The grid function \(\{u_j^n\}\) is extended to a function defined on \(\Pi _T\) via

$$\begin{aligned} u^{\Delta }(x,t) =\sum _{n=0}^N \sum _{j \in \mathbb {Z}}\chi _j(x) \chi ^n(t) u_j^n. \end{aligned}$$
(16)

We use the notation \(\Delta _+,\Delta _-\) for the standard difference operators in the x variable, that is, \(\Delta _+v_j=v_{j+1}-v_j\) and \(\Delta _-v_j=v_j-v_{j-1}\). The Godunov type scheme that we employ is then:

$$\begin{aligned} u_j^{n+1} = u_j^n - \lambda \Delta _- {\bar{A}}(u^n_j,u^n_{j+1},x_j,x_{j+1}), \quad j \in \mathbb {Z}, \quad n=0,1,\ldots ,N, \end{aligned}$$
(17)

where the numerical flux \({\bar{A}}\) is

$$\begin{aligned} {\bar{A}}(u,v,x_j,x_{j+1}) := \max \left\{ A(\max (u,u_M(x_j)),x_j) , A(\min (v,u_M(x_{j+1})),x_{j+1}) \right\} . \end{aligned}$$
(18)

When \(A(\cdot ,x_j)=A(\cdot ,x_{j+1})\), the flux \({\bar{A}}\) reduces to the classical Godunov flux that is used for conservation laws where the flux does not have a spatial dependence. Otherwise \({\bar{A}}\) is a generalization of the Godunov flux proposed in [2] for the two-flux problem where the flux is given by (7). It is readily verified that \({\bar{A}}(u,u,x_j,x_j) = A(u,x_j)\) and that \({\bar{A}}(u,v,x_j,x_{j+1})\) is nondecreasing (nonincreasing) as a function of u (v).
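The reduction to the classical Godunov flux can be checked numerically. The sketch below implements (18) for a flux in the style of Example 2.4 (the jump location and the brute-force reference implementation `godunov_classical` are our own illustrative choices) and compares it, at a cell boundary with no flux discontinuity, against the textbook Godunov flux computed by direct minimization/maximization:

```python
def u_M(x):
    # hypothetical minimizer with a single jump at x = 0 (illustration only)
    return 1.0 if x >= 0 else 0.0

def A(u, x):
    return (u - u_M(x)) ** 2  # unimodal flux in the style of Example 2.4

def A_bar(u, v, xj, xj1):
    # interface flux (18)
    return max(A(max(u, u_M(xj)), xj), A(min(v, u_M(xj1)), xj1))

def godunov_classical(u, v, x, n=4000):
    # textbook Godunov flux for the x-independent flux A(., x):
    # min over [u, v] if u <= v, max over [v, u] otherwise (brute force)
    lo, hi = min(u, v), max(u, v)
    vals = [A(lo + (hi - lo) * i / n, x) for i in range(n + 1)]
    return min(vals) if u <= v else max(vals)

# with no flux discontinuity across the interface, (18) reduces to the
# classical Godunov flux:
for u in (-1.0, 0.0, 0.5, 2.0):
    for v in (-1.0, 0.25, 1.0, 2.0):
        assert abs(A_bar(u, v, 1.0, 1.0) - godunov_classical(u, v, 1.0)) < 1e-6
```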

Consider \(\Psi (\cdot ,\cdot )\) as in (10). Let

$$\begin{aligned} z_j^n = \Psi (u_j^n,x_j), \quad z^{\Delta }(x,t) = \sum _{n=0}^N \sum _{j \in \mathbb {Z}}\chi _j(x) \chi ^n(t) z_j^n. \end{aligned}$$
(19)

We obtain compactness for \(\{u^{\Delta }\}\) via the singular mapping technique, which consists of first proving compactness for the sequence \(\{z^{\Delta }\}\), and then observing that convergence of the original sequence \(\{u^{\Delta } \}\) follows from the fact that \(u \mapsto \Psi (u,x)\) has a continuous inverse.

For our analysis we will assume that \(u_0-u_M\) has compact support and \(u_0 \in \text {BV}(\mathbb {R})\). We will show in Sect. 4 that the solution we obtain as a limit of numerical approximations satisfies the adapted entropy inequality (12). Using (14), the resulting existence theorem is then extended to the case of \(u_0 \in L^{\infty }(\mathbb {R})\) via approximations to \(u_0\) that are in \(\text {BV}\) and are equal to \(u_M\) outside of compact sets.

Let

$$\begin{aligned} {\bar{\alpha }} = \sup _{x \in \mathbb {R}} A(u_0(x),x). \end{aligned}$$
(20)

By Assumption H-1, and since \(\left| \left| u_0 \right| \right| _{\infty }<\infty \) (which follows from \(u_0 \in \text {BV}(\mathbb {R})\)), \({\bar{\alpha }} < \infty \). Define \(k_{{\bar{\alpha }}}^{\pm }(x)\) via the equations

$$\begin{aligned} \begin{aligned}&A(k_{{\bar{\alpha }}}^{-}(x),x) = {\bar{\alpha }}, \quad k_{{\bar{\alpha }}}^{-}(x) \le u_M(x),\\&A(k_{{\bar{\alpha }}}^{+}(x),x) = {\bar{\alpha }}, \quad k_{{\bar{\alpha }}}^{+}(x) \ge u_M(x). \end{aligned} \end{aligned}$$
(21)

Lemma 3.1

The following bounds are satisfied:

$$\begin{aligned} \sup _{x \in \mathbb {R}}{\left| k_{{\bar{\alpha }}}^{\pm }\right| }(x) < \infty . \end{aligned}$$
(22)

Proof

By definition, \(u_M(x) \le k^+_{{\bar{\alpha }}}(x)\). On the other hand, by (8) and (21), we have

$$\begin{aligned} \gamma (k^+_{{\bar{\alpha }}}(x)-u_M(x)) \le {\bar{\alpha }} \text { for all } x \in \mathbb {R}. \end{aligned}$$
(23)

By Assumption H-3, \(\gamma ^{-1}\) is defined on \([0,\infty )\). Applying \(\gamma ^{-1}\) to both sides of (23) yields

$$\begin{aligned} k^+_{{\bar{\alpha }}}(x)-u_M(x) \le \gamma ^{-1}({\bar{\alpha }}) \text { for all } x \in \mathbb {R}. \end{aligned}$$
(24)

Thus

$$\begin{aligned} u_M(x) \le k^+_{{\bar{\alpha }}}(x) \le u_M(x)+\gamma ^{-1}({\bar{\alpha }}) \text { for all } x \in \mathbb {R}. \end{aligned}$$
(25)

The desired bound for \(k^+_{{\bar{\alpha }}}(x)\) then follows from (25), along with the fact that \(u_M \in L^{\infty }(\mathbb {R})\) (which follows from \(u_M \in \text {BV}(\mathbb {R})\)). The proof for \(k^-_{{\bar{\alpha }}}(x)\) is similar. \(\square \)

Let

$$\begin{aligned} \begin{array}{lll} \mathcal {M}= \max \left( \sup _{x \in \mathbb {R}} \left| k_{{\bar{\alpha }}}^{-}(x)\right| , \sup _{x \in \mathbb {R}} \left| k_{{\bar{\alpha }}}^{+}(x)\right| \right) , \quad {\bar{R}}=\sup \{R(u);\,\left| u\right| \le \mathcal {M}\},\\ L =\sup \{\left| \partial _u A(u,x)\right| :\left| u\right| \le \mathcal {M}, x \in \mathbb {R}\}. \end{array} \end{aligned}$$
(26)

Note that by Assumption H-1, \(L< \infty \). Since R is continuous we have \({\bar{R}}<\infty \). Also, by (21) we have \(k^-_{{\bar{\alpha }}}(x) \le u_M(x) \le k^+_{{\bar{\alpha }}}(x)\) for all \(x \in \mathbb {R}\), implying that \(\left| \left| u_M \right| \right| _{\infty } \le \mathcal {M}\).

Lemma 3.2

The numerical flux \({\bar{A}}\) satisfies the following continuity estimates:

$$\begin{aligned} \begin{aligned}&\left| {\bar{A}}({\hat{u}},v,x,y) - {\bar{A}}(u,v,x,y)\right| \le L \left| {\hat{u}}-u\right| ,\\&\left| {\bar{A}}(u,{\hat{v}},x,y) - {\bar{A}}(u,v,x,y)\right| \le L\left| {\hat{v}}-v\right| ,\\&\left| {\bar{A}}(u,v,{\hat{x}},y) - {\bar{A}}(u,v,x,y)\right| \le R(\max (u,u_M({\hat{x}})))\left| a({\hat{x}})-a(x)\right| \\&\quad + L\left| u_M({\hat{x}})-u_M(x)\right| ,\\&\left| {\bar{A}}(u,v,x,{\hat{y}}) - {\bar{A}}(u,v,x,y)\right| \le R(\min (v,u_M({\hat{y}})))\left| a({\hat{y}})-a(y)\right| \\&\quad + L\left| u_M({\hat{y}})-u_M(y)\right| , \end{aligned} \end{aligned}$$
(27)

for \(u,{\hat{u}},v,{\hat{v}}\in [-\mathcal {M},\mathcal {M}]\) and \(x,{\hat{x}},y,{\hat{y}}\in \mathbb {R}\).

Proof

These inequalities follow from the definition of \({\bar{A}}\) along with

$$\begin{aligned} \left| \max (a,b)-\max (c,b)\right| \le \left| a-c\right| , \quad \left| \min (a,b)-\min (c,b)\right| \le \left| a-c\right| . \end{aligned}$$
(28)

More specifically, from (18) and (28) we have

$$\begin{aligned} \left| {\bar{A}}({\hat{u}},v,x,y)-{\bar{A}}(u,v,x,y)\right|&\le \left| A(\max ({\hat{u}},u_M(x)),x)-A(\max (u,u_M(x)),x)\right| \\&\le L\left| \max ({\hat{u}},u_M(x))-\max (u,u_M(x))\right| \\&\le L\left| {\hat{u}}-u\right| . \end{aligned}$$

The second inequality in (27) can be proven in a similar manner. Using definition (18) and the inequalities in (28), we have

$$\begin{aligned}&\left| {\bar{A}}(u,v,{\hat{x}},y)-{\bar{A}}(u,v,x,y)\right| \\&\le \left| A(\max (u,u_M({\hat{x}})),{\hat{x}})-A(\max (u,u_M(x)),x)\right| \\&\le \left| A(\max (u,u_M({\hat{x}})),x)-A(\max (u,u_M(x)),x)\right| \\&\quad +\left| A(\max (u,u_M({\hat{x}})),{\hat{x}})-A(\max (u,u_M({\hat{x}})),x)\right| \\&\le L\left| \max (u,u_M({\hat{x}}))-\max (u,u_M(x))\right| \\&\quad +R(\max (u,u_M({\hat{x}})))\left| a({\hat{x}})-a(x)\right| \\&\le L\left| u_M({\hat{x}})-u_M(x)\right| +R(\max (u,u_M({\hat{x}})))\left| a({\hat{x}})-a(x)\right| . \end{aligned}$$

The last inequality in (27) can be shown in a similar way. \(\square \)

Lemma 3.3

Let \(u\in [-\mathcal {M},\mathcal {M}]\) and \(x,y\in \mathbb {R}\). Then we have

$$\begin{aligned} \left| \Psi (u,x)-\Psi (u,y)\right| \le L\left| u_M(x)-u_M(y)\right| +R(u)\left| a(x)-a(y)\right| . \end{aligned}$$
(29)

Proof

We start with the observation that

$$\begin{aligned} \Psi (u,x)-\Psi (u,y)=\left\{ \begin{array}{lll} -A(u,x)+A(u,y)&{} \text{ if } u_M(x)\ge u \text{ and } u_M(y)\ge u,&{}\\ A(u,x)+A(u,y)&{} \text{ if } u_M(x)<u \text{ and } u_M(y)\ge u,&{}\\ -A(u,x)-A(u,y)&{} \text{ if } u_M(x)\ge u \text{ and } u_M(y)<u,&{}\\ A(u,x)-A(u,y)&{} \text{ if } u_M(x)<u \text{ and } u_M(y)<u.&{}\\ \end{array}\right. \end{aligned}$$

When \(u_M(x)<u\le u_M(y)\) we have

$$\begin{aligned} \Psi (u,x)-\Psi (u,y)&= A(u,x)+A(u,y)\\&\le \left| A(u,x)-A(u_M(x),x)\right| +\left| A(u,y)-A(u_M(y),y)\right| \\&\le L\left| u_M(x)-u_M(y)\right| . \end{aligned}$$

Similarly, when \(u_M(y)<u\le u_M(x)\) we have

$$\begin{aligned} \Psi (u,x)-\Psi (u,y)\le L\left| u_M(x)-u_M(y)\right| . \end{aligned}$$

In the other cases we can estimate directly and get

$$\begin{aligned} \Psi (u,x)-\Psi (u,y)\le R(u)\left| a(x)-a(y)\right| . \end{aligned}$$

By symmetry we have

$$\begin{aligned} \left| \Psi (u,x)-\Psi (u,y)\right| \le R(u)\left| a(x)-a(y)\right| +L\left| u_M(x)-u_M(y)\right| . \end{aligned}$$

\(\square \)

Lemma 3.4

The grid functions \(\{ k_{{\bar{\alpha }}}^{-}(x_j)\}_{j \in \mathbb {Z}}\) and \(\{ k_{{\bar{\alpha }}}^{+}(x_j)\}_{j \in \mathbb {Z}}\) are stationary solutions of the difference scheme.

Proof

We will prove the lemma for \(\{ k_{{\bar{\alpha }}}^{+}(x_j)\}\). The proof for \(\{ k_{{\bar{\alpha }}}^{-}(x_j)\}\) is similar. It suffices to show that

$$\begin{aligned} {\bar{A}}(k_{{\bar{\alpha }}}^{+}(x_j),k_{{\bar{\alpha }}}^{+}(x_{j+1}),x_j,x_{j+1}) = {\bar{\alpha }}, \quad j \in \mathbb {Z}. \end{aligned}$$

By definition

$$\begin{aligned} k_{{\bar{\alpha }}}^{+}(x_j) \ge u_M(x_j), \quad k_{{\bar{\alpha }}}^{+}(x_{j+1}) \ge u_M(x_{j+1}). \end{aligned}$$

Thus, referring to (18),

$$\begin{aligned} \begin{aligned} {\bar{A}}(k_{{\bar{\alpha }}}^{+}(x_j),k_{{\bar{\alpha }}}^{+}(x_{j+1}),x_j,x_{j+1})&= \max \left\{ A(k_{{\bar{\alpha }}}^{+}(x_j),x_j) , A(u_M(x_{j+1}),x_{j+1}) \right\} \\&= \max \left\{ {\bar{\alpha }} , 0 \right\} = {\bar{\alpha }}. \end{aligned} \end{aligned}$$

Recalling the formula for the scheme (17), it is clear from the above that \(\{ {k_{{\bar{\alpha }}}^{+}}(x_j)\}_{j \in \mathbb {Z}}\) is a stationary solution. \(\square \)
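The computation in this proof is easy to reproduce numerically. For the Example 2.4-style flux used in our earlier sketches (hypothetical jump at \(x=0\)), \(k_{{\bar{\alpha }}}^{+}(x)=u_M(x)+\sqrt{{\bar{\alpha }}}\), and the interface flux returns \({\bar{\alpha }}\) at every cell boundary, so all flux differences in (17) vanish:

```python
import math

def u_M(x):
    # hypothetical minimizer with a jump at x = 0 (illustration only)
    return 1.0 if x >= 0 else 0.0

def A(u, x):
    return (u - u_M(x)) ** 2

def A_bar(u, v, xj, xj1):
    # interface flux (18)
    return max(A(max(u, u_M(xj)), xj), A(min(v, u_M(xj1)), xj1))

alpha = 2.25

def k_plus(x):
    # for this flux, A(k, x) = alpha with k >= u_M(x) gives k = u_M(x) + sqrt(alpha)
    return u_M(x) + math.sqrt(alpha)

# the interface flux equals alpha at every cell boundary, so k_alpha^+ is a
# stationary solution of (17): the flux differences all vanish
for xj, xj1 in [(-0.1, 0.0), (0.0, 0.1), (0.3, 0.4)]:
    assert A_bar(k_plus(xj), k_plus(xj1), xj, xj1) == alpha
```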

For the convergence analysis that follows we assume that \(\Delta :=(\Delta x,\Delta t)\rightarrow 0\) with the ratio \(\lambda = \Delta t / \Delta x\) fixed and satisfying the CFL condition

$$\begin{aligned} \lambda L \le 1. \end{aligned}$$
(30)

Lemma 3.5

Assume that \(\lambda \) is chosen so that the CFL condition (30) holds.

The scheme is monotone, meaning that if \( \left| v_j^n\right| , \left| w_j^n\right| \le \mathcal {M}\) for \( j \in \mathbb {Z}\), then

$$\begin{aligned} v_j^n \le w_j^n, \quad j \in \mathbb {Z}\implies v_j^{n+1} \le w_j^{n+1}, \quad j \in \mathbb {Z}. \end{aligned}$$

Proof

We define \(H_j(u,v,w)\) as follows

$$\begin{aligned} H_j(u,v,w):=v-\lambda \left( {\bar{A}}(v,w,x_j,x_{j+1})-{\bar{A}}(u,v,x_{j-1},x_j)\right) . \end{aligned}$$
(31)

We show that \(H_j\) is nondecreasing in each variable. Note that from definition (18) it is clear that \({\bar{A}}(\cdot ,\cdot ,x_j,x_{j+1})\) is nondecreasing in the first variable and nonincreasing in the second variable. Therefore, from (31) we have

$$\begin{aligned} H_j(u_1,v,w)\le H_j(u_2,v,w)&\quad \text{ for } u_1\le u_2,\\ H_j(u,v,w_1)\le H_j(u,v,w_2)&\quad \text{ for } w_1\le w_2. \end{aligned}$$

Next we define

$$\begin{aligned} {u}^*\le u_M(x_j) \text{ such } \text{ that } A(u,x_{j-1})=A(u^*,x_{j}) \text{ when } u\ge u_M(x_{j-1}),\\ {w}_*\ge u_M(x_j) \text{ such } \text{ that } A(w,x_{j+1})=A(w_*,x_{j}) \text{ when } w\le u_M(x_{j+1}). \end{aligned}$$

For \(v_1\le v_2\), set \(I_1={\bar{A}}(u,v_1,x_{j-1},x_{j})-{\bar{A}}(u,v_2,x_{j-1},x_{j})\), \(I_2={\bar{A}}(v_1,w,x_{j},x_{j+1})-{\bar{A}}(v_2,w,x_{j},x_{j+1})\), and \(I=I_1-I_2\). From (18) we have the following:

$$\begin{aligned} I_1=\left\{ \begin{array}{lll} A(v_1,x_j)-A(u^*,x_j)&{} \text{ if } u\ge u_M(x_{j-1}) \text{ and } v_1\le u^*\le v_2,&{}\\ A(v_1,x_j)-A(v_2,x_j)&{} \text{ if } u\ge u_M(x_{j-1}) \text{ and } v_1\le v_2\le u^*,&{}\\ A(v_1,x_j)-A(v_2,x_j)&{} \text{ if } u\le u_M(x_{j-1}) \text{ and } v_1\le v_2\le u_M(x_j),&{}\\ A(v_1,x_j)&{} \text{ if } u\le u_M(x_{j-1}) \text{ and } v_1\le u_M(x_j) \le v_2,&{}\\ 0&{} \text{ otherwise. }&{} \end{array}\right. \end{aligned}$$
(32)
$$\begin{aligned} I_2=\left\{ \begin{array}{lll} A(w_*,x_j)-A(v_2,x_j)&{} \text{ if } w\le u_M(x_{j+1}) \text{ and } v_1\le w_*\le v_2,&{}\\ A(v_1,x_j)-A(v_2,x_j)&{} \text{ if } w\le u_M(x_{j+1}) \text{ and } w_*\le v_1\le v_2,&{}\\ A(v_1,x_j)-A(v_2,x_j)&{} \text{ if } w\ge u_M(x_{j+1}) \text{ and } u_M(x_j)\le v_1\le v_2,&{}\\ -A(v_2,x_j)&{} \text{ if } w\ge u_M(x_{j+1}) \text{ and } v_1\le u_M(x_j) \le v_2,&{}\\ 0&{} \text{ otherwise. }&{} \end{array}\right. \end{aligned}$$
(33)

From (32) and (33) it follows that \(I=-I_2\) if \(v_1\ge u_M(x_j)\) and \(I=I_1\) if \(v_2\le u_M(x_j)\). In both cases we have \(|I|\le L|v_1-v_2|\). When \(v_1\le u_M(x_j)\le v_2\) we have

$$\begin{aligned} |I|\le |I_1|+|I_2|\le L( |v_1-u_M(x_j)|+|v_2-u_M(x_j)|)=L|v_1-v_2|. \end{aligned}$$
(34)

Therefore we have

$$\begin{aligned} H_j(u,v_1,w)-H_j(u,v_2,w)=v_1-v_2+\lambda I\le (1-\lambda L)(v_1-v_2)\le 0. \end{aligned}$$

Hence, the proof is completed by invoking the CFL condition (30). \(\square \)
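The monotonicity established in Lemma 3.5 can be sanity-checked by brute force: under the CFL condition, the update function \(H_j\) of (31) should be nondecreasing in its middle argument (the nontrivial case treated in the proof). A sketch with the hypothetical Example 2.4-style flux from our earlier sketches, an interface at \(x=0\), and a Lipschitz bound computed for that flux:

```python
import itertools

def u_M(x):
    # hypothetical minimizer with a jump at x = 0 (illustration only)
    return 1.0 if x >= 0 else 0.0

def A(u, x):
    return (u - u_M(x)) ** 2

def A_bar(u, v, xj, xj1):
    # interface flux (18)
    return max(A(max(u, u_M(xj)), xj), A(min(v, u_M(xj1)), xj1))

M = 2.0            # bound on the arguments considered below
L = 2.0 * (M + 1)  # Lipschitz bound for A(., x) on [-M, M] for this flux
lam = 1.0 / L      # CFL condition (30): lam * L <= 1

def H(u, v, w, xjm1, xj, xj1):
    # the update function (31)
    return v - lam * (A_bar(v, w, xj, xj1) - A_bar(u, v, xjm1, xj))

grid = [-2.0, -1.0, 0.0, 0.5, 1.0, 1.5, 2.0]
for u, w in itertools.product(grid, repeat=2):
    # H is nondecreasing in the middle argument for every choice of u, w
    vals = [H(u, v, w, -0.1, 0.0, 0.1) for v in grid]
    assert all(a <= b + 1e-12 for a, b in zip(vals, vals[1:]))
```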

Lemma 3.6

Assume that the CFL condition (30) holds. Then,

$$\begin{aligned} \left| u_j^n\right| \le \mathcal {M}, \quad j \in \mathbb {Z}, n \ge 0. \end{aligned}$$
(35)

Proof

From (20), we have

$$\begin{aligned} A(u_0(x),x) \le {\bar{\alpha }}, \quad \forall x \in \mathbb {R}. \end{aligned}$$
(36)

Applying the two branches of the inverse function \(A^{-1}(\cdot ,x)\) to (36), and using the fact that the increasing branch preserves order, while the decreasing branch reverses order, we have

$$\begin{aligned} k_{{\bar{\alpha }}}^{-}(x) \le u_0(x) \le k_{{\bar{\alpha }}}^{+}(x), \quad \forall x \in \mathbb {R}. \end{aligned}$$
(37)

By evaluation at \(x=x_j\), we also have

$$\begin{aligned} k_{{\bar{\alpha }}}^{-}(x_j) \le u_j^0 \le k_{{\bar{\alpha }}}^{+}(x_j),\quad j \in \mathbb {Z}. \end{aligned}$$
(38)

Thus \(\left| u_j^0\right| \le \mathcal {M}\) for \(j \in \mathbb {Z}\). It is clear from (26) that also \(\left| k_{{\bar{\alpha }}}^{\pm }(x_j)\right| \le \mathcal {M}\) for \(j \in \mathbb {Z}\). We apply a single step of the scheme to all three parts of (38), and due to the bounds \(\left| u_j^0\right| , \left| k_{{\bar{\alpha }}}^{\pm }(x_j)\right| \le \mathcal {M}\), the scheme acts in a monotone manner (Lemma 3.5), so that the ordering of (38) is preserved. In addition each of \(\{ k_{{\bar{\alpha }}}^{-}(x_j)\}\) and \(\{ k_{{\bar{\alpha }}}^{+}(x_j)\}\) is a stationary solution of the difference scheme, by Lemma 3.4. Thus, after applying the difference scheme, the result is

$$\begin{aligned} k_{{\bar{\alpha }}}^{-}(x_j) \le u_j^1 \le k_{{\bar{\alpha }}}^{+}(x_j),\quad j \in \mathbb {Z}, \end{aligned}$$
(39)

implying that (35) holds at time level \(n=1\). The proof is completed by continuing this way from one time step to the next. \(\square \)
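The invariant-region argument of Lemma 3.6 can be observed by marching the scheme (17): data squeezed between \(k_{{\bar{\alpha }}}^{-}\) and \(k_{{\bar{\alpha }}}^{+}\) remain squeezed at every time level. A self-contained sketch; the flux, grid, and initial data are hypothetical choices of ours (for this flux \(k_{{\bar{\alpha }}}^{\pm }(x)=u_M(x)\pm \sqrt{{\bar{\alpha }}}\)):

```python
def u_M(x):
    # hypothetical minimizer with a jump at x = 0 (illustration only)
    return 1.0 if x >= 0 else 0.0

def A(u, x):
    return (u - u_M(x)) ** 2

def A_bar(u, v, xj, xj1):
    # interface flux (18)
    return max(A(max(u, u_M(xj)), xj), A(min(v, u_M(xj1)), xj1))

dx = 0.02
xs = [dx * (j - 100) for j in range(201)]                  # grid on [-2, 2]
u = [u_M(x) + (1.0 if abs(x) < 0.5 else 0.0) for x in xs]  # u0 - u_M compactly supported

alpha_bar = max(A(uj, x) for uj, x in zip(u, xs))  # as in (20); here alpha_bar = 1
r = alpha_bar ** 0.5        # k_alpha_bar^{+-}(x) = u_M(x) +- r for this flux
lam = 1.0 / 6.0             # CFL (30): |dA/du| <= 6 on the invariant region

for n in range(50):
    F = [A_bar(u[j], u[j + 1], xs[j], xs[j + 1]) for j in range(len(u) - 1)]
    u = [u[0]] + [u[j] - lam * (F[j] - F[j - 1]) for j in range(1, len(u) - 1)] + [u[-1]]
    # invariant region of Lemma 3.6: k^-(x_j) <= u_j^n <= k^+(x_j) at every step
    assert all(u_M(x) - r - 1e-9 <= uj <= u_M(x) + r + 1e-9 for uj, x in zip(u, xs))
```

The boundary cells are simply held at their initial values; since the perturbation of \(u_M\) is compactly supported and the march is short, the disturbance never reaches them.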

Lemma 3.7

The following bound holds for \(z_j^n\):

$$\begin{aligned} \left| z_j^n\right| \le 2\mathcal {M}L, \quad j \in \mathbb {Z}, \quad n \ge 0. \end{aligned}$$
(40)

Proof

From definition (10) of \(\Psi \), (26) and (35) we have

$$\begin{aligned} \left| z^n_j\right| =\left| \Psi (u^n_j,x_j)\right| =\left| \int \limits _{u_{M}(x_j)}^{u^n_j}\left| \frac{\partial A}{\partial u}(\theta ,x_j)\right| \,d\theta \right| \le L\left| u^n_j-u_M(x_j)\right| \le 2\mathcal {M}L. \end{aligned}$$

\(\square \)
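For a concrete feel for the singular mapping, the sketch below evaluates \(\Psi (u,x)=\int _{u_M(x)}^{u}\left| \partial A/\partial u(\theta ,x)\right| \,d\theta \) by quadrature for a hypothetical model flux and checks the two bounds used in the proof; the coefficient \(a\), the minimizer \(u_M\), and the constants \(M\), \(L\) are stand-ins, not the paper's data.

```python
import math

def psi(u, x, A_u, u_M, n=4000):
    """Singular mapping Psi(u, x): signed integral of |dA/du(theta, x)|
    from u_M(x) to u, evaluated by the trapezoid rule."""
    lo, hi = u_M(x), u
    h = (hi - lo) / n
    s = 0.5 * (abs(A_u(lo, x)) + abs(A_u(hi, x)))
    s += sum(abs(A_u(lo + k * h, x)) for k in range(1, n))
    return s * h

# hypothetical model flux A(u,x) = 0.5*a(x)*(u - u_M(x))^2, minimized at u_M(x)
a = lambda x: 1.0 + 0.5 * math.sin(x)
u_M = lambda x: 0.1 * math.cos(x)
A_u = lambda u, x: a(x) * (u - u_M(x))   # dA/du

M = 2.0                  # stand-in for the L^infinity bound of Lemma 3.6
L = 1.5 * (M + 0.1)      # sup of |A_u| over |u| <= M and all x
x0, u0 = 0.7, 1.3
z = psi(u0, x0, A_u, u_M)
assert abs(z) <= L * abs(u0 - u_M(x0)) + 1e-8   # Lipschitz step of the proof
assert abs(z) <= 2 * M * L                      # the bound (40)
```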

Lemma 3.8

The following time continuity estimate holds for \(u_j^n\):

$$\begin{aligned} \sum _{j \in \mathbb {Z}}\left| u_j^{n+1}-u_j^n\right| \le 2 \lambda ({\bar{R}} {{\,\mathrm{TV}\,}}(a) +L {{\,\mathrm{TV}\,}}(u_0) + L {{\,\mathrm{TV}\,}}(u_M)). \end{aligned}$$
(41)

Proof

It is apparent from (18) that

$$\begin{aligned} {\bar{A}}(u_M(x_j),u_M(x_{j+1}),x_j,x_{j+1})=0. \end{aligned}$$
(42)

With (42) and the assumption that \(u_0-u_M\) has compact support, it is clear that \(u_j^n - u_M(x_j)\) vanishes for \(\left| j\right| \ge J(n)\), for some positive integer J(n), for each \(n \ge 0\). In particular, we have \(\sum \nolimits _{j \in \mathbb {Z}} \left| u_j^n - u_M(x_j)\right| < \infty \) for \(n \ge 0\). Moreover, the fact that \(u_j^n = u_M(x_j)\) for \(\left| j\right| \ge J(n)\), combined with (42), yields

$$\begin{aligned} \sum _{j \in \mathbb {Z}} \Delta _- {\bar{A}}(u_j^n,u_{j+1}^n,x_j,x_{j+1}) =0. \end{aligned}$$
(43)

As a consequence of (43) and (17) we have

$$\begin{aligned} \sum _{j \in \mathbb {Z}} (u_j^{n+1} - u_M(x_j))= \sum _{j \in \mathbb {Z}} (u_j^n - u_M(x_j)). \end{aligned}$$
(44)

Due to the monotonicity property (Lemma 3.5), and the conservation property (44) we can invoke a straightforward modification of the Crandall-Tartar lemma [14], yielding

$$\begin{aligned} \begin{aligned} \sum _{j \in \mathbb {Z}}\left| u_j^{n+1}-u_j^n\right|&\le \sum _{j \in \mathbb {Z}}\left| u_j^{n}-u_j^{n-1}\right| \\&\,\,\, \vdots \\&\le \sum _{j \in \mathbb {Z}}\left| u_j^{1}-u_j^{0}\right| \\&\le \lambda \sum _{j \in \mathbb {Z}}\left| {\bar{A}}(u^0_{j},u^0_{j+1},x_{j},x_{j+1})- {\bar{A}}(u^0_{j-1},u^0_j,x_{j-1},x_j)\right| . \end{aligned} \end{aligned}$$
(45)

The proof will be completed by estimating the last term.

$$\begin{aligned} \begin{aligned}&\left| {\bar{A}}(u^0_j,u^0_{j+1},x_j,x_{j+1}) -{\bar{A}}(u^0_{j-1},u^0_{j},x_{j-1},x_{j})\right| \\&\quad \le \left| {\bar{A}}(u^0_j,u^0_{j+1},x_j,x_{j+1}) -{\bar{A}}(u^0_{j},u^0_{j+1},x_{j},x_{j})\right| \\&\qquad + \left| {\bar{A}}(u^0_j,u^0_{j+1},x_j,x_{j}) -{\bar{A}}(u^0_{j},u^0_{j},x_{j},x_{j})\right| \\&\qquad + \left| {\bar{A}}(u^0_j,u^0_{j},x_j,x_{j}) -{\bar{A}}(u^0_{j},u^0_{j},x_{j-1},x_{j})\right| \\&\qquad + \left| {\bar{A}}(u^0_j,u^0_{j},x_{j-1},x_{j}) -{\bar{A}}(u^0_{j-1},u^0_{j},x_{j-1},x_{j})\right| . \end{aligned} \end{aligned}$$
(46)

Invoking Lemma 3.2, we obtain

$$\begin{aligned} \begin{aligned}&\left| {\bar{A}}(u^0_j,u^0_{j+1},x_j,x_{j+1}) -{\bar{A}}(u^0_{j},u^0_{j+1},x_{j},x_{j})\right| \le {\bar{R}}\left| a(x_{j+1}) - a(x_j) \right| \\&\qquad \qquad \qquad \quad \,\,\, + L \left| u_M(x_{j+1}) - u_M(x_j)\right| ,\\&\left| {\bar{A}}(u^0_j,u^0_{j+1},x_j,x_{j}) -{\bar{A}}(u^0_{j},u^0_{j},x_{j},x_{j})\right| \le L \left| u^0_{j+1}-u^0_{j}\right| ,\\&\left| {\bar{A}}(u^0_j,u^0_{j},x_j,x_{j}) -{\bar{A}}(u^0_{j},u^0_{j},x_{j-1},x_{j})\right| \le {\bar{R}}\left| a(x_{j}) - a(x_{j-1}) \right| \\&\qquad \qquad \qquad \quad \quad + L \left| u_M(x_{j}) - u_M(x_{j-1})\right| ,\\&\left| {\bar{A}}(u^0_j,u^0_{j},x_{j-1},x_{j}) -{\bar{A}}(u^0_{j-1},u^0_{j},x_{j-1},x_{j})\right| \le L \left| u^0_{j}-u^0_{j-1}\right| . \end{aligned} \end{aligned}$$
(47)

Plugging (47) into (46), and then summing over \(j \in \mathbb {Z}\), the result is

$$\begin{aligned} \begin{aligned} \sum _{j \in \mathbb {Z}}\left| u_j^{n+1} - u_j^{n}\right|&\le 2 \lambda {\bar{R}}\sum _{j \in \mathbb {Z}}\left| a(x_{j+1}) - a(x_j) \right| \\&\quad + 2 \lambda L\sum _{j \in \mathbb {Z}}\left| u^0_{j+1} - u^0_j\right| + 2 \lambda L \sum _{j \in \mathbb {Z}}\left| u_M(x_{j+1}) - u_M(x_j)\right| \\&\le 2 \lambda ({\bar{R}} {{\,\mathrm{TV}\,}}(a) +L {{\,\mathrm{TV}\,}}(u_0) + L {{\,\mathrm{TV}\,}}(u_M)). \end{aligned} \end{aligned}$$

\(\square \)

Lemma 3.9

The following time continuity estimate holds for \(z_j^n\):

$$\begin{aligned} \sum _{j \in \mathbb {Z}}\left| z_j^{n+1} - z_j^n\right| \le 2 \lambda L ({\bar{R}} {{\,\mathrm{TV}\,}}(a) +L {{\,\mathrm{TV}\,}}(u_0) + L {{\,\mathrm{TV}\,}}(u_M)), \quad n \ge 0. \end{aligned}$$
(48)

Proof

The estimate (48) follows directly from time continuity for \(\{u_j^n \}\), and Lipschitz continuity of \(\Psi (\cdot ,x_j)\). Indeed,

$$\begin{aligned} |z_j^{n+1}-z_j^n|= & {} \left| \Psi (u^{n+1}_j,x_j)-\Psi (u^n_j,x_j)\right| \nonumber \\= & {} \left| \int \limits _{u^n_j}^{u^{n+1}_j}\left| \frac{\partial A}{\partial u}(\theta ,x_j)\right| \,d\theta \right| \le L\left| u^{n+1}_j-u^n_j\right| . \end{aligned}$$
(49)

Now (48) is immediate from (41) and (49). \(\square \)

We next turn to establishing a spatial variation bound for \(z_j^n\). Define

$$\begin{aligned} \sigma _+(u,x) = {\left\{ \begin{array}{ll} 1, \quad &{} u> u_M(x),\\ 0, \quad &{} u\le u_M(x), \end{array}\right. } \quad \sigma _-(u,x) = {\left\{ \begin{array}{ll} 1, \quad &{} u< u_M(x),\\ 0, \quad &{} u\ge u_M(x). \end{array}\right. } \end{aligned}$$
(50)

We also use the notation \(b_+,b_-\) for the positive and negative parts of a real number b, defined by \(b_+=\max \{b,0\}\) and \(b_-=\min \{b,0\}\). The proof of the following lemma follows from Lemma 4.5 of [2] or Lemma 3.3 of [21].
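The selectors (50) and the positive/negative parts translate directly into code; a minimal transcription, with a hypothetical minimizer \(u_M\):

```python
def sigma_plus(u, x, u_M):
    """sigma_+(u, x) of (50): 1 if u > u_M(x), else 0."""
    return 1 if u > u_M(x) else 0

def sigma_minus(u, x, u_M):
    """sigma_-(u, x) of (50): 1 if u < u_M(x), else 0."""
    return 1 if u < u_M(x) else 0

def pos(b):
    """Positive part b_+ = max{b, 0}."""
    return max(b, 0.0)

def neg(b):
    """Negative part b_- = min{b, 0}."""
    return min(b, 0.0)

u_M = lambda x: 0.0   # hypothetical minimizer of the flux
assert sigma_plus(0.5, 1.0, u_M) == 1 and sigma_minus(0.5, 1.0, u_M) == 0
assert pos(-3.0) == 0.0 and neg(-3.0) == -3.0
assert pos(2.0) + neg(2.0) == 2.0   # b = b_+ + b_- for every real b
```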

Lemma 3.10

The following inequality holds:

$$\begin{aligned} \begin{aligned}&\left( \Psi (u_{j+1}^n,x_j) - \Psi (u_j^n,x_j) \right) _+\\&\quad \le \sigma _-(u_j^n,x_j) \left| {\bar{A}}(u_j^n,u_{j+1}^n,x_j,x_j) - {\bar{A}}(u_{j-1}^n,u_{j}^n,x_j,x_j)\right| \\&\qquad + \sigma _+(u_{j+1}^n,x_{j}) \left| {\bar{A}}(u_{j+1}^n,u_{j+2}^n,x_j,x_j) - {\bar{A}}(u_{j}^n,u_{j+1}^n,x_j,x_j)\right| . \end{aligned} \end{aligned}$$
(51)

Proof

Since the RHS of (51) is non-negative, it suffices to consider the case \((\Psi (u_{j+1}^n,x_j) - \Psi (u_j^n,x_j))_+>0 \), that is, \(\Psi (u_{j+1}^n,x_j)> \Psi (u_j^n,x_j) \). As \(u\mapsto \Psi (u,x_j)\) is an increasing function, we have \(u_{j+1}^n>u_j^n\). Note that \(G(v,w):={\bar{A}}(v,w,x_j,x_j)\) for \(v,w\in \mathbb {R}\) is a standard Godunov flux. Hence, G is a Lipschitz continuous function and we have

$$\begin{aligned} G(u^n_j,u^n_{j+1})-G(u^n_{j-1},u^n_j)&=\int \limits _{u_j^n}^{u_{j+1}^n} \frac{\partial G}{\partial w}(u^n_j,w)\,dw+\int \limits _{u_{j-1}^n}^{u_{j}^n}\frac{\partial G}{\partial v}(v,u^n_j)\,dv, \end{aligned}$$
(52)
$$\begin{aligned} G(u^n_{j+1},u^n_{j+2})-G(u^n_{j},u^n_{j+1})&=\int \limits _{u_{j+1}^n}^{u_{j+2}^n}\frac{\partial G}{\partial w}(u^n_{j+1},w)\,dw+\int \limits _{u_{j}^n}^{u_{j+1}^n}\frac{\partial G}{\partial v}(v,u^n_{j+1})\,dv. \end{aligned}$$
(53)

Now we observe the following

$$\begin{aligned} \Psi (u_{j+1}^n,x_j) - \Psi (u_j^n,x_j)&=\int \limits _{u_{j}^n}^{u_{j+1}^n}\left| \frac{\partial A}{\partial u}(u,x_j)\right| \,du\nonumber \\&=\int \limits _{u_{j}^n}^{u_{j+1}^n}\left( \frac{\partial A}{\partial u}(u,x_j)\right) _+du-\int \limits _{u_{j}^n}^{u_{j+1}^n}\left( \frac{\partial A}{\partial u}(u,x_j)\right) _-du. \end{aligned}$$
(54)

Next we check the following two inequalities,

$$\begin{aligned} \sigma _+(u_{j+1}^n,x_j)\left| G(u_{j+1}^n,u_{j+2}^n)-G(u_{j}^n,u_{j+1}^n)\right|&\ge \int \limits _{u_{j}^n}^{u_{j+1}^n}\left( \frac{\partial A}{\partial u}(u,x_j)\right) _+du, \end{aligned}$$
(55)
$$\begin{aligned} \sigma _-(u_{j}^n,x_j)\left| G(u_{j}^n,u_{j+1}^n)-G(u_{j-1}^n,u_j^n)\right|&\ge \int \limits ^{u_{j}^n}_{u_{j+1}^n}\left( \frac{\partial A}{\partial u}(u,x_j)\right) _-du. \end{aligned}$$
(56)

We first show (56). If \(\sigma _-(u_j^n,x_j)=0\), then \(u_j^n\ge u_M(x_j)\), and consequently \(u_{j+1}^n>u_j^n\ge u_M(x_j)\). Since \(u\mapsto A(u,x_j)\) is increasing for \(u\ge u_{M}(x_j)\), the RHS of (56) vanishes. Now, if \(\sigma _-(u_j^n,x_j)=1\), then \(u_j^n<u_M(x_j)\). If \(u_{j-1}^n\ge u_M(x_j)\), we get \(-\int \nolimits _{u_{j-1}^n}^{u_{j}^n}\frac{\partial G}{\partial v}(v,u^n_j)\,dv\ge 0\). If \(u_{j-1}^n< u_M(x_j)\), then \(G(u_{j-1}^n,u_j^n)=G(u_j^n,u_j^n)\), and consequently \(-\int \nolimits _{u_{j-1}^n}^{u_{j}^n}\frac{\partial G}{\partial v}(v,u^n_j)\,dv =0\). Therefore, for \(\sigma _-(u^n_j,x_j)=1\) we have

$$\begin{aligned} \left| G(u_{j}^n,u_{j+1}^n)-G(u_{j-1}^n,u_j^n)\right|&=-\int \limits _{u_j^n}^{u_{j+1}^n}\frac{\partial G}{\partial w}(u^n_j,w)\,dw-\int \limits _{u_{j-1}^n}^{u_{j}^n}\frac{\partial G}{\partial v}(v,u^n_j)\,dv\\&\ge -\int \limits _{u_j^n}^{u_{j+1}^n}\frac{\partial G}{\partial w}(u^n_j,w)\,dw=\int \limits ^{u_{j}^n}_{u_{j+1}^n}\left( \frac{\partial A}{\partial u}(u,x_j)\right) _-du. \end{aligned}$$

To obtain the last equality we have used the fact that for \(u_j^n<u_M(x_j)\) and \(w\ge u_j^n\), \(G(u_j^n,w) = A(\min (w,u_M(x_j)),x_j)\). This completes the proof of (56). Now we show (55). If \(\sigma _+(u_{j+1}^n,x_j)=0\), then we have \(u_j^n<u_{j+1}^n\le u_M(x_j)\). Hence, the RHS of (55) vanishes. Now consider the case \(\sigma _+(u^n_{j+1},x_j)=1\), that is, \(u_{j+1}^n>u_M(x_j)\). If \(u_{j+2}^n>u_M(x_j)\), we have \(G(u_{j+1}^n,u_{j+2}^n)=G(u_{j+1}^n,u_{j+1}^n)\) or, equivalently, \(\int \nolimits _{u_{j+1}^n}^{u_{j+2}^n}\frac{\partial G}{\partial w}(u^n_{j+1},w)\,dw=0\). If \(u_{j+2}^n\le u_M(x_j)\), we get \(u_{j+2}^n\le u_{j+1}^n\) and consequently \(\int \nolimits _{u_{j+1}^n}^{u_{j+2}^n}\frac{\partial G}{\partial w}(u^n_{j+1},w)\,dw\ge 0\). Hence, for \(\sigma _+(u^n_{j+1},x_j)=1\), we have

$$\begin{aligned} \left| G(u_{j+1}^n,u_{j+2}^n)-G(u_{j}^n,u_{j+1}^n)\right|&=\int \limits _{u_{j+1}^n}^{u_{j+2}^n}\frac{\partial G}{\partial w}(u^n_{j+1},w)\,dw+\int \limits _{u_{j}^n}^{u_{j+1}^n}\frac{\partial G}{\partial v}(v,u^n_{j+1})\,dv\\&\ge \int \limits _{u_{j}^n}^{u_{j+1}^n}\frac{\partial G}{\partial v}(v,u^n_{j+1})\,dv=\int \limits ^{u_{j+1}^n}_{u_{j}^n}\left( \frac{\partial A}{\partial u}(u,x_j)\right) _+du. \end{aligned}$$

This guarantees (55). To obtain the last equality we have used the fact that for \(u_{j+1}^n>u_M(x_j)\) and \(v\le u_{j+1}^n\), \(G(v,u_{j+1}^n) = A(\max (v,u_M(x_j)),x_j)\). Now combining (55), (56) with (54) we conclude (51). \(\square \)
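The proof above leans on structural identities of the Godunov flux for a flux with a single minimum at \(u_M(x_j)\). Under that unimodality assumption, \(G\) admits the classical closed form \(G(v,w)=\max \{A(\max (v,u_m)),A(\min (w,u_m))\}\); the sketch below checks the identities invoked in the proof for a hypothetical model flux \(A(u)=u^2\) with minimum at 0 (the paper's interface flux \({\bar{A}}\) from (18) is not reproduced here).

```python
def godunov(v, w, A, u_m):
    """Godunov flux for a flux u -> A(u) with a single minimum at u_m:
    G(v, w) = max( A(max(v, u_m)), A(min(w, u_m)) ),
    i.e. min of A over [v, w] if v <= w, max of A over [w, v] if v > w."""
    return max(A(max(v, u_m)), A(min(w, u_m)))

A = lambda u: u * u   # hypothetical unimodal model flux, minimum at 0
u_m = 0.0

# consistency: G(u, u) = A(u)
assert godunov(0.7, 0.7, A, u_m) == A(0.7)
# v <= w with u_m in between: G equals the minimum of A over [v, w]
assert godunov(-1.0, 2.0, A, u_m) == A(0.0)
# v > w: G equals the maximum of A over [w, v]
assert godunov(2.0, -1.0, A, u_m) == A(2.0)
# identities used in the proof:
# for v < u_m and w >= v: G(v, w) = A(min(w, u_m))
assert godunov(-0.5, 0.8, A, u_m) == A(min(0.8, u_m))
# for w > u_m and v <= w: G(v, w) = A(max(v, u_m))
assert godunov(-0.5, 0.8, A, u_m) == A(max(-0.5, u_m))
```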

We remark that Lemma 3.10 remains true if we replace \(u_{j+2}^n\) and \(u_{j-1}^n\) by any two real numbers \(w_1,w_2\).

Lemma 3.11

Let \({\bar{A}}^n_{j+1/2} = {\bar{A}}(u_j^n,u_{j+1}^n,x_j,x_{j+1})\). The following inequality holds:

$$\begin{aligned} \begin{aligned} \left( \Psi (u_{j+1}^n,x_{j+1}) - \Psi (u_j^n,x_j) \right) _+&\le \sigma _-(u_j^n,x_j) \left| {\bar{A}}^n_{j+1/2} - {\bar{A}}^n_{j-1/2}\right| \\&\quad + \sigma _+(u_{j+1}^n,x_{j}) \left| {\bar{A}}^n_{j+3/2} - {\bar{A}}^n_{j+1/2}\right| \\&\quad + \Omega _{j+1/2}^n, \end{aligned} \end{aligned}$$
(57)

where \(\sum \nolimits _{j\in \mathbb {Z}} \Omega _{j+1/2}^n \le C ({{\,\mathrm{TV}\,}}(a)+{{\,\mathrm{TV}\,}}(u_M))\) and C is independent of \(\Delta \).

Proof

By Lemma 3.3 we have

$$\begin{aligned}&\left( \Psi (u_{j+1}^n,x_{j+1}) - \Psi (u_j^n,x_j) \right) _+ \nonumber \\&\quad \le \left( \Psi (u_{j+1}^n,x_{j}) - \Psi (u_j^n,x_j) \right) _+\nonumber \\&\qquad +\left( \Psi (u_{j+1}^n,x_{j+1}) - \Psi (u_{j+1}^n,x_j) \right) _+ \nonumber \\&\quad \le \left( \Psi (u_{j+1}^n,x_{j}) - \Psi (u_j^n,x_j) \right) _++L\left| u_M(x_{j+1})-u_M(x_j)\right| \nonumber \\&\qquad + R(u_{j+1}^n)\left| a(x_{j+1})-a(x_j)\right| . \end{aligned}$$
(58)

From (51) we have

$$\begin{aligned}&\left( \Psi (u_{j+1}^n,x_{j+1}) - \Psi (u_j^n,x_j) \right) _+\nonumber \\&\quad \le \sigma _-(u_j^n,x_j) \left| {\bar{A}}(u_j^n,u_{j+1}^n,x_j,x_j) - {\bar{A}}(u_{j-1}^n,u_{j}^n,x_j,x_j)\right| \nonumber \\&\qquad + \sigma _+(u_{j+1}^n,x_{j}) \left| {\bar{A}}(u_{j+1}^n,u_{j+2}^n,x_j,x_j) - {\bar{A}}(u_{j}^n,u_{j+1}^n,x_j,x_j)\right| \nonumber \\&\qquad +{\bar{R}}\left| a(x_{j+1})-a(x_j)\right| +L\left| u_M(x_{j+1})-u_M(x_j)\right| . \end{aligned}$$
(59)

We further modify (59) to get the following

$$\begin{aligned}&\left( \Psi (u_{j+1}^n,x_{j+1}) - \Psi (u_j^n,x_j) \right) _+ \nonumber \\&\quad \le \sigma _-(u_j^n,x_j) \left| {\bar{A}}^n_{j+1/2} - {\bar{A}}^n_{j-1/2}\right| + \sigma _+(u_{j+1}^n,x_{j+1}) \left| {\bar{A}}^n_{j+3/2} - {\bar{A}}^n_{j+1/2}\right| \nonumber \\&\qquad +{\bar{R}}\left| a(x_{j+1})-a(x_j)\right| +L\left| u_M(x_{j+1})-u_M(x_j)\right| \nonumber \\&\qquad + \sigma _-(u_j^n,x_j) \left| {\bar{A}}(u_{j}^n,u_{j+1}^n,x_{j},x_{j+1}) - {\bar{A}}(u_{j}^n,u_{j+1}^n,x_j,x_j)\right| \nonumber \\&\qquad + \sigma _-(u_j^n,x_j) \left| {\bar{A}}(u_{j-1}^n,u_{j}^n,x_{j-1},x_j) - {\bar{A}}(u_{j-1}^n,u_{j}^n,x_j,x_j)\right| \nonumber \\&\qquad + \sigma _+(u_{j+1}^n,x_{j}) \left| {\bar{A}}(u_{j+1}^n,u_{j+2}^n,x_j,x_j) - {\bar{A}}(u_{j+1}^n,u_{j+2}^n,x_{j},x_{j+2})\right| \nonumber \\&\qquad + \sigma _+(u_{j+1}^n,x_{j}) \left| {\bar{A}}(u_{j+1}^n,u_{j+2}^n,x_j,x_{j+2}) - {\bar{A}}(u_{j+1}^n,u_{j+2}^n,x_{j+1},x_{j+2})\right| \nonumber \\&\qquad + \sigma _+(u_{j+1}^n,x_{j}) \left| {\bar{A}}(u_{j}^n,u_{j+1}^n,x_j,x_{j+1}) - {\bar{A}}(u_{j}^n,u_{j+1}^n,x_j,x_{j})\right| . \end{aligned}$$
(60)

Next we apply Lemma 3.2 to bound the last five terms of (60) by

$$\begin{aligned}&{\bar{R}}\left| a(x_j)-a(x_{j-1})\right| +3{\bar{R}}\left| a(x_{j+1})-a(x_{j})\right| +{\bar{R}}\left| a(x_{j})-a(x_{j+2})\right| \nonumber \\&\quad +L\left| u_M(x_j)-u_M(x_{j-1})\right| +3L\left| u_M(x_j)-u_M(x_{j+1})\right| \nonumber \\&\quad +L\left| u_M(x_{j})-u_M(x_{j+2})\right| . \end{aligned}$$
(61)

Combining (60) and (61) we get (57) with

$$\begin{aligned} \Omega _{j+1/2}^n:= & {} {\bar{R}}\left| a(x_j)-a(x_{j-1})\right| +4{\bar{R}}\left| a(x_{j+1})-a(x_{j})\right| +{\bar{R}}\left| a(x_{j+2})-a(x_{j})\right| \\&+L\left| u_M(x_j)-u_M(x_{j-1})\right| +4L\left| u_M(x_{j+1})-u_M(x_{j})\right| \\&+L\left| u_M(x_{j+2})-u_M(x_{j})\right| . \end{aligned}$$

\(\square \)

Lemma 3.12

For each \(n\ge 0\), there is a positive integer J(n) such that

$$\begin{aligned} z_j^n = 0 \text { for } \left| j\right| \ge J(n). \end{aligned}$$
(62)

Proof

From (10) and (19) we have

$$\begin{aligned} z_j^n = \int _{u_M(x_j)}^{u_j^n} \left| {{\partial A}\over {\partial u}}(\theta ,x_j)\right| \,d\theta . \end{aligned}$$
(63)

As part of the proof of Lemma 3.8 we showed that \(u_j^n = u_M(x_j)\) for sufficiently large \(\left| j\right| \). The proof is completed by combining this fact with (63). \(\square \)

Lemma 3.13

The following spatial variation bound holds for \(n \ge 0\):

$$\begin{aligned} \sum _{j \in \mathbb {Z}}\left| z^n_{j+1}-z^n_j\right| \le C, \end{aligned}$$
(64)

where C is a \(\Delta \)-independent constant.

Proof

From Lemma 3.11 we find that

$$\begin{aligned} \begin{aligned} \sum _{j \in \mathbb {Z}}(z^n_{j+1}-z^n_j)_+&\le \sum _{j \in \mathbb {Z}}\sigma _-(u_j^n,x_j) \left| {\bar{A}}^n_{j+1/2} - {\bar{A}}^n_{j-1/2}\right| \\&\quad + \sum _{j \in \mathbb {Z}}\sigma _+(u_{j+1}^n,x_{j}) \left| {\bar{A}}^n_{j+3/2} - {\bar{A}}^n_{j+1/2}\right| + \sum _{j \in \mathbb {Z}}\Omega _{j+1/2}^n\\&\le 2 \sum _{j \in \mathbb {Z}}\left| {\bar{A}}^n_{j+1/2} - {\bar{A}}^n_{j-1/2}\right| + \sum _{j \in \mathbb {Z}}\Omega _{j+1/2}^n\\&= {2 \over \lambda } \sum _{j \in \mathbb {Z}}\left| u_j^{n+1}-u_j^n\right| + \sum _{j \in \mathbb {Z}}\Omega _{j+1/2}^n. \end{aligned} \end{aligned}$$
(65)

Invoking Lemma 3.8 and the fact that \(\sum \nolimits _{j \in \mathbb {Z}} \Omega _{j+1/2}^n \le C ({{\,\mathrm{TV}\,}}(a)+{{\,\mathrm{TV}\,}}(u_M))\), the result is

$$\begin{aligned} \sum _{j \in \mathbb {Z}}(z^n_{j+1}-z^n_j)_+ \le C_2, \end{aligned}$$
(66)

for some \(\Delta \)-independent constant \(C_2\).

As a result of Lemma 3.12,

$$\begin{aligned} \sum _{j \in \mathbb {Z}}(z^n_{j+1}-z^n_j) =0. \end{aligned}$$
(67)

We also have

$$\begin{aligned} \sum _{j \in \mathbb {Z}}(z^n_{j+1}-z^n_j) =\sum _{j \in \mathbb {Z}}(z^n_{j+1}-z^n_j)_+ + \sum _{j \in \mathbb {Z}}(z^n_{j+1}-z^n_j)_-, \end{aligned}$$
(68)

implying that

$$\begin{aligned} -\sum _{j \in \mathbb {Z}}(z^n_{j+1}-z^n_j)_- = \sum _{j \in \mathbb {Z}}(z^n_{j+1}-z^n_j)_+. \end{aligned}$$
(69)

From (66) and (69) it follows that

$$\begin{aligned} \sum _{j \in \mathbb {Z}}\left| z^n_{j+1}-z^n_j\right| = \sum _{j \in \mathbb {Z}}(z^n_{j+1}-z^n_j)_+ - \sum _{j \in \mathbb {Z}}(z^n_{j+1}-z^n_j)_- \le 2 C_2. \end{aligned}$$
(70)

\(\square \)

Lemma 3.14

Define \(a^{\Delta }(x):=\sum _{j \in \mathbb {Z}}\chi _j(x)a(x_j)\) and \(u_M^{\Delta }(x):=\sum _{j \in \mathbb {Z}}\chi _j(x)u_M(x_j)\). As \(\Delta \rightarrow 0\), \(a^{\Delta } \rightarrow a\) and \(u_M^\Delta \rightarrow u_M\) in \(L^1_{{{\,\mathrm{loc}\,}}}(\mathbb {R})\).

Proof

We give the proof for \(a^\Delta \); the argument for \(u_M^\Delta \) is analogous. Define

$$\begin{aligned} {\overline{m}}_j:= & {} \sup \{a(x);\,x\in I_j\},\\ {\underline{m}}_j:= & {} \inf \{a(x);\,x\in I_j\}. \end{aligned}$$

Then for any \(\sigma >0\), we have

$$\begin{aligned} \int \limits _{[-\sigma ,\sigma ]}\left| a(x)-a^{\Delta }(x)\right| \,dx\le & {} \sum _{j \in \mathbb {Z}}\int \limits _{I_j}({\overline{m}}_j-{\underline{m}}_j)\,dx\nonumber \\\le & {} \Delta x\,{{\,\mathrm{TV}\,}}(a)\rightarrow 0 \text{ as } \Delta x\rightarrow 0. \end{aligned}$$
(71)

\(\square \)
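The cell-averaging estimate (71) is easy to test numerically: sampling a BV function on each cell incurs an \(L^1\) error of at most \(\Delta x\,{{\,\mathrm{TV}\,}}(a)\). A sketch with a hypothetical monotone BV coefficient \(a(x)=\arctan x\) (the sampling point \(x_j\) is taken as the cell center here, an assumption of this illustration):

```python
import math

def tv(vals):
    """Discrete total variation of sampled values."""
    return sum(abs(vals[k + 1] - vals[k]) for k in range(len(vals) - 1))

a = lambda x: math.atan(x)   # hypothetical monotone BV coefficient

def l1_error(a, sigma, dx, fine=20):
    """|| a - a^Delta ||_{L^1[-sigma, sigma]}, where a^Delta equals a(x_j)
    on each cell I_j and x_j is taken as the cell center."""
    err, x = 0.0, -sigma
    while x < sigma - 1e-12:
        xj = x + dx / 2.0
        h = dx / fine
        err += sum(abs(a(x + (k + 0.5) * h) - a(xj)) for k in range(fine)) * h
        x += dx
    return err

sigma = 2.0
TV_a = tv([a(-sigma + k * 0.001) for k in range(int(2 * sigma / 0.001) + 1)])
for dx in (0.5, 0.25, 0.125):
    assert l1_error(a, sigma, dx) <= dx * TV_a + 1e-8   # the bound (71)
```

Halving \(\Delta x\) halves the admissible bound, which is exactly the linear decay used in the proof.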

Lemma 3.15

The approximations \(u^{\Delta }\) converge as \(\Delta \rightarrow 0\), modulo extraction of a subsequence, in \(L^1_{{{\,\mathrm{loc}\,}}}(\Pi _T)\) and pointwise a.e. in \(\Pi _T\) to a function \(u \in L^{\infty }(\Pi _T) \cap C([0,T]:L^1_{{{\,\mathrm{loc}\,}}}(\mathbb {R}))\).

Proof

From Lemmas 3.7, 3.9 and 3.13 we have, for some subsequence and some \(z \in L^1(\Pi _T) \cap L^{\infty }(\Pi _T)\), that \(z^{\Delta } \rightarrow z\) in \(L^1(\Pi _T)\) and pointwise a.e. Define \(u(x,t) = \Psi ^{-1}(z(x,t),x)\). We have \(u_j^n = \Psi ^{-1}(z_j^n,x_j)\), or

$$\begin{aligned} u^{\Delta }(x,t) = \Psi ^{-1}(z^{\Delta }(x,t),x^\Delta ) \text { for } (x,t) \in \Pi _T. \end{aligned}$$
(72)

By using triangle inequality and Lemma 3.3 we obtain the following

$$\begin{aligned}&\left| \Psi (u^{\Delta }(x,t),x)-z(x,t)\right| \nonumber \\&\quad \le \left| \Psi (u^{\Delta }(x,t),x^{\Delta })-z(x,t)\right| +\left| \Psi (u^{\Delta }(x,t),x)-\Psi (u^{\Delta }(x,t),x^{\Delta })\right| \nonumber \\&\quad \le \left| z^{\Delta }(x,t)-z(x,t)\right| +L\left| u_M(x)-u_M(x^\Delta )\right| +{\bar{R}}\left| a(x)-a(x^\Delta )\right| . \end{aligned}$$
(73)

From Lemma 3.14, we have \(a(x^\Delta )=a^\Delta \rightarrow a\) and \(u_M(x^\Delta )=u^\Delta _M\rightarrow u_M\) in \(L^1_{{{\,\mathrm{loc}\,}}}(\mathbb {R})\); therefore, up to a subsequence, \(a(x^\Delta )\rightarrow a(x)\) and \(u_M(x^\Delta )\rightarrow u_M(x)\) for a.e. \(x\in \mathbb {R}\). Hence, \(\Psi (u^{\Delta }(x,t),x)\rightarrow z(x,t)\) as \(\Delta \rightarrow 0\) for a.e. \((x,t)\in \Pi _T\). Fixing a point \((x,t) \in \Pi _T\) where \(\Psi (u^{\Delta }(x,t),x) \rightarrow z(x,t)\) and using the continuity of \(\zeta \mapsto \Psi ^{-1}(\zeta ,x)\) for each fixed \(x \in \mathbb {R}\), we get

$$\begin{aligned} u^{\Delta }(x,t) \rightarrow u(x,t). \end{aligned}$$
(74)

Thus \(u^{\Delta } \rightarrow u\) pointwise a.e. in \(\Pi _T\). Since \(u^{\Delta }\) is bounded in \(\Pi _T\) independently of \(\Delta \), we also have \(u^{\Delta } \rightarrow u\) in \(L^1_{{{\,\mathrm{loc}\,}}}(\Pi _T)\), by the dominated convergence theorem. In fact, due to the time continuity estimate (41), we also have \(u \in C([0,T]:L^1_{{{\,\mathrm{loc}\,}}}(\mathbb {R}))\). \(\square \)

4 Entropy inequality and proof of Theorem 2.5

In this section we show that u satisfies the adapted entropy inequality (12), the remaining ingredient required for the proof of Theorem 2.5.

Lemma 4.1

We have the following discrete entropy inequalities:

$$\begin{aligned} \left| u^{n+1}_j- k^{\pm }_{\alpha ,j}\right| \le \left| u_j^n - k^{\pm }_{\alpha ,j}\right| - \lambda (\mathcal {F}^n_{j+1/2} - \mathcal {F}^n_{j-1/2}), \text{ for } \text{ all } j\in \mathbb {Z}\end{aligned}$$
(75)

where

$$\begin{aligned} \mathcal {F}^n_{j+1/2}= & {} {\bar{A}}(u_j^n \vee k^{\pm }_{\alpha ,j},u_{j+1}^n \vee k^{\pm }_{\alpha ,j+1},x_j,x_{j+1})\\&- {\bar{A}}(u_j^n \wedge k^{\pm }_{\alpha ,j},u_{j+1}^n \wedge k^{\pm }_{\alpha ,j+1},x_j,x_{j+1}). \end{aligned}$$

Proof

The proof is a slightly generalized version of a now classical argument found in [9] or [14]. Denote the grid function \(\{u_j^n\}_{j \in \mathbb {Z}}\) by \(U^n\), and write the scheme defined by (17) as \(U^{n+1} = \Gamma (U^n)\), i.e., \(\Gamma (\cdot )\) is the operator that advances the solution from time level n to \(n+1\). Let \(K^{\pm }_{\alpha } = \{k_{\alpha }^{\pm }(x_j)\}_{j \in \mathbb {Z}}\). Since the scheme is monotone, we have

$$\begin{aligned} \Gamma (K^{\pm }_{\alpha }) \vee \Gamma (U^n) \le \Gamma (K^{\pm }_{\alpha } \vee U^n), \quad \Gamma (K^{\pm }_{\alpha }) \wedge \Gamma (U^n) \ge \Gamma (K^{\pm }_{\alpha } \wedge U^n). \end{aligned}$$
(76)

Using the fact that \(\Gamma (K^{\pm }_{\alpha }) = K^{\pm }_{\alpha }\), it follows from (76) that

$$\begin{aligned} U^{n+1} \vee K^{\pm }_{\alpha } - U^{n+1} \wedge K^{\pm }_{\alpha } \le \Gamma (K^{\pm }_{\alpha } \vee U^n) - \Gamma (K^{\pm }_{\alpha } \wedge U^n). \end{aligned}$$
(77)

The discrete entropy inequality (75) then follows from (77), using the definition of \(\Gamma (\cdot )\) in terms of (17) along with the identity \(a \vee b - a \wedge b = \left| a-b\right| \). \(\square \)
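The lattice inequalities (76) require nothing beyond monotonicity of \(\Gamma \) and can be checked on a toy monotone scheme. In the sketch below, \(\Gamma \) is one Godunov step for a hypothetical model flux \(A(u)=u^2/2\), and \(K\) is an arbitrary grid function (the paper's \(K^{\pm }_{\alpha }\) is additionally stationary, a property (76) does not need):

```python
def godunov_flux(v, w):
    """Godunov flux for the hypothetical model flux A(u) = u^2/2 (min at 0)."""
    A = lambda u: 0.5 * u * u
    return max(A(max(v, 0.0)), A(min(w, 0.0)))

def Gamma(u, lam=0.5):
    """One step of a monotone scheme (CFL lam*L <= 1 for this data)."""
    ue = [u[0]] + list(u) + [u[-1]]
    F = [godunov_flux(ue[j], ue[j + 1]) for j in range(len(ue) - 1)]
    return [ue[j + 1] - lam * (F[j + 1] - F[j]) for j in range(len(u))]

vmax = lambda a, b: [max(x, y) for x, y in zip(a, b)]
vmin = lambda a, b: [min(x, y) for x, y in zip(a, b)]

U = [0.3, -0.6, 0.8, 0.1]
K = [0.0, 0.2, 0.5, 0.9]   # any grid function; stationarity is not needed here
GU, GK = Gamma(U), Gamma(K)

# the lattice inequalities (76):
assert all(x <= y + 1e-12 for x, y in zip(vmax(GK, GU), Gamma(vmax(K, U))))
assert all(x <= y + 1e-12 for x, y in zip(Gamma(vmin(K, U)), vmin(GK, GU)))
```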

Lemma 4.2

Let

$$\begin{aligned} k_{\alpha }^{\pm ,\Delta }(x) = \sum _{j \in \mathbb {Z}}\chi _j(x) k_{\alpha ,j}^{\pm }. \end{aligned}$$
(78)

Then

$$\begin{aligned} k_{\alpha }^{\pm ,\Delta }(x) \rightarrow k_{\alpha }^{\pm }(x) \text { in } L^1_{{{\,\mathrm{loc}\,}}}(\mathbb {R})\text { and pointwise a.e. in } \mathbb {R}. \end{aligned}$$
(79)

Proof

We first show that \(\Psi (k^{\pm ,\Delta }_\alpha (\cdot ),\cdot )\rightarrow \Psi (k^{\pm }_\alpha (\cdot ),\cdot )\) as \(\Delta \rightarrow 0\). Observe that

$$\begin{aligned} \Psi (k^{\pm ,\Delta }_{\alpha }(x),x)= \sum _{j \in \mathbb {Z}}\chi _j(x) \Psi (k_{\alpha ,j}^{\pm },x). \end{aligned}$$

This yields

$$\begin{aligned} \left| \Psi (k^{\pm ,\Delta }_{\alpha }(x),x)-\Psi (k^{\pm }_\alpha (x),x)\right|\le & {} \sum _{j \in \mathbb {Z}}\chi _j(x)\left| \Psi (k^{\pm }_{\alpha ,j},x)-\Psi (k^{\pm }_\alpha (x),x)\right| \\\le & {} \sum _{j \in \mathbb {Z}}\chi _j(x)\left| \Psi (k^{\pm }_{\alpha ,j},x)-\Psi (k^{\pm }_{\alpha ,j},x_j)\right| \\&+\sum _{j \in \mathbb {Z}}\chi _j(x)\left| \Psi (k^{\pm }_{\alpha ,j},x_j)-\Psi (k^{\pm }_\alpha (x),x)\right| \\= & {} \sum _{j \in \mathbb {Z}}\chi _j(x)\left| \Psi (k^{\pm }_{\alpha ,j},x)-\Psi (k^{\pm }_{\alpha ,j},x_j)\right| \\&+\sum _{j \in \mathbb {Z}}\chi _j(x)\left| A(k^{\pm }_{\alpha ,j},x_j)-A(k^{\pm }_\alpha (x),x)\right| \\= & {} \sum _{j \in \mathbb {Z}}\chi _j(x)\left| \Psi (k^{\pm }_{\alpha ,j},x)-\Psi (k^{\pm }_{\alpha ,j},x_j)\right| . \end{aligned}$$

By virtue of Assumption H-2 we obtain

$$\begin{aligned} \left| \Psi (k^{\pm ,\Delta }_{\alpha }(x),x)-\Psi (k^{\pm }_\alpha (x),x)\right|&\le \sum _{j \in \mathbb {Z}}\chi _j(x)R(k^{\pm }_{\alpha ,j})\left| a(x)-a(x_j)\right| \nonumber \\&\le {\bar{R}}\left| a(x)-\sum _{j \in \mathbb {Z}}\chi _j(x)a(x_j)\right| . \end{aligned}$$
(80)

By using (71) in (80), we have \(\Psi (k^{\pm ,\Delta }_{\alpha }(x),x)\rightarrow \Psi (k^{\pm }_\alpha (x),x)\) as \(\Delta \rightarrow 0\) for a.e. \(x\in \mathbb {R}\). By using continuity of \(\zeta \mapsto \Psi ^{-1}(\zeta ,x)\) for each fixed \(x\in \mathbb {R}\) we have \(k^{\pm ,\Delta }_{\alpha }(x)\rightarrow k^{\pm }_\alpha (x)\) as \(\Delta \rightarrow 0\) for a.e. \(x\in \mathbb {R}\). \(\square \)

Lemma 4.3

The (subsequential) limit u guaranteed by Lemma 3.15 satisfies the adapted entropy inequalities (13).

Proof

Fix \(\alpha \ge 0\). Define

$$\begin{aligned} v_j^n := u_j^n \vee k^{\pm }_{\alpha ,j}, \quad w_j^n := u_j^n \wedge k^{\pm }_{\alpha ,j}, \end{aligned}$$
(81)

and

$$\begin{aligned} v^{\Delta }(x,t) := \sum _{n=0}^N \sum _{j \in \mathbb {Z}}\chi _j(x) \chi ^n(t) v_j^n, \quad w^{\Delta }(x,t) := \sum _{n=0}^N \sum _{j \in \mathbb {Z}}\chi _j(x) \chi ^n(t) w_j^n. \end{aligned}$$
(82)

We can rewrite \(v^{\Delta }\) and \(w^{\Delta }\) as follows

$$\begin{aligned}&2v^{\Delta }(x,t)=u^{\Delta }(x,t)+k^{\pm ,\Delta }_{\alpha }(x,t)+\left| u^{\Delta }(x,t)-k^{\pm ,\Delta }_{\alpha }(x,t)\right| ,\\&2w^{\Delta }(x,t)=u^{\Delta }(x,t)+k^{\pm ,\Delta }_{\alpha }(x,t)-\left| u^{\Delta }(x,t)-k^{\pm ,\Delta }_{\alpha }(x,t)\right| . \end{aligned}$$

Invoking the convergence results for \(u^{\Delta }\) and for \(k^{\pm ,\Delta }_{\alpha }\), we have

$$\begin{aligned} v^{\Delta } \rightarrow \underbrace{u \vee k^{\pm }_{\alpha }}_{=:v}, \quad w^{\Delta } \rightarrow \underbrace{u \wedge k^{\pm }_{\alpha }}_{=:w} \end{aligned}$$
(83)

pointwise a.e. and in \(L^1_{{{\,\mathrm{loc}\,}}}(\Pi _T)\). From Lemma 4.1 we have

$$\begin{aligned} \left| u^{n+1}_j - k^{\pm }_{\alpha ,j}\right| \le \left| u_j^n - k^{\pm }_{\alpha ,j}\right| - \lambda (\mathcal {F}^n_{j+1/2} - \mathcal {F}^n_{j-1/2}), \text{ for } \text{ all } j\in \mathbb {Z}\end{aligned}$$
(84)

where

$$\begin{aligned} \mathcal {F}^n_{j+1/2}= & {} {\bar{A}}(u_j^n \vee k^{\pm }_{\alpha ,j},u_{j+1}^n \vee k^{\pm }_{\alpha ,j+1},x_j,x_{j+1})\\&- {\bar{A}}(u_j^n \wedge k^{\pm }_{\alpha ,j},u_{j+1}^n \wedge k^{\pm }_{\alpha ,j+1},x_j,x_{j+1}). \end{aligned}$$

Let \(0 \le \phi (x,t) \in C_0^1(\mathbb {R}\times (0,T))\) be a test function, and define \(\phi _j^n = \phi (x_j,t^n)\). As in the proof of the Lax-Wendroff theorem, we multiply (84) by \(\phi _j^n \Delta x\), and then sum by parts to arrive at

$$\begin{aligned}&\Delta x \Delta t \sum _{n=0}^N \sum _{j \in \mathbb {Z}}\left| u_j^{n+1} - k^{\pm }_{\alpha ,j}\right| (\phi _j^{n+1} -\phi _j^n)/\Delta t\nonumber \\&\quad +\Delta x \Delta t \sum _{n=0}^N \sum _{j \in \mathbb {Z}}\mathcal {F}^n_{j+1/2} (\phi _{j+1}^{n} -\phi _j^n)/\Delta x\nonumber \\&\quad + \Delta x \sum _{j \in \mathbb {Z}}\left| u_j^0 - k^{\pm }_{\alpha ,j}\right| \phi _j^0 \ge 0. \end{aligned}$$
(85)

The first and third sums on the left side of (85) converge to \(\int _{\mathbb {R}_+}\int _{\mathbb {R}}\frac{\partial \phi }{\partial t}|u(x,t)-k^{\pm }_{\alpha }(x)|\,dx\,dt\) and \(\int _{\mathbb {R}} \left| u_0(x) -k^{\pm }_{\alpha }(x)\right| \phi (x,0) \,dx\), respectively. The crucial part of the argument is to prove convergence of the second sum on the left side of (85). It suffices to prove that

$$\begin{aligned} \mathcal {I}_1:= & {} \Delta x \Delta t \sum _{n=0}^N \sum _{j \in \mathbb {Z}}{\bar{A}}(v_j^n,v_{j+1}^n,x_j,x_{j+1}) (\phi _{j+1}^{n} -\phi _j^n)/\Delta x \nonumber \\&\rightarrow \int _0^T \int _{\mathbb {R}} A(v,x) \phi _x \, dx \, dt, \end{aligned}$$
(86)

and that

$$\begin{aligned} \mathcal {I}_2:= & {} \Delta x \Delta t \sum _{n=0}^N \sum _{j \in \mathbb {Z}}{\bar{A}}(w_j^n,w_{j+1}^n,x_j,x_{j+1}) (\phi _{j+1}^{n} -\phi _j^n)/\Delta x \nonumber \\&\rightarrow \int _0^T \int _{\mathbb {R}} A(w,x) \phi _x \, dx \, dt. \end{aligned}$$
(87)

We will prove (86). The proof of (87) is similar. We start with the following identity:

$$\begin{aligned} \begin{aligned}&{\bar{A}}(v_j^n,v_{j+1}^n,x_j,x_{j+1}) - {\bar{A}}(v_j^n,v_{j}^n,x_j,x_{j})\\&\quad ={\bar{A}}(v_j^n,v_{j+1}^n,x_j,x_{j+1}) - {\bar{A}}(v_j^n,v_{j}^n,x_j,x_{j+1})\\&\qquad +{\bar{A}}(v_j^n,v_{j}^n,x_j,x_{j+1}) - {\bar{A}}(v_j^n,v_{j}^n,x_j,x_{j}). \end{aligned} \end{aligned}$$
(88)

Taking absolute values, using \({\bar{A}}(v_j^n,v_{j}^n,x_j,x_{j}) = A(v_j^n,x_j)\), and (27), we have

$$\begin{aligned} \begin{aligned} \left| {\bar{A}}(v_j^n,v_{j+1}^n,x_j,x_{j+1}) - A(v_j^n,x_j)\right|&\le L \left| v_{j+1}^n-v_j^n\right| + L \left| u_M(x_{j+1}) - u_M(x_j)\right| \\&\quad +R(\min (v_j^n,u_M(x_j)))\left| a(x_{j+1})-a(x_j)\right| . \end{aligned}\nonumber \\ \end{aligned}$$
(89)

Thus, with the abbreviation \(\rho _j^n := (\phi _{j+1}^{n} -\phi _j^n)/\Delta x\),

$$\begin{aligned} \begin{aligned}&\Delta x \Delta t \sum _{n=0}^N \sum _{j \in \mathbb {Z}}\left| {\bar{A}}(v_j^n,v_{j+1}^n,x_j,x_{j+1}) - A(v_j^n,x_j)\right| \rho _j^n\\&\quad \le L \underbrace{\Delta x \Delta t \sum _{n=0}^N \sum _{j \in \mathbb {Z}}\left| v_{j+1}^n-v_j^n\right| \rho _j^n}_{S_1} \\&\qquad + \underbrace{{\bar{R}}\Delta x \Delta t \sum _{n=0}^N \sum _{j \in \mathbb {Z}}\left| a(x_{j+1})-a(x_j)\right| \rho _j^n}_{S_2} \\&\qquad + L \underbrace{\Delta x \Delta t \sum _{n=0}^N \sum _{j \in \mathbb {Z}}\left| u_M(x_{j+1}) - u_M(x_j)\right| \rho _j^n}_{S_3}. \end{aligned} \end{aligned}$$
(90)

For \(S_1\), we can invoke the Kolmogorov compactness criterion [14], since \(v^{\Delta }\) converges in \(L^1_{{{\,\mathrm{loc}\,}}}(\Pi _T)\), and conclude that \(S_1 \rightarrow 0\). By Assumptions H-2 and H-4 (\(a\in \text {BV}(\mathbb {R})\) and \(u_M \in \text {BV}(\mathbb {R})\)), we also have \(S_2,S_3 \rightarrow 0\). As a result, in order to prove (86) it suffices to show that

$$\begin{aligned} \Delta x \Delta t \sum _{n=0}^N \sum _{j \in \mathbb {Z}}A(v_j^n,x_j) (\phi _{j+1}^{n} -\phi _j^n)/\Delta x \rightarrow \int _0^T \int _{\mathbb {R}} A(v,x) \phi _x \, dx \, dt. \end{aligned}$$
(91)

This limit then follows from the estimate

$$\begin{aligned} \begin{aligned} \left| A(v_j^n,x_j)- A(v,x)\right|&\le \left| A(v_j^n,x_j)-A(v_j^n,x)\right| + \left| A(v_j^n,x) - A(v,x)\right| \\&\le R(v_j^n)\left| a(x_j)-a(x)\right| + L\left| v_j^n - v\right| , \end{aligned}\nonumber \\ \end{aligned}$$
(92)

along with the fact that \(a^{\Delta }\rightarrow a\) in \(L^1_{{{\,\mathrm{loc}\,}}}(\mathbb {R})\) (Lemma 3.14), and \(v^{\Delta } \rightarrow v\) in \(L^1_{{{\,\mathrm{loc}\,}}}(\Pi _T)\).

\(\square \)

We can now prove Theorem 2.5.

Proof

Taken together, Lemmas 3.15 and 4.3 establish that the approximations \(u^{\Delta }\) converge in \(L^1_{{{\,\mathrm{loc}\,}}}(\Pi _T)\) and pointwise a.e. in \(\Pi _T\), along a subsequence, to a function \(u \in L^{\infty }(\Pi _T) \cap C([0,T]:L^1_{{{\,\mathrm{loc}\,}}}(\mathbb {R}))\), and u is an adapted entropy solution in the sense of Definition 2.1. By Theorem 2.2, u is the unique solution to the Cauchy problem (1)–(2) with initial data \(u_0\). Moreover, as a consequence of the uniqueness result, the entire computed sequence \(u^{\Delta }\) converges to u, not just a subsequence. The final step of the proof is to extend the result to the case of \(u_0 \in L^{\infty }(\mathbb {R})\), as described in Sect. 3. \(\square \)