Abstract
Although front tracking can be thought of as a numerical method, and has indeed been shown to be excellent for one-dimensional conservation laws, it is not part of the standard repertoire of numerical methods for conservation laws. Traditionally, difference methods have been central to the development of the theory of conservation laws, and the study of such methods is very important in applications.
This chapter is intended to give a brief introduction to difference methods for conservation laws. The emphasis throughout will be on methods and general results rather than on particular examples. Although difference methods and the concepts we discuss can be formulated for systems, we will exclusively concentrate on scalar equations. This is partly because we want to keep this chapter introductory, and partly due to the lack of general results for difference methods applied to systems of conservation laws.
Computation will cure what ails you.
— Clifford Truesdell, The Computer, Ruin of Science and Threat to Mankind, 1980/1982
3.1 Conservative Methods
We are interested in numerical methods for the scalar conservation law in one dimension. (We will study multidimensional problems in Chapter 4.) Thus we consider
A difference method is created by replacing the derivatives by finite differences, e.g.,
Here \({\Updelta t}\) and \(\Updelta x\) are small positive numbers. We shall use the notation
where \(u^{n}_{j}\) now is our numerical approximation to the solution u of (3.1) at the point \((j\Updelta x,n{\Updelta t})\). Normally, since we are interested in the initial value problem (3.1), we know the initial approximation
and we want to use (3.2) to calculate \(u^{n}\) for \(n\in\mathbb{N}\). We will not say much about boundary conditions in this book. Often one assumes that the initial data is periodic, i.e.,
which gives \(u^{n}_{-K+j}=u^{n}_{K+j}\). Another commonly used device is to assume that \(\partial_{x}f(u)=0\) at the boundary of the computational domain. For a numerical scheme this means that
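In code, the two boundary treatments just described amount to padding the solution vector with ghost values; the helper names and array layout below are our own illustrative choices:

```python
import numpy as np

# Sketch of the two boundary treatments above, realized as ghost cells:
def pad_periodic(u):
    # periodic data: u_{-1} = u_{K-1} and u_{K} = u_{0}
    return np.concatenate(([u[-1]], u, [u[0]]))

def pad_outflow(u):
    # "zero flux gradient" at the boundary: copy the edge values
    return np.concatenate(([u[0]], u, [u[-1]]))

u = np.array([1.0, 2.0, 3.0])
print(pad_periodic(u))   # [3. 1. 2. 3. 1.]
print(pad_outflow(u))    # [1. 1. 2. 3. 3.]
```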
For nonlinear equations, explicit methods are most common. These can be written
for some function G. We see that \(u^{n+1}\) can depend on the previous \(l+1\) approximations \(u^{n},\ldots,u^{n-l}\). The simplest methods are those with l = 0, where \(u^{n+1}=G(u^{n})\), and we shall restrict ourselves to such methods in this presentation.
⋄ Example 3.1 (A nonconservative method)
Consider Burgers’s equation written in nonconservative form (writing \(uu_{x}\) instead of \(\frac{1}{2}(u^{2})_{x}\))
Based on the linear transport equation, if \(u^{n}_{j}> 0\), a natural discretization of this would be
with \(\lambda={\Updelta t}/{\Updelta x}\). Since it is based on the nonconservative formulation, we do not automatically have conservation of u. Indeed,
This in itself might not seem so bad, since it may happen that \({\Updelta x}\sum_{j}(u^{n}_{j}-u^{n}_{j-1})^{2}\) vanishes as \({\Updelta x}\to 0\). However, let us examine what happens in a specific case. Let the initial data be given by
The entropy solution to Burgers’s equation consists of a rarefaction wave, centered at x = 0, and a shock with left value u = 1 and right value u = 0, starting from x = 1 and moving to the right with speed 1/2. At t = 2 the rarefaction wave will catch up with the shock. Thus at t = 2 the entropy solution reads
We use \(u^{0}_{j}=u_{0}(j{\Updelta x})\) as initial data for the scheme. Then we have that for every j such that \(j{\Updelta x}> 1\), \(u^{n}_{j}=0\) for all \(n\geq 0\). So if \(N{\Updelta t}=2\), then \(u^{N}_{j}=0\), and clearly \(u^{N}_{j}\not\approx u(j{\Updelta x},2)\) for \(1\leq j{\Updelta x}\leq 2\). This method simply fails to ‘‘detect’’ the moving shock.
We might think that the situation would be better if we used a (second-order) approximation to u x instead, resulting in the scheme
In practice, this scheme computes something that moves to the right, but the rarefaction part of the solution is not well approximated. In Fig. 3.1 we show how these two nonconservative schemes work on this example. Henceforth, we will not discuss nonconservative schemes. ⋄
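The failure in Example 3.1 is easy to reproduce. In the sketch below we assume the natural one-sided discretization \(u^{n+1}_{j}=u^{n}_{j}-\lambda u^{n}_{j}(u^{n}_{j}-u^{n}_{j-1})\) and Riemann-type data equal to one on \((0,1)\) and zero elsewhere (both are our reading of the omitted displays); the grid parameters are illustrative choices:

```python
import numpy as np

# Nonconservative one-sided scheme for Burgers' equation u_t + u u_x = 0:
#   u^{n+1}_j = u^n_j - lam * u^n_j * (u^n_j - u^n_{j-1})
dx = 0.02
lam = 0.5                                  # lam = dt/dx, so dt = lam*dx
x = np.arange(-0.5, 3.0, dx)
u = np.where((x > 0.0) & (x < 1.0), 1.0, 0.0)

t = 0.0
while t < 2.0 - 1e-12:                     # march to t = 2
    u = u - lam * u * (u - np.roll(u, 1))
    t += lam * dx

# The entropy solution has a shock at x = 2 by t = 2, but since u_j = 0
# implies that u_j stays zero, the scheme never "detects" the moving shock:
print(float(np.max(np.abs(u[x > 1.0]))))   # 0.0
```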
We call a difference method conservative if it can be written in the form
where
The function F is referred to as the numerical flux. For brevity, we shall often use the notation
so that (3.6) reads
The above equation has a nice formal explanation. Set \(x_{j}=j{\Updelta x}\) and \(x_{j+1/2}=x_{j}+{\Updelta x}/2\) for \(j\in\mathbb{Z}\). Likewise, set \(t_{n}=n{\Updelta t}\) for \(n\in\mathbb{N}_{0}=\{0\}\cup\mathbb{N}\). Define the interval \(I_{j}=[x_{j-1/2},x_{j+1/2})\) and the cell \(I^{n}_{j}=I_{j}\times[t_{n},t_{n+1})\). If we integrate the conservation law
over the cell \(I^{n}_{j}\), we obtain
Now defining \(u^{n}_{j}\) as the average of \(u(x,t_{n})\) in \(I_{j}\), i.e.,
we obtain the exact expression
Comparing this with (3.7), we see that it is reasonable that the numerical flux \(F_{j+1/2}\) approximates the average flux through the line segment \(x_{j+1/2}\times[t_{n},t_{n+1}]\). Thus
With this interpretation of \(F^{n}_{j+1/2}=F_{j+1/2}(u^{n})\), equation (3.7) states that the change in the amount of u inside the ‘‘volume’’ \(I_{j}\) equals (approximately) the influx minus the outflux. Methods that can be written in the form (3.7) are often called finite volume methods.
If \(u(x,t_{n})\) is the piecewise constant function
we can solve the conservation law exactly for \(0\leq t-t_{n}\leq{\Updelta x}/(2\max_{u}\left|f^{\prime}(u)\right|)\). This is true because the initial data is a series of Riemann problems, whose solutions will not interact in this short time interval. We also see that \(f(u(x_{j+1/2},t))\) is independent of t, and depends only on \(u^{n}_{j}\) and \(u^{n}_{j+1}\). So if we set \(v=w(x/t)\) to be the entropy solution to
then
This method is called the Godunov method. In general, it is well defined (see Exercise 3.5) for
This last condition is called the Courant–Friedrichs–Lewy (CFL) condition.
If \(f^{\prime}(u)\geq 0\) for all u, then \(v(0)=u^{n}_{j}\), and the Godunov method simplifies to
This is called the upwind method.
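A minimal sketch of the upwind method, with periodic boundaries and Burgers’s flux \(f(u)=u^{2}/2\) as illustrative choices (so \(f^{\prime}=u\geq 0\) for the data used):

```python
import numpy as np

# Upwind method u^{n+1}_j = u^n_j - lam*(f(u^n_j) - f(u^n_{j-1})),
# valid when f' >= 0; lam = dt/dx must satisfy the CFL condition.
def upwind_step(u, f, lam):
    fu = f(u)
    return u - lam * (fu - np.roll(fu, 1))   # periodic grid

f = lambda u: 0.5 * u**2
x = np.linspace(0.0, 1.0, 100, endpoint=False)
u0 = np.where(x < 0.5, 1.0, 0.0)
u1 = upwind_step(u0, f, lam=0.5)

# Being conservative, the scheme preserves the total amount of u exactly:
# the flux differences telescope over the periodic grid.
print(np.isclose(u1.sum(), u0.sum()))   # True
```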
Conservative methods have the property that \(\int u\,dx\) is conserved, since
If we set \(u^{0}_{j}\) equal to the average of u 0 over the jth grid cell, i.e.,
and for the moment assume that \(F^{n}_{-K-1/2}=F_{K+1/2}^{n}\), then
A conservative method is said to be consistent if
and in addition, we demand that F be Lipschitz continuous in all its variables, that is,
for some constant L.
⋄ Example 3.2 (Some conservative methods)
We have already seen that the Godunov method (and in particular the upwind method) is an example of a conservative finite volume method.
Another prominent example is the Lax–Friedrichs scheme, usually written
This can be written in conservative form by defining
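The flux definition can be checked in code. We take the usual Lax–Friedrichs flux \(F(u,v)=\frac{1}{2}(f(u)+f(v))-(v-u)/(2\lambda)\) (stated here as an assumption, since the display is not reproduced above) and verify that the finite volume update reproduces the averaged form of the scheme:

```python
import numpy as np

# Lax-Friedrichs in conservative form, on a periodic grid:
def lf_conservative(u, f, lam):
    up = np.roll(u, -1)                                  # u_{j+1}
    F = 0.5 * (f(u) + f(up)) - (up - u) / (2.0 * lam)    # F_{j+1/2}
    return u - lam * (F - np.roll(F, 1))                 # u_j - lam*(F_{j+1/2}-F_{j-1/2})

# The familiar direct form of the scheme, for comparison:
def lf_direct(u, f, lam):
    up, um = np.roll(u, -1), np.roll(u, 1)
    return 0.5 * (um + up) - 0.5 * lam * (f(up) - f(um))

f = lambda u: 0.5 * u**2
u = np.sin(2.0 * np.pi * np.linspace(0.0, 1.0, 50, endpoint=False))
print(np.allclose(lf_conservative(u, f, 0.4), lf_direct(u, f, 0.4)))   # True
```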
Some methods, so-called two-step methods, use iterates of the flux function. One such method is the Richtmyer two-step Lax–Wendroff scheme:
Another two-step method is the MacCormack scheme:
The Lax–Friedrichs and Godunov schemes are both of first order in the sense that the local truncation error is of order one. (We shall return to this concept below.) On the other hand, both the Lax–Wendroff and MacCormack methods are of second order. In general, higher-order methods are good for smooth solutions, but they also produce solutions that oscillate in the vicinity of discontinuities. See Sect. 3.2. Lower-order methods have ‘‘enough diffusion’’ to prevent oscillations. Therefore, one often uses hybrid methods. These methods usually consist of a linear combination of a lower- and a higher-order method. The numerical flux is then given by
where \(F^{L}\) denotes a lower-order numerical flux, and \(F^{H}\) a higher-order numerical flux. The function \(\theta_{j+1/2}\) is close to zero where \(u^{n}\) is smooth, and close to one near discontinuities. Needless to say, choosing appropriate θ’s is a discipline in its own right. We have implemented a method (called fluxlim in Fig. 3.2) that is a combination of the (second-order) MacCormack method and the (first-order) Lax–Friedrichs scheme, and this scheme is compared with the ‘‘pure’’ methods in this figure. We somewhat arbitrarily used
where \(D_{\pm}\) are the forward and backward divided differences,
so that \(D_{+}D_{-}\) is an approximation to the second derivative of u with respect to x, namely
Another approach is to try to generalize Godunov’s method by replacing the piecewise constant data u n by a smoother function. The simplest such replacement is by a piecewise linear function. To obtain a proper generalization, one should then solve a generalized ‘‘Riemann problem’’ with linear initial data to the left and right. While this is difficult to do exactly, one can use approximations instead. One such approximation leads to the following method:
Here \(\Updelta_{\pm}u^{n}_{j}=\pm(u^{n}_{j\pm 1}-u^{n}_{j})={\Updelta x}\,D_{\pm}u^{n}_{j}\), and
where
and
This method is labeled slopelim in the figures. Now we show how these methods perform on two test examples. In both examples the flux function is given by (see Exercise 2.1)
The example is motivated by applications in oil recovery, where one often encounters flux functions that have a shape similar to that of f, that is, \(f^{\prime}\geq 0\) and \(f^{\prime\prime}(u)=0\) at a single point u. The model is called the Buckley–Leverett equation. The first example uses initial data
In Fig. 3.2 we show the computed solution at time t = 1 for all methods, using 30 grid points in the interval \([-0.1,1.6]\), and \(\Updelta x=1.7/29\), \(\Updelta t=0.5\Updelta x\). The second example uses initial data
and 30 grid points in the interval \([-0.1,2.6]\), \(\Updelta x=2.7/29\), \(\Updelta t=0.5\Updelta x\). In Fig. 3.3 we also show a reference solution computed by the upwind method using 500 grid points. The most notable feature of the plots in Fig. 3.3 is the oscillations in the solutions computed by the second-order methods. We shall show that if a sequence of solutions produced by a consistent conservative method converges, then the limit is a weak solution. The exact solution to both these problems can be calculated by the method of characteristics. ⋄
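The first test case above can be sketched with the upwind method. We assume the standard Buckley–Leverett flux \(f(u)=u^{2}/(u^{2}+(1-u)^{2})\) and Riemann-type data equal to one for \(x\leq 0\) and zero otherwise (the text’s exact formulas are in its omitted displays and Exercise 2.1); the grid parameters follow the text:

```python
import numpy as np

# Buckley-Leverett flux (assumed concrete form): f' >= 0 on [0,1], with a
# single inflection point, so the upwind method applies.
def f(u):
    return u**2 / (u**2 + (1.0 - u)**2)

dx = 1.7 / 29                       # 30 grid points on [-0.1, 1.6]
lam = 0.5                           # dt = 0.5*dx
x = -0.1 + dx * np.arange(30)
u = np.where(x <= 0.0, 1.0, 0.0)

t = 0.0
while t < 1.0:
    fu = f(u)
    u[1:] -= lam * (fu[1:] - fu[:-1])   # upwind; left value held at u = 1
    t += lam * dx

# Here max f' = 2, so lam*max f' = 1: the CFL condition holds and the
# monotone upwind scheme keeps the solution inside [0, 1].
print(bool(np.all((u >= -1e-10) & (u <= 1.0 + 1e-10))))   # True
```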
The local truncation error of a numerical method \(L_{\Updelta t}\) is defined as
where \(S(t)\) is the solution operator associated with (3.1), that is, \(u=S(t)u_{0}\) denotes the solution at time t, and \(S_{N}(t)\) is the solution operator associated with the numerical method, i.e.,
Assuming that we have a smooth solution of the conservation law, allowing us to expand all relevant quantities in Taylor series, we say that the method is of kth order if
as \({\Updelta t}\to 0\). To compute \(L_{\Updelta t}(x)\) one uses a Taylor expansion of the exact solution \(u(x,t)\) near x. We know that u may have discontinuities, so it does not necessarily have a Taylor expansion. Therefore, the concept of truncation error is formal. However, if \(u(x,t)\) is smooth near \((x,t)\), then one would expect that a higher-order method would approximate u better than a lower-order method near \((x,t)\).
⋄ Example 3.3 (Local truncation error)
Consider the upwind method. Then
We verify that the upwind method is of first order:
Since u is a smooth solution of (3.1 ), we find that
and inserting this into the previous equation, we obtain
Hence, the upwind method is of first order. This means that Godunov’s scheme is also of first order. Similarly, computations based on the Lax–Friedrichs scheme yield
Consequently, the Lax–Friedrichs scheme is indeed of first order. From the above computations it also emerges that the Lax–Friedrichs scheme is second-order accurate when applied to the equation (see Exercise 3.6)
This is called the model equation for the Lax–Friedrichs scheme. In order for this to be well posed, the coefficient of u xx on the right-hand side must be nonnegative, that is,
This is a stability restriction on λ, and it is the Courant–Friedrichs–Lewy (CFL) condition that we encountered in (3.9); see also (1.50).
The model equation for the upwind method is
In order for this equation to be well posed, we must have \(f^{\prime}(u)\geq 0\) and \(\lambda f^{\prime}(u)\leq 1\). ⋄
From the above examples, we see that first-order methods have model equations with a diffusive term. Similarly, one finds that second-order methods have model equations with a dispersive right-hand side. Therefore, the oscillations observed in the computations were to be expected.
From now on we let the function \(u_{{\Updelta t}}\) be defined by
Observe that
We briefly mentioned in Example 3.2 the fact that if \(u_{{\Updelta t}}\) converges, then the limit is a weak solution. Precisely, we have the well-known Lax–Wendroff theorem.
Theorem 3.4 (Lax–Wendroff theorem)
Let \(u_{{\Updelta t}}\) be computed from a conservative and consistent method. Assume that \(\mathrm{T.V.}_{x}\left(u_{{\Updelta t}}\right)\) is uniformly bounded in \({\Updelta t}\). Consider a subsequence \(u_{{\Updelta t}_{k}}\) such that \(\Updelta t_{k}\to 0\), and assume that \(u_{{\Updelta t}_{k}}\) converges in \(L^{1}_{\mathrm{loc}}\) as \({\Updelta t}_{k}\to 0\). Then the limit is a weak solution to (3.1).
Proof
The proof uses summation by parts. Let \(\varphi(x,t)\) be a test function. For simplicity we write \(\varphi^{n}_{j}=\varphi(x_{j},t_{n})\). By the definition of \(u^{n+1}_{j}\),
where we choose \(T=N{\Updelta t}\) such that \(\varphi=0\) for \(t\geq T\). After a summation by parts we get
Rearranging, we find that
This almost looks like a Riemann sum for the weak formulation of (3.1). Thus
as \({\Updelta x}\to 0\), and
as \({\Updelta x},{\Updelta t}\to 0\).
Since
as \({\Updelta x},{\Updelta t}\to 0\), it remains to show that
tends to zero as \({\Updelta t}\to 0\) in order to conclude that the limit is a weak solution. Using consistency, (3.12), we find that (3.30) equals
which by the Lipschitz continuity of F is less than
where L is the Lipschitz constant of F. Using the uniform boundedness of the total variation of \(u_{{\Updelta t}}\), we infer that (3.30) is small for small \({\Updelta x}\), and the limit is a weak solution. □
We proved in Theorem 2.15 that the solution of a scalar conservation law in one dimension possesses several properties. The corresponding properties for conservative and consistent numerical schemes read as follows:
Definition 3.5
Let \(u_{{\Updelta t}}\) be computed from a conservative and consistent method.
-
(i)
A method is said to be total variation bounded (TVB), or total variation stable, if the total variation of \(u^{n}\) is uniformly bounded, independently of \(\Updelta x\) and \({\Updelta t}\).
-
(ii)
Assume that \(u_{0}\) has finite total variation. We say that a numerical method is total variation diminishing (TVD) if \(\mathrm{T.V.}\left(u^{n+1}\right)\leq\mathrm{T.V.}\left(u^{n}\right)\) for all \(n\in\mathbb{N}_{0}\).
-
(iii)
A method is called monotonicity preserving if the initial data being monotone implies that \(u^{n}\) is monotone for all \(n\in\mathbb{N}\).
-
(iv)
Assume that \(u_{0}\in L^{1}(\mathbb{R})\). Let \(v_{{\Updelta t}}\) be another solution with initial data \(v_{0}\in L^{1}(\mathbb{R})\). A numerical method is called \(L^{1}\)-contractive if
$$\displaystyle{\left\|u_{{\Updelta t}}(t)-v_{{\Updelta t}}(t)\right\|}_{L^{1}}\leq{\left\|u_{\Updelta t}(0)-v_{{\Updelta t}}(0)\right\|}_{L^{1}}$$for all \(t\geq 0\). Alternatively, we can of course write this as
$$\displaystyle\sum_{j}\left|u^{n+1}_{j}-v^{n+1}_{j}\right|\leq\sum_{j}\left|u^{n}_{j}-v^{n}_{j}\right|,\quad n\in\mathbb{N}_{0}.$$ -
(v)
A method is said to be monotone if for initial data \(u_{0}\) and \(v_{0}\), we have
$$\displaystyle u^{0}_{j}\leq v^{0}_{j},\quad j\in\mathbb{Z}\quad\Rightarrow\quad u^{n}_{j}\leq v^{n}_{j},\quad j\in\mathbb{Z},\,n\in\mathbb{N}.$$
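The discrete total variation that the TVD property in (ii) controls is simply the sum of the absolute jumps; a minimal sketch:

```python
import numpy as np

# Discrete total variation: T.V.(u) = sum_j |u_{j+1} - u_j|.
def total_variation(u):
    return np.abs(np.diff(u)).sum()

u = np.array([0.0, 1.0, 0.5, 0.5, 0.0])
print(total_variation(u))   # 2.0
```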
The above notions are strongly interrelated, as the next theorem shows.
Theorem 3.6
For conservative and consistent methods the following hold:
-
(i)
Assume initial data to be integrable. In that case, every monotone method is \(L^{1}\)-contractive.
-
(ii)
Every \(L^{1}\)-contractive method is TVD.
-
(iii)
Every TVD method is monotonicity preserving.
Proof
(i) We apply the Crandall–Tartar lemma, Lemma 2.13, with \(\Omega=\mathbb{R}\), and D equal to the set of all functions in \(L^{1}\) that are piecewise constant on the grid \(I_{j}\), \(j\in\mathbb{Z}\), and we define \(T(u^{0})=u^{n}\). Since the method is conservative (cf. (3.11)), we have that
Lemma 2.13 immediately implies that (for \(t\in[t_{n},t_{n+1})\))
(ii) Assume now that the method is \(L^{1}\)-contractive, i.e.,
Let \(v^{n}\) be the numerical solution with initial data
Then by the translation invariance induced by (3.6), we have \(v^{n}_{i}=u^{n}_{i+1}\) for all n. Furthermore,
(iii) Consider now a TVD method, and assume that we have monotone initial data. Since \(\mathrm{T.V.}\left(u^{0}\right)\) is finite by assumption, the limits
exist. Then \(\mathrm{T.V.}\left(u^{0}\right)=\left|u_{R}-u_{L}\right|\). If \(u^{1}\) were not monotone, then \(\mathrm{T.V.}\left(u^{1}\right)> \left|u_{R}-u_{L}\right|=\mathrm{T.V.}\left(u^{0}\right)\), which is a contradiction. □
We can summarize the above theorem as follows:
Monotonicity is relatively easy to check for explicit methods, e.g., by calculating the partial derivatives \(\partial G/\partial u^{i}\) in (3.3).
⋄ Example 3.7 (Lax–Friedrichs scheme)
Recall from Example 3.2 that the Lax–Friedrichs scheme is given by
Computing partial derivatives, we obtain, assuming the flux function f to be continuously differentiable,
and hence we see that the Lax–Friedrichs scheme is monotone as long as the CFL condition
is fulfilled. See also Exercise 3.7. ⋄
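The partial-derivative check of Example 3.7 can be carried out numerically. Write the Lax–Friedrichs update as \(G(a,b,c)=\frac{1}{2}(a+c)-\frac{\lambda}{2}(f(c)-f(a))\) for the stencil \((a,b,c)=(u_{j-1},u_{j},u_{j+1})\); monotonicity means all three partial derivatives of G are nonnegative. Burgers’s flux is an illustrative choice here:

```python
import numpy as np

f = lambda u: 0.5 * u**2                    # Burgers flux, f'(u) = u

def G(a, b, c, lam):
    return 0.5 * (a + c) - 0.5 * lam * (f(c) - f(a))

# Central-difference approximations of dG/da, dG/db, dG/dc:
def partials(a, b, c, lam, h=1e-6):
    return ((G(a + h, b, c, lam) - G(a - h, b, c, lam)) / (2 * h),
            (G(a, b + h, c, lam) - G(a, b - h, c, lam)) / (2 * h),
            (G(a, b, c + h, lam) - G(a, b, c - h, lam)) / (2 * h))

# With |u| <= 0.8 and lam = 0.9, the CFL condition lam*max|f'| <= 1 holds:
print(all(p >= 0.0 for p in partials(0.8, -0.3, 0.8, lam=0.9)))   # True
# With lam = 1.5 it fails, and G is no longer monotone:
print(all(p >= 0.0 for p in partials(0.8, -0.3, 0.8, lam=1.5)))   # False
```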
Theorem 3.8
Fix T > 0. Assume that f is Lipschitz continuous. Let \(u_{0}\in L^{1}(\mathbb{R})\) have bounded variation. Assume that \(u_{{\Updelta t}}\) is computed with a method that is conservative, consistent, total variation bounded, and uniformly bounded, that is,
where M is independent of \({\Updelta x}\) and \({\Updelta t}\).
Then \(\left\{u_{{\Updelta t}}(t)\right\}\) has a subsequence that converges for all \(t\in[0,T]\) to a weak solution \(u(t)\) in \(L^{1}_{\mathrm{loc}}(\mathbb{R})\). Furthermore, the limit is in \(C\left([0,T];L^{1}_{\mathrm{loc}}(\mathbb{R})\right)\).
Proof
We intend to apply Theorem A.11. It remains to show that
for some nonnegative continuous function ν with \(\nu(0)=0\).
The Lipschitz continuity of the flux function implies, for fixed \({\Updelta t}\),
from which we conclude that
where L is the Lipschitz constant of F. More generally,
Now let \(\tau_{1},\tau_{2}\in[0,T]\), and choose \(\tilde{t}_{1},\tilde{t}_{2}\in\left\{n{\Updelta t}\mid 0\leq n\leq T/{\Updelta t}\right\}\) such that
By construction \(u_{{\Updelta t}}(\tau_{j})=u_{{\Updelta t}}(\tilde{t}_{j})\), and hence
Observe that this estimate is uniform in \(\tau_{1},\tau_{2}\in[0,T]\). We conclude that
for a sequence \({\Updelta t}\to 0\). The Lax–Wendroff theorem then says that this limit is a weak solution. □
At this point, the reader should review the concept of a Kružkov entropy condition; see Sect. 2.1. A function u is a Kružkov entropy solution of
if it satisfies
in the sense of distributions, where
for all \(k\in\mathbb{R}\).
The analogue of the Kružkov entropy pair for difference schemes reads as follows. We still employ \(\eta(u)=\left|u-k\right|\). Write
and observe the trivial identity
Then we define the numerical entropy flux Q by
or more explicitly,
Note that Q is consistent with the Kružkov entropy flux, i.e.,
Returning to monotone difference schemes, we have the following result.
Theorem 3.9
Fix T > 0. Assume that f is Lipschitz continuous. Let \(u_{0}\in L^{1}(\mathbb{R})\) have bounded variation. Assume that \(u_{{\Updelta t}}\) is computed with a method that is conservative, consistent, and monotone.
For every sequence \({\Updelta t}_{k}\to 0\), the family \(\left\{u_{{\Updelta t}_{k}}(t)\right\}\) converges in \(L^{1}_{\mathrm{loc}}(\mathbb{R})\) to the Kružkov entropy solution \(u(t)\) for all \(t\in[0,T]\). Furthermore, the limit is in \(C\left([0,T];L^{1}_{\mathrm{loc}}(\mathbb{R})\right)\).
Proof
Consider a sequence \({\Updelta t}_{k}\to 0\). Theorem 3.8 allows us to conclude that \(u_{{\Updelta t}_{k}}\) has a subsequence that converges in \(C([0,T];L^{1}([a,b]))\) to a weak solution. It remains to show that the limit satisfies a discrete Kružkov form. First we find, using (3.7) and (3.32), that
Using that \(u^{n+1}_{j}=G_{j}(u^{n})\), cf. (3.3), and the consistency of the scheme, see (3.12), which implies \(k=G(k,\dots,k)=G(k)\), we conclude from the monotonicity of the scheme that
Therefore,
Applying the technique used in proving the Lax–Wendroff theorem to (3.33) shows that the limit u satisfies
for every nonnegative test function \(\varphi\in C^{\infty}_{0}(\mathbb{R}\times[0,T])\) and for every \(k\in\mathbb{R}\).
Suppose there is another subsequence for which \(u_{\Updelta t}\) does not converge to the entropy solution. Then by the above argument, this subsequence has another subsequence for which the limit is the unique entropy solution. The uniqueness of the limit gives a contradiction, and we conclude that for all sequences \({\Updelta t}_{k}\to 0\), the sequence \(\{u_{{\Updelta t}_{k}}(t)\}\) converges to the unique entropy solution \(u(t)\). □
Note that the above theorem offers a constructive proof of the existence of weak entropy solutions to scalar conservation laws. The fact that monotone schemes converge to the entropy solution provides an alternative to the front-tracking method discussed in Chapter 2.
Now we shall examine the local truncation error of a general conservative, consistent, and monotone method. Since this can be written
we write
We assume that F, and hence G, is three times continuously differentiable with respect to all arguments, and write the derivatives with respect to the ith argument as
We set \(\partial_{i}F=0\) if i = 0. Throughout this calculation, we assume that the jth slot of G contains \(u^{n}_{j}\), so that \(G(\alpha_{0},\dots,\alpha_{p+q+1})=u_{j}-\lambda(\cdots)\). By consistency we have that
Using this, we find that
and
Therefore,
Furthermore,
We also find that
Having established this, we now let \(u=u(x,t)\) be a smooth solution of the conservation law (3.1). We are interested in applying G to \(u(x,t)\), i.e., in calculating
Set \(u_{i}=u(x+(i-(p+1)){\Updelta x},t)\) for \(i=0,\dots,p+q+1\). Then we find that
Next we observe, since \(\partial^{2}_{i,k}G=\partial^{2}_{k,i}G\) and using (3.39), that
Consequently, the penultimate term in the Taylor expansion of G above is zero, and we have that
Since u is a smooth solution of (3.1), we have already established that
Hence, we compute the local truncation error as
Thus if \(\beta> 0\), then the method is of first order. What we have done so far is valid for every conservative and consistent method where the numerical flux function is three times continuously differentiable. Next, we use that \(\partial_{i}G\geq 0\), so that \(\sqrt{\partial_{i}G}\) is well defined. This means that
Using the Cauchy–Schwarz inequality and (3.37), we find that
Thus, \(\beta(u)\geq 0\). Furthermore, the inequality is strict if more than one term in the sum on the right-hand side is different from zero. If \(\partial_{i}G(u,\dots,u)=0\) except for \(i=k\) for some k, then \(G(u_{0},\dots,u_{p+q+1})=u_{k}\) by (3.37). Hence the scheme is a linear translation, and by consistency, \(f(u)=cu\), where \(c=(j-k)/\lambda\). Therefore, monotone methods for nonlinear conservation laws are at most first-order accurate. This is indeed their main drawback. To recapitulate, we have proved the following theorem:
Theorem 3.10
Assume that the numerical flux F is three times continuously differentiable, and that the corresponding scheme is monotone. Then the method is at most first-order accurate.
3.2 Higher-Order Schemes
We want to derive a second-order difference approximation to the solution of a conservation law
In order to derive a scheme that is second-order accurate, the local truncation error must be third-order accurate. For a smooth solution we have
For a difference scheme we have \({\Updelta x}=\mathcal{O}\left({\Updelta t}\right)\), so if the resulting scheme is of second order, the difference approximation to \(f(u)_{x}\) must be second-order accurate, and the approximation to \((f^{\prime}f_{x})_{x}\) can be first-order accurate. We can use the following relations (where we write \(D_{0}(g(x))=(g(x+{\Updelta x})-g(x-{\Updelta x}))/(2{\Updelta x})\)):
This leads to the scheme
where
The scheme (3.42) is called the Lax–Wendroff scheme, and by construction it is of second order. We can see that it is conservative with a two-point numerical flux function given by \(F_{j+1/2}=F(u_{j},u_{j+1})\), where
⋄ Example 3.11
We test this second-order scheme on the equation
with two sets of periodic initial data
and \(u^{2}\) extended periodically. By periodicity, we know that \(u^{i}(x,k)=u^{i}(x,0)\) for \(k\in\mathbb{N}\). In Fig. 3.4 we have plotted the numerical solution at t = 3 with initial data \(u^{1}\) and \(u^{2}\) and \({\Updelta x}=1/30\). Note that for the smooth solution the method gives very accurate results, and the errors are indeed of second order. For the discontinuous solution, the errors seem large, and we also see the prominent oscillations trailing the discontinuity.
⋄
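A sketch of the Lax–Wendroff scheme in flux form. We evaluate \(f^{\prime}\) at the average state \((u_{j}+u_{j+1})/2\), one common choice for the two-point flux (an assumption, since the display above is omitted); for linear advection this reduces to the classical second-order scheme, and transporting a smooth profile one full period produces only a small error:

```python
import numpy as np

# Two-point Lax-Wendroff flux (assumed average-state Jacobian):
#   F(u, v) = (f(u)+f(v))/2 - (lam/2) * f'((u+v)/2) * (f(v)-f(u))
def lw_step(u, f, df, lam):
    up = np.roll(u, -1)                       # u_{j+1}, periodic grid
    F = (0.5 * (f(u) + f(up))
         - 0.5 * lam * df(0.5 * (u + up)) * (f(up) - f(u)))
    return u - lam * (F - np.roll(F, 1))

f, df = (lambda u: u), (lambda u: np.ones_like(u))   # advection, f(u) = u
x = np.linspace(0.0, 1.0, 64, endpoint=False)
u = np.sin(2.0 * np.pi * x)
lam = 0.8
for _ in range(80):                           # 80 steps of dt = 0.8*dx: t = 1
    u = lw_step(u, f, df, lam)
print(bool(np.max(np.abs(u - np.sin(2.0 * np.pi * x))) < 1e-2))   # True
```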
For simplicity we will for the moment assume that \(f^{\prime}\geq 0\), so that the upwind method is monotone (and hence TVD). If f is not monotone, then the upwind flux below should be replaced by a numerical flux giving a monotone method.
The Lax–Wendroff numerical flux function can be rearranged to read
We would like to modify the Lax–Wendroff method so that it is locally of second order where the solution is smooth, and first order and monotone near discontinuities. Hence, we would like to turn off the second-order correction near discontinuities. One way of doing this is to observe that the oscillations occur near discontinuities (this is the Gibbs phenomenon), and use oscillations as an indicator of when the second-order term should be turned off. As an important side effect, this is likely to make the resulting method TVD.
To this end let \(r_{j}\) (whose exact form will be specified later) be some ‘‘indicator of oscillations’’ near \(x_{j}\). We assume that if there are oscillations, then \(r_{j}<0\). Let \(\varphi(r)\) be a continuous function that is zero if r < 0.
Now we modify the numerical flux for the Lax–Wendroff method to read
If we set
the modified scheme reads
where we have defined
At this point the following lemma is convenient.
Lemma 3.12 (Harten’s lemma)
Let \(v_{j}\) be given by
where \(\Updelta_{\pm}u_{j}=\pm(u_{j\pm 1}-u_{j})\).
-
(i)
If \(A_{j+1/2}\) and \(B_{j+1/2}\) are nonnegative for all j, and \(A_{j+1/2}+B_{j+1/2}\leq 1\) for all j, then
$$\displaystyle\mathrm{T.V.}\left(v\right)\leq\mathrm{T.V.}\left(u\right).$$ -
(ii)
If \(A_{j+1/2}\) and \(B_{j+1/2}\) are nonnegative for all j, and \(A_{j-1/2}+B_{j+1/2}\leq 1\) for all j, then
$$\displaystyle\min_{k}{u_{k}}\leq v_{j}\leq\max_{k}u_{k},\quad j\in\mathbb{Z}.$$
Proof
(i) We have
Hence
(ii) We may write
from which the statement follows. □
Returning to the scheme (3.43), we introduce
Hence, we get the scheme
We want to choose \(\varphi\) and r such that we can use the above lemma, with \(B_{j+1/2}=0\), to conclude that the scheme is TVD. Note that \(\lambda\max_{u}f^{\prime}(u)\leq 1\) by the CFL condition and thus \(\alpha_{j+1/2}\geq 0\) and \(\lambda\alpha_{j+1/2}\leq 1\).
We define
To see that this can be used as an ‘‘indicator of oscillations,’’ note that since we have assumed that \(f^{\prime}\geq 0\), we have \(\nu_{j+1/2}\geq 0\) for all j, and by the CFL condition, \(\lambda\nu_{j+1/2}\leq 1\) for all j. Hence \(\alpha_{j+1/2}=\frac{1}{2}\nu_{j+1/2}(1-\lambda\nu_{j+1/2})\geq 0\) for all j. We say that ‘‘oscillations’’ are present at \(x_{j}\) if \(u_{j}\) is a local maximum or minimum. If so, then \(\mathrm{sign}\left(\Updelta_{-}u_{j}\right)\neq\mathrm{sign}\left(\Updelta_{+}u_{j}\right)\), and consequently, \(r_{j}\leq 0\). We also calculate
Hence
Let us assume that
If this assumption holds, then
This means that
For the other bound,
Summing up, we have proved the following result.
Lemma 3.13
Assume \(f^{\prime}\geq 0\). Let r j be defined by (3.45), and assume \(\lambda> 0\) is such that the CFL condition \(\lambda\max_{u}f^{\prime}(u)\leq 1\) holds. Assume further that the function \(\varphi\) is such that \(\varphi(r)\) vanishes for \(r\leq 0\) and satisfies (3.46). Then the finite volume scheme with numerical flux function (3.43) is TVD.
If we choose \(\varphi(r)=r\), we get another scheme, called the Beam–Warming (BW) scheme. The Beam–Warming scheme is also of second order, but not TVD. The Lax–Wendroff (LW) scheme is obtained by choosing \(\varphi(r)=1\).
If (for the moment) we do not care about TVD, we can define a family of second-order schemes by linear interpolation between the Beam–Warming and the Lax–Wendroff schemes. This interpolation can be done locally, meaning that we choose \(\varphi\) as
The scheme reads
If now \(u^{n}_{j}=u(x_{j},t_{n})\) is the exact solution, then we can calculate
This means that
If \(I=\mathcal{O}\left({\Updelta t}^{3}\right)\), then the combination of the LW and the BW schemes is of second order. By the CFL condition, \(0\leq\lambda\alpha_{j-1/2}\leq 1\). Furthermore, since u is an exact smooth solution, \(\alpha_{j+1/2}\Updelta_{+}u\approx{\Updelta x}f^{\prime}(u)(1-\lambda f^{\prime}(u))u_{x}\), or more precisely
Recall the definition of r j , equation (3.45), and set \(h(x)=f^{\prime}(u(x,t))(1-\lambda f^{\prime}(u(x,t)))u_{x}(x,t)\). With this notation we get
Therefore, to show that \(I=\mathcal{O}\left({\Updelta t}^{3}\right)\), it suffices to show that \(\Updelta_{-}\theta_{j}=\mathcal{O}\left({\Updelta t}\right)\). Since θ is a smooth function with values in \([0,1]\), we get
Thus we have shown that if θ is a Lipschitz continuous function, the resulting scheme is of second order.
Returning to \(\varphi\), we have shown that the scheme (3.47) is of second order if \(\varphi\) is Lipschitz continuous and
If \(\varphi\) satisfies both (3.46) and (3.48), then the resulting scheme (3.47) is TVD, and second-order accurate away from local extrema. The scheme also produces a convergent sequence of approximations, and the limit is a weak solution (prove this!).
The function \(\varphi\) is called a limiter; a list of popular limiters follows. It is clear that the graph of a limiter must lie in the shaded region in Fig. 3.5.
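For concreteness, here are three limiters commonly listed in this context, including the van Leer limiter used for Fig. 3.6; each vanishes for \(r\leq 0\):

```python
import numpy as np

def minmod(r):
    return np.maximum(0.0, np.minimum(1.0, r))

def superbee(r):
    return np.maximum.reduce([np.zeros_like(r),
                              np.minimum(2.0 * r, 1.0),
                              np.minimum(r, 2.0)])

def van_leer(r):
    return (r + np.abs(r)) / (1.0 + np.abs(r))

r = np.array([-1.0, 0.0, 0.5, 1.0, 3.0])
print(minmod(r))     # zero for r <= 0, then min(1, r)
print(van_leer(r))   # smooth, with van_leer(1) = 1 and limit 2 as r grows
```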
In Fig. 3.6 we show the approximate solutions to
and for \(x\notin[0,1]\) we extend \(u(x,0)\) periodically. The figure shows approximate solutions at t = 0 as well as the exact solution. To the left we see that both the Lax–Wendroff and the Beam–Warming schemes have pronounced oscillations, but the linear combination of the two schemes, in this case using the van Leer limiter, does not. This solution is also superior to the solution found by the upwind method. Since these methods limit the contribution of the higher-order numerical flux function, they are often called flux-limiter methods.
3.2.1 Semidiscrete Higher-Order Methods
Let us now consider semidiscrete higher-order methods, where we do not (initially) discretize time, only space. Based on the finite volume approach, such methods can be written
where \(u_{j}(t)\) is some approximation to the average of u in the cell \((x_{j-1/2},x_{j+1/2}]\). If the right-hand side of the above is a second-order approximation to \(-f(u)_{x}\) for smooth functions \(u(x)\), then the method is said to be second-order accurate. To get second-order accuracy in time as well, one could use a second-order Runge–Kutta method to integrate (3.49) numerically. One such example is Heun’s method:
The simplest way of achieving second-order accuracy is by choosing
This, however, gives a nonviable method if we combine it with a first-order Euler method in time. This combination is not stable. To see this, set \(f(u)=u\). With the Euler method it gives
Making the ansatz \(u^{n}_{j}=\mu_{n}e^{ij{\Updelta x}}\) (here \(i=\sqrt{-1}\)) yields
Therefore, \(\left|\mu_{n+1}\right|=\left|\mu_{n}\right|\sqrt{1+\lambda^{2}\sin^{2}({\Updelta x})}\), or
This is unconditionally unstable. Using Heun’s second-order method with (3.50) also gives an unstable method (see Exercise 3.8). Thus the choice (3.50) is of second order, but useless.
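The von Neumann computation above is easy to confirm numerically. The following sketch (grid size and mode number are our own choices) evolves a single Fourier mode with the central-in-space, Euler-in-time scheme and observes the predicted per-step growth factor \(\sqrt{1+\lambda^{2}\sin^{2}(k{\Updelta x})}\):

```python
import numpy as np

# Central difference in space + forward Euler in time for u_t + u_x = 0:
# the combination shown above to be unconditionally unstable.
N = 64
dx = 2 * np.pi / N
lam = 0.8                                  # lambda = dt/dx
k = 16                                     # Fourier mode chosen so sin(k*dx) = 1
x = dx * np.arange(N)
u = np.sin(k * x)

growth = np.sqrt(1 + lam**2 * np.sin(k * dx)**2)   # predicted per-step growth

steps = 200
for _ in range(steps):
    u = u - 0.5 * lam * (np.roll(u, -1) - np.roll(u, 1))

amp = np.max(np.abs(u))
# the amplitude blows up like growth**steps (here roughly 1.28**200)
assert amp > 1e6
assert growth**steps / 1.5 < amp <= growth**steps * 1.001
```

The lower factor 1.5 only accounts for the grid sampling the complex mode at a finite number of phases; the growth rate itself matches the von Neumann prediction exactly.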
In order to overcome this, we define values to the left and right of a cell edge \(u^{L}_{j+1/2}\) and \(u^{R}_{j-1/2}\) by
Then we can use any two-point monotone first-order numerical flux \(F(u,v)\) to define a second-order approximation
Even if we use Heun’s method for time integration, the extrapolation values (3.51) do not give a TVD method. This is to be expected, since the method is formally second-order accurate. We illustrate this in Fig. 3.7 for the linear equation \(u_{t}+ u_{x}= 0\) with smooth and discontinuous initial values. We used the upwind first-order numerical flux \(F(u,v)= f(u)= u\). From Fig. 3.7 we see that for smooth initial data, the approximation is ‘‘reasonably close’’ to the correct function, whereas for discontinuous initial data, the approximation bears little relation to the exact solution.
These results suggest that the method will be improved if we use some kind of limiter to define the extrapolated values \(u^{L,R}_{j+1/2}\). To this end, set \(\varphi_{j}=\varphi(r_{j})\), where \(r_{j}\) is to be defined, and redefine the extrapolations as
For simplicity, we now assume that \(f^{\prime}\geq 0\), and that the numerical flux function is the upwind flux, i.e., \(F(u,v)=f(u)\). In this case the resulting scheme is
We aim to define r j and find conditions on \(\varphi\) such that the above scheme is TVD but retains the formal second order away from oscillations. In order to use Lemma 3.12, we rewrite the scheme as
where we have used a first-order Euler method for the integration in time. This will of course destroy the formal second-order accuracy, but it is convenient for analysis. With
the scheme will be TVD if \(0\leq A_{j-1/2}\leq 1\). Dropping the superscript n, we calculate
where \(\bar{u}_{j}\) is some value between \(u^{L}_{j-1/2}\) and \(u^{L}_{j+1/2}\). If we now choose
this can be rewritten as
We now demand that the scheme satisfy the CFL condition
In this case \(0\leq A_{j-1/2}\leq 1\) if
which can be rewritten
This is the case if
which gives the same TVD-region as for the flux-limiter schemes; see Fig. 3.5.
The scheme with \(\varphi(r)\equiv 1\) is not TVD, but of second order, and the choice \(\varphi(r)=r\) gives the (useless) second-order scheme with numerical flux (3.50). It follows as before that every smooth (in r) convex combination of these two schemes will also be of second order. Therefore, we get the same second-order region as in Fig. 3.5. Hence we have the same choice of limiter functions as before. Each choice gives a formally second-order scheme away from local extrema. This method is called MUSCL (monotone upstream centered scheme for conservation laws).
In Fig. 3.8 we show how the above schemes perform on the model equation \(u_{t}+u_{x}=0\) with smooth and discontinuous initial data. The MUSCL method does not perform as well as the flux-limiter method, but a clear difference can be seen between the first-order upwind method and the high-resolution methods (MUSCL and flux limiter). For both high-resolution methods, the computations in Fig. 3.8 use the van Leer limiter. The perceptive reader may have noticed that the flux-limiter method is further from the exact solution than the methods shown in Fig. 3.6. This is because we used the same time step for all the methods, the step being limited by the MUSCL method. Thus, the upwind and flux-limiter methods also have a time step \({\Updelta t}\leq\lambda{\Updelta x}\), with \(\lambda=0.49\).
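A sketch of such a comparison (linear advection with the van Leer limiter and Heun’s method in time; the grid size and the smooth test profile are our own choices, not those of Fig. 3.8):

```python
import numpy as np

def van_leer(r):
    return (r + np.abs(r)) / (1.0 + np.abs(r))

N = 200
dx = 1.0 / N
dt = 0.49 * dx                               # time step limited as for MUSCL
x = np.linspace(0.0, 1.0, N, endpoint=False)

# MUSCL semidiscrete operator for u_t + u_x = 0 with upwind flux F(u,v) = u:
# a limited slope defines the extrapolated edge value u^L_{j+1/2}.
def L_muscl(u):
    dp = np.roll(u, -1) - u
    dm = u - np.roll(u, 1)
    r = np.where(np.abs(dp) > 1e-14, dm / np.where(np.abs(dp) > 1e-14, dp, 1.0), 0.0)
    uL = u + 0.5 * van_leer(r) * dp          # u^L_{j+1/2}
    return -(uL - np.roll(uL, 1)) / dx

def L_upwind(u):                             # plain first-order upwind
    return -(u - np.roll(u, 1)) / dx

def run(L, u, steps):
    for _ in range(steps):                   # Heun's method in time
        us = u + dt * L(u)
        u = 0.5 * (u + us + dt * L(us))
    return u

u0 = np.sin(2 * np.pi * x) ** 4              # smooth periodic initial data
steps = int(round(1.0 / dt))
T = steps * dt
exact = np.sin(2 * np.pi * ((x - T) % 1.0)) ** 4

err_muscl = dx * np.sum(np.abs(run(L_muscl, u0.copy(), steps) - exact))
err_upwind = dx * np.sum(np.abs(run(L_upwind, u0.copy(), steps) - exact))
assert err_muscl < err_upwind / 3            # high resolution clearly beats first order
```

The first-order upwind result is visibly smeared by numerical diffusion after one period, while the MUSCL result stays close to the exact profile except for a slight clipping at the extrema, where the limiter reduces the scheme to first order.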
3.3 Error Estimates
Let others bring order to chaos. I would bring chaos to order instead. — Kurt Vonnegut, Breakfast of Champions (1973)
The concept of local error estimates is based on formal computations, and such estimates indicate how a method performs in regions where the solution is smooth. Since the convergence of the methods discussed was in \(L^{1}\), it is reasonable to ask how far the approximate solution is from the true solution in this space.
In this section we will consider functions u that are maps \(t\mapsto u(t)\) from \([0,\infty)\) to \(L^{1}_{\mathrm{loc}}\cap{BV}(\mathbb{R})\) such that the one-sided limits \(u(t\pm)\) exist in \(L^{1}_{\mathrm{loc}}\), and for definiteness we assume that this map is right continuous. Furthermore, we assume that
We denote this class of functions by \(\mathcal{K}\). From Theorem 2.15 we know that solutions of scalar conservation laws are in the class \(\mathcal{K}\).
It is convenient to introduce moduli of continuity in time (see Appendix A)
From Theorem 2.15 we have that
for weak solutions of conservation laws.
Now let \(u(x,t)\) be any function in \(\mathcal{K}\), not necessarily a solution of (3.1). In order to measure how far u is from being a solution of (3.1) we insert u in the Kružkov form (cf. (2.23))
If u is a solution, then \(\Lambda_{T}\geq 0\) for all constants k and all nonnegative test functions ϕ. We shall now use the special test function
where
and \(\omega(x)\) is an even \(C^{\infty}\) function satisfying
Let \(v(x^{\prime},s^{\prime})\) be the unique weak solution of (3.1), and define
The comparison result reads as follows.
Theorem 3.14 (Kuznetsov’s lemma)
Let \(u(\,\cdot\,,t)\) be a function in \(\mathcal{K}\), and v a solution of (3.1). If \(0<\varepsilon_{0}<T\) and \(\varepsilon> 0\), then
where \(u_{0}=u(\,\cdot\,,0)\) and \(v_{0}=v(\,\cdot\,,0)\).
Proof
We use special properties of the test function Ω, namely that
and
Using (3.58) and (3.59), we find that
Since v is a weak solution, \(\Lambda_{\varepsilon,\varepsilon_{0}}(v,u)\geq 0\), and hence
Therefore, we would like to obtain a lower bound on A and an upper bound on B, the lower bound on A involving \({\left\|u(T)-v(T)\right\|}_{L^{1}}\) and the upper bound on B involving \({\left\|u_{0}-v_{0}\right\|}_{L^{1}}\). We start with the lower bound on A.
Let \(\rho_{\varepsilon}\) be defined by
Then
Now by a use of the triangle inequality,
Hence
Regarding the upper estimate on B, we similarly have that
and we also obtain
Since v is a solution, it satisfies the TVD property, and hence
using (A.10). By the properties of ω,
Applying (3.55), we obtain (recall that \(\varepsilon_{0}<T\))
and
Similarly,
and
If we collect all the above bounds, we should obtain the statement of the theorem. □
Observe that in the special case that u is a solution of the conservation law (3.1), we know that \(\Lambda_{\varepsilon,\varepsilon_{0}}(u,v)\geq 0\), and hence we obtain, as \(\varepsilon,\varepsilon_{0}\to 0\), the familiar stability result
We shall now show in three cases how Kuznetsov’s lemma can be used to give estimates on how fast a method converges to the entropy solution of (3.1).
⋄ Example 3.15 (The smoothing method)
While not a proper numerical method, the smoothing method provides an example of how the result of Kuznetsov may be used. The smoothing method is a (semi)numerical method approximating the solution of (3.1) as follows: Let \(\omega_{\delta}(x)\) be a standard mollifier with support in \([-\delta,\delta]\), and let \(t_{n}=n\Updelta t\). Set \(u^{0}=u_{0}*\omega_{\delta}\). For \(0\leq t<{\Updelta t}\) define \(u^{1}\) to be the solution of (3.1) with initial data \(u^{0}\). If \({\Updelta t}\) is small enough, \(u^{1}\) remains differentiable for \(t<{\Updelta t}\). In the interval \(\left[(n-1){\Updelta t},n\Updelta t\right)\), we define \(u^{n}\) to be the solution of (3.1), with \(u^{n}\left(x,(n-1){\Updelta t}\right)=u^{n-1}(\,\cdot\,,t_{n-1}-)*\omega_{\delta}\). The advantage of doing this is that \(u^{n}\) remains differentiable in x for all times, and the solution in the strips \(\left[t_{n},t_{n+1}\right)\) can be found by, e.g., the method of characteristics. To show that \(u^{n}\) is differentiable, we calculate
Let \(\mu(t)=\max_{x}\left|u_{x}(x,t)\right|\). Using that u is a classical solution of (3.1), we find by differentiating (3.1) with respect to x that
Write
where \(x_{0}(t)\) is the location of the maximum of \(\left|u_{x}\right|\). Then
since \(u_{xx}=0\) at an extremum of u x . Thus
where \(c={\left\|f^{\prime\prime}\right\|}_{\infty}\). The idea is now that (3.61) blows up at some finite time, and we choose \({\Updelta t}\) less than this time. We shall need a precise relation between \(\Updelta t\) and δ and must therefore investigate (3.61) further. Solving (3.61), we obtain
So if
the method is well defined. Choosing \({\Updelta t}=\delta/(2c\mathrm{T.V.}\left(u_{0}\right))\) will do.
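One smoothing step can be sketched numerically as follows (Burgers’ flux \(f(u)=u^{2}/2\) and all grid parameters are our own illustrative choices): mollifying bounds the slope of the data by roughly \(\mathrm{T.V.}\left(u_{0}\right)/\delta\), so for \({\Updelta t}\) of order δ the characteristics \(x_{0}+t f^{\prime}(u^{0}(x_{0}))\) do not cross, and the solution stays smooth up to the next smoothing time.

```python
import numpy as np

delta = 0.05
xs = np.linspace(-2.0, 2.0, 2001)
dx = xs[1] - xs[0]

u0 = np.where(xs < 0.0, 1.0, 0.0)           # a step: T.V.(u0) = 1

# standard mollifier with support in [-delta, delta], normalized to unit mass
z = np.arange(-delta, delta + dx / 2, dx)
w = np.exp(-1.0 / np.maximum(1.0 - (z / delta) ** 2, 1e-300))
w = np.where(np.abs(z) < delta, w, 0.0)
w /= np.sum(w)

u_s = np.convolve(u0, w, mode="same")       # u^0 = u0 * omega_delta

# the mollified slope is bounded by ~ T.V.(u0)/delta ...
slope = np.max(np.abs(np.diff(u_s))) / dx
assert slope * delta < 1.0

# ... so for Burgers' flux (characteristic speed f'(u) = u) the characteristics
# x0 + t * u_s(x0) remain monotone (no crossing) for t of order delta:
dt = 0.5 * delta
xt = xs + dt * u_s
assert np.all(np.diff(xt) > 0)              # characteristics still one-to-one
```

Since the characteristic map stays one-to-one over the strip, the solution there is classical and can be read off by transporting \(u^{0}\) along characteristics, exactly as described in the text.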
Since u is an exact solution in the strips \(\left[t_{n},t_{n+1}\right)\), we have
Summing these inequalities and setting \(k=v(y,s)\), where v is an exact solution of (3.1), we obtain
where we use the test function \(\Omega(x,y,t,s)=\omega_{\varepsilon_{0}}(t-s)\omega_{\varepsilon}(x-y)\). Integrating this over y and s, and letting \(\varepsilon_{0}\) tend to zero, we get
Using this in Kuznetsov’s lemma, and letting \(\varepsilon_{0}\to 0\), we obtain
where we have used that \(\lim_{\varepsilon_{0}\to 0}\nu_{t}\left(u,\varepsilon_{0}\right)=0\), which holds because u is a solution of the conservation law in each strip \(\left[t_{n},t_{n+1}\right)\).
To obtain a more explicit bound on the difference of u and v, we investigate \(\rho_{\varepsilon}(\omega_{\delta}*u,v)-\rho_{\varepsilon}(u,v)\), where \(\rho_{\varepsilon}\) is defined by (3.60),
which follows after writing \(\iiint=\frac{1}{2}\iiint+\frac{1}{2}\iiint\) and making the substitution \(x\mapsto x-\delta z\), \(z\mapsto-z\) in one of these integrals. Therefore,
by the triangle inequality and a further substitution \(y\mapsto x-y\). Since \(N=T/{\Updelta t}\), the last term in (3.63) is less than
using (3.62). Furthermore, we have that
Letting \(K=\mathrm{T.V.}\left(u_{0}\right)c\), we find that
using (3.63). Minimizing with respect to \(\varepsilon\), we find that
So, we have shown that the smoothing method is of order \(\frac{1}{2}\) in the smoothing coefficient δ. ⋄
⋄ Example 3.16 (The method of vanishing viscosity)
Another (semi)numerical method for (3.1) is the method of vanishing viscosity. Here we approximate the solution of (3.1) by the solution of
using the same initial data. Let \(u^{\delta}\) denote the solution of (3.65). Due to the dissipative term on the right-hand side, the solution of (3.65) remains a classical (twice differentiable) solution for all t > 0. Furthermore, the solution operator for (3.65) is TVD. Hence a numerical method for (3.65) will (presumably) not experience the same difficulties as a numerical method for (3.1). If \((\eta,q)\) is a convex entropy pair, we have, using the differentiability of the solution, that
Multiplying by a nonnegative test function \(\varphi\) and integrating by parts, we get
where we have used the convexity of η. Applying this with \(\eta=\left|u^{\delta}-u\right|\) and \(q=F(u^{\delta},u)\), we can bound \(\lim_{\varepsilon_{0}\to 0}\Lambda_{\varepsilon,\varepsilon_{0}}(u^{\delta},u)\) as follows:
Now letting \(\varepsilon_{0}\to 0\) in (3.57), we obtain
So the method of vanishing viscosity also has order \(\frac{1}{2}\). ⋄
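A numerical sketch of the vanishing-viscosity approximation (Burgers’ equation with Riemann data whose entropy solution is a single shock of speed \(1/2\); the discretization and all parameters are our own choices):

```python
import numpy as np

# Solve u_t + (u^2/2)_x = delta * u_xx with upwind convection and explicit
# diffusion, and measure the L1 distance to the entropy solution of (3.1).
def viscous_error(delta):
    N = 400
    dx = 4.0 / N
    x = -2.0 + dx * (np.arange(N) + 0.5)
    u = np.where(x < 0.0, 1.0, 0.0)         # u_l = 1, u_r = 0
    dt = 0.2 * dx**2 / delta                # stable explicit time step
    steps = int(round(1.0 / dt))
    for _ in range(steps):
        um = np.concatenate([[1.0], u[:-1]])   # constant far-field states
        up = np.concatenate([u[1:], [0.0]])
        conv = (0.5 * u**2 - 0.5 * um**2) / dx          # upwind, since u >= 0
        diff = delta * (up - 2.0 * u + um) / dx**2
        u = u + dt * (diff - conv)
    exact = np.where(x < 0.5 * steps * dt, 1.0, 0.0)    # shock of speed 1/2
    return dx * np.sum(np.abs(u - exact))

e_small, e_big = viscous_error(0.02), viscous_error(0.08)
assert e_small < e_big < 0.5   # the error shrinks as delta decreases
```

The viscous profile is smeared over a layer of width \(\mathcal{O}\left(\delta\right)\) around the shock, so for an isolated shock the observed \(L^{1}\) error is in fact of order δ, comfortably within the general \(\mathcal{O}(\sqrt{\delta})\) bound of the example.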
⋄ Example 3.17 (Monotone schemes)
We will now show that monotone schemes converge in \(L^{1}\) to the solution of (3.1) at a rate of \(({{\Updelta t}})^{1/2}\). In particular, this applies to the Lax–Friedrichs scheme.
Let \(u_{{\Updelta t}}\) be defined by (3.27), where \(u^{n}_{j}\) is defined by (3.6), that is,
where \(F^{n}_{j+1/2}=F\left(u^{n}_{j-p},\dots,u^{n}_{j+p^{\prime}}\right)\), for a scheme that is assumed to be monotone; cf. Definition 3.5. In the following we use the notation
We find that
by a summation by parts. Recall that we define the numerical entropy flux by
Monotonicity of the scheme implies, cf. (3.33), that
For a nonnegative test function ϕ we obtain
We also have that
and
which implies that
Next, we subtract \(\phi({x_{j-1/2}},t_{n+1})\) from the integrand in each of the latter two integrals. Since \({\Updelta t}=\lambda{\Updelta x}\), the extra terms cancel, and we obtain
Let \(v=v(y,s)\) denote the unique entropy solution of (3.1), and let \(k=v(y,s)\). Then
Thus to estimate \(-\Lambda_{\varepsilon_{0},\varepsilon}(u,v)\) we must integrate the terms on the right-hand side of (3.67) in \((y,s)\). To this end,
Recalling that \(\lambda={\Updelta t}/{\Updelta x}\), we get
We also have that
Therefore,
Similarly,
and therefore
Collecting the estimates (3.68)–(3.70), we obtain
where the constant C depends only on f, F, and \(\left|u_{0}\right|_{BV}\). Regarding the term \(\nu(u,\varepsilon_{0})\), we have that \(t\mapsto u_{\Updelta t}(\,\cdot\,,t)\) is ‘‘almost’’ \(L^{1}\) Lipschitz continuous, so
The entropy solution v is of uniformly bounded variation in x for each t. Therefore, we conclude that
Choosing
we have that \(\left\|u_{\Updelta t}(\,\cdot\,,0)-v_{0}\right\|_{1}\leq{\Updelta x}\left|v_{0}\right|_{BV}\). Then we can choose \(\varepsilon=\sqrt{{\Updelta x}}\) and \(\varepsilon_{0}=\sqrt{{\Updelta t}}\) to find that
where C depends on T, \(\left|v_{0}\right|_{BV}\), f, and F. ⋄
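The convergence of the Lax–Friedrichs scheme is easy to observe in practice. A sketch (Burgers’ equation with Riemann data; the domain, resolutions, and CFL number are our own choices):

```python
import numpy as np

# Lax-Friedrichs for Burgers' equation u_t + (u^2/2)_x = 0 with Riemann data
# u_l = 1, u_r = 0; the entropy solution is a single shock of speed 1/2.
def lxf_error(N):
    dx = 4.0 / N
    lam = 0.8                                # dt/dx; CFL holds since |f'(u)| <= 1
    dt = lam * dx
    x = -2.0 + dx * (np.arange(N) + 0.5)
    u = np.where(x < 0.0, 1.0, 0.0)
    f = lambda v: 0.5 * v**2
    steps = int(round(1.0 / dt))
    for _ in range(steps):
        um = np.concatenate([[1.0], u[:-1]])   # constant states at the boundary
        up = np.concatenate([u[1:], [0.0]])
        u = 0.5 * (um + up) - 0.5 * lam * (f(up) - f(um))
    exact = np.where(x < 0.5 * steps * dt, 1.0, 0.0)
    return dx * np.sum(np.abs(u - exact))

# Halving dx decreases the L1 error, consistent with the O(sqrt(dx)) bound;
# for an isolated shock the observed rate is in fact closer to first order.
e1, e2 = lxf_error(200), lxf_error(400)
assert e2 < e1 < 0.5
```

The theoretical rate \(1/2\) is sharp in general (e.g., for initial data producing interacting waves or rarefactions), but a single shock, being compressive, is resolved better than the worst-case bound suggests.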
If one uses Kuznetsov’s lemma to estimate the error of a scheme, one must estimate the modulus of continuity \(\tilde{\nu}_{t}\left(u,\varepsilon_{0}\right)\) and the term \(\Lambda_{\varepsilon,\varepsilon_{0}}(u,v)\). In other words, one must obtain regularity estimates on the approximation u. Therefore, this approach gives a posteriori error estimates, and perhaps the proper use for this approach should be in adaptive methods, in which it would provide error control and govern mesh refinement. However, despite this weakness, Kuznetsov’s theory is still actively used.
3.4 A Priori Error Estimates
We shall now describe an application of a variation of Kuznetsov’s approach in which we obtain an error estimate for the method of vanishing viscosity without using the regularity properties of the viscous approximation. Of course, this application only motivates the approach, since the regularity of solutions of parabolic equations can be obtained by other means. Nevertheless, it is interesting in its own right, since many difference methods have (3.73) as their model equation. We first state the result.
Theorem 3.18
Let \(v(x,t)\) be a solution of (3.1) with initial value v 0, and let u solve the equation
in the classical sense, with \(\delta(u)> 0\). Then
where
and
This result is not surprising, and in some sense is weaker than the corresponding result found using Kuznetsov’s lemma. The new element here is that the proof does not rely on any smoothness properties of the function u, and is therefore also considerably more complicated than the proof using Kuznetsov’s lemma.
Proof
The proof consists in choosing new Λ’s, and using a special form of the test function \(\varphi\). Let \(\omega^{\infty}\) be defined as
We will consider a family of smooth functions ω such that \(\omega\to\omega^{\infty}\). To keep the notation simple we will not add another parameter to the functions ω, but rather write \(\omega\to\omega^{\infty}\) when we approach the limit. Let
with \(\omega_{\alpha}(x)=(1/\alpha)\,\omega(x/\alpha)\) as usual. In this notation,
In the following we will use the entropy pair
and except where explicitly stated, we always let \(u=u(y,s)\) and \(v=v(x,t)\). Let \(\eta_{\sigma}(u,k)\) and \(q_{\sigma}(u,k)\) be smooth approximations to η and q such that
For a test function \(\varphi\) define
(which is clearly zero because of (3.73)) and
Note that since u satisfies (3.73), \(\Lambda_{\varepsilon,\varepsilon_{0}}^{\sigma}=0\) for every v. We now split \(\Lambda_{\varepsilon,\varepsilon_{0}}^{\sigma}\) into two parts. Writing (cf. (2.15))
we may introduce
such that \(\Lambda_{\varepsilon,\varepsilon_{0}}^{\sigma}=\Lambda_{1}^{\sigma}+\Lambda_{2}^{\sigma}\). Note that if \(\delta(u)> 0\), we always have \(\Lambda_{1}^{\sigma}\geq 0\), and hence \(\Lambda_{2}^{\sigma}\leq 0\). Then we have that
To estimate \(\Lambda_{2}\), we integrate by parts:
where
Now define (the ‘‘dual of \(\Lambda_{2}\)’’)
Then we can write
We will need later that
Let
and
To continue estimating, we need the following proposition.
Proposition 3.19
Proof (of Proposition 3.19)
We start by estimating \(\Phi_{1}\). First note that
Thus
Here we have used that v is an exact solution. The estimate for \(\Phi_{2}\) is similar, yielding
To estimate \(\Phi_{3}\) we proceed in the same manner:
This gives
while by the same reasoning, the estimate for \(\Phi_{4}\) reads
The proof of Proposition 3.19 is complete. □
To proceed further, we shall need the following Gronwall-type lemma:
Lemma 3.20
Let θ be a nonnegative function that satisfies
for all \(\tau\in[0,T]\) and some constant C. Then
Proof (of Lemma 3.20)
If \(\tau\leq\varepsilon_{0}\), then for \(t\in[0,\tau]\), \(\omega^{\infty}_{\varepsilon_{0}}(t)=\omega^{\infty}_{\varepsilon_{0}}(\tau-t)=1/(2\varepsilon_{0})\). In this case (3.75) immediately simplifies to \(\theta(t)\leq C\).
For \(\tau> \varepsilon_{0}\), we can write (3.75) as
For \(t\in\left[0,\varepsilon_{0}\right]\) we have \(\theta(t)\leq C\), and this implies
This concludes the proof of the lemma. □
Now we can continue the estimate of \(e(T)\).
Proposition 3.21
We have that
Proof (of Proposition 3.21)
Starting with the inequality (3.74), using the estimate for Φ from Proposition 3.19, we have, after passing to the limit \(\omega\to\omega^{\infty}\), that
We apply Lemma 3.20 with
to complete the proof. □
To finish the proof of the theorem, it remains only to estimate
We will use the following inequality:
Since v is an entropy solution to (3.1), we have that
Since v is of bounded variation, it suffices to study the case that v is differentiable except on a countable number of curves \(x=x(t)\). We shall bound \(\Lambda_{2}^{*}\) in the case that we have one such curve; the generalization to more than one is straightforward. Integrating (3.77) by parts, we obtain
where Ψ is given by
As before, \([\![a]\!]\) denotes the jump in a, i.e., \([\![a]\!]=a(x(t)+,t)-a(x(t)-,t)\). Using (3.76), we obtain
Let D be given by
A simple calculation shows that
Consequently,
Inserting this in (3.79), and the result in (3.78), we find that
Summing up, we have now shown that
We can set \(\varepsilon_{0}\) to zero, and minimize over \(\varepsilon\), obtaining
The theorem is proved. □
The main idea behind this approach to getting a priori error estimates is to choose the ‘‘Kuznetsov-type’’ form \(\Lambda_{\varepsilon,\varepsilon_{0}}\) such that
for every function v, and then write \(\Lambda_{\varepsilon,\varepsilon_{0}}\) as the sum of a nonnegative and a nonpositive part. Given a numerical scheme, the task is then to prove a discrete analogue of the previous theorem.
3.5 Measure-Valued Solutions
You try so hard, but you don’t understand \(\dots\) — Bob Dylan, Ballad of a Thin Man (1965)
Monotone methods are at most first-order accurate. Consequently, one must work harder to show that higher-order methods converge to the entropy solution. While this is possible in one space dimension, i.e., in the above setting, it is much more difficult in several space dimensions. One useful tool to aid the analysis of higher-order methods is the concept of measure-valued solutions. This is a rather complicated concept, which requires a solid background in analysis beyond this book. Therefore, the presentation in this section is brief, and is intended to give the reader a first flavor, and an idea of what this method can accomplish.
3.5.1 The Young Measure
Consider a sequence \(\left\{u_{n}\right\}_{n\in\mathbb{N}}\) that is uniformly bounded in \(L^{\infty}(\mathbb{R}\times[0,\infty))\). This is typically the result of a numerical method, where one has \(L^{\infty}\) bounds, but no uniform bounds on the total variation. Passing to a subsequence, we can still infer that the weak-star limit
exists, which means that for all \(\varphi\in L^{1}(\mathbb{R}\times[0,\infty))\),
with \(\Omega=\mathbb{R}\times[0,\infty)\). In order to show that the limit u is a weak solution to the conservation law, we must study
The first term in this equation has a limit \(\iint u\varphi_{t}\,dx\,dt\), but the second term is more complicated, as the next example shows.
⋄ Example 3.22
Let \(u_{n}=\sin(nx)\) and \(f(u)=u^{2}\), and \(\varphi\) a smooth function in \(L^{1}(\mathbb{R})\). Then
On the other hand, \(f(u_{n})=\sin^{2}(nx)=(1-\cos(2nx))/2\), and hence a similar estimate shows that
Thus we conclude that
⋄
The Young measure is one method for studying the weak limits of nonlinear functions of a weak-star convergent sequence.
In order to define it, we first define the function
It is easily verified that for every differentiable function f,
Furthermore, let \(g(\lambda)\) be a function such that
Define \(m(\lambda)\) by
Then \(\lim_{\lambda\to-\infty}m(\lambda)=0\), and
Furthermore, by (3.82), we have that m is nondecreasing in the interval \((-\infty,u)\) and nonincreasing in the interval \((u,\infty)\). Hence \(m(\lambda)\) is nonnegative. For every twice differentiable convex function \(S(\lambda)\) we have
Thus, for a strictly convex function S, the function \(\chi(\,\cdot\,,u)\) is the unique minimizer of the problem: Find \(g\in L^{1}(\mathbb{R})\) such that (3.82) holds and
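The key identity can be checked numerically. The sketch below uses the standard definition \(\chi(\lambda,u)=1\) for \(0<\lambda<u\), \(\chi(\lambda,u)=-1\) for \(u<\lambda<0\), and \(\chi=0\) otherwise (our reading of the function introduced above), and an arbitrary smooth flux of our own choosing:

```python
import numpy as np

# chi(lambda, u) = 1 if 0 < lambda < u, -1 if u < lambda < 0, 0 otherwise.
def chi(lam, u):
    return np.where((lam > 0) & (lam < u), 1.0,
                    np.where((lam < 0) & (lam > u), -1.0, 0.0))

# Check the identity f(u) - f(0) = integral of f'(lambda) chi(lambda, u) dlambda,
# which underlies (3.81), for a few values of u.
lam = np.linspace(-3.0, 3.0, 60001)
dlam = lam[1] - lam[0]
f = lambda v: v**3 + np.sin(v)              # any differentiable f
fp = lambda v: 3 * v**2 + np.cos(v)

for u in (-2.0, 0.5, 1.7):
    integral = np.sum(fp(lam) * chi(lam, u)) * dlam
    assert abs(integral - (f(u) - f(0.0))) < 5e-3
```

The small tolerance only accounts for the Riemann-sum discretization at the jump points \(\lambda=0\) and \(\lambda=u\); the identity itself is exact.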
If \(\left\{u_{n}\right\}_{n\in\mathbb{N}}\subset L^{\infty}(\Omega)\) is uniformly bounded, then \(\left\{\chi(\,\cdot\,,u_{n})\right\}_{n\in\mathbb{N}}\subset L^{\infty}(\mathbb{R}\times\Omega)\) is also uniformly bounded. Thus it has (modulo subsequences) a weak-star limit, which we call \(f(\lambda,x,t)\). The next lemma gives some properties of this limit.
Lemma 3.23
Let \(f(\lambda,x,t)\) denote the weak-star limit of \(\chi(\lambda,u_{n})\). Then f is in \(L^{\infty}(\mathbb{R}\times\Omega)\) and satisfies
for almost all \((x,t)\). Furthermore,
where \(\delta(\lambda)\) is the Dirac measure, and \(\nu_{(x,t)}(\lambda)\) is a nonnegative measure in \((\lambda,x,t)\) such that
for almost all \((x,t)\).
Remark 3.24
The derivative in (3.86) is to be interpreted in the distributional sense, i.e., (3.86) means that
for all \(\varphi\in C^{\infty}_{0}(\mathbb{R})\).
Proof
The first equality, (3.84) follows from the observation
To prove (3.85) we choose a test function of the form \(\varphi(x,t)\psi(\lambda)\), where the ψ has support in \((0,\infty)\) and \(\varphi\geq 0\). By definition of the weak-star limit,
Thus \(f\geq 0\) for \(\lambda\geq 0\), and one similarly shows that \(f\leq 0\) if \(\lambda\leq 0\).
To prove (3.86), by Remark 3.24 we have that for all test functions \(\varphi(\lambda,x,t)\),
where \(\delta_{u_{n}}\) is the Dirac mass centered at u n . Thus we define
so that
The measure \(\nu_{n,(x,t)}\) is a probability measure in the first variable, in the sense that it is nonnegative and has unit total mass. Thus we have that there exists a nonnegative measure \(\nu_{(x,t)}\) such that
for all continuous functions ψ. In order to conclude, we must prove (3.87). Choose a test function of the form \(\psi(\lambda)\varphi(x,t)\), where ψ has compact support and \(\psi\equiv 1\) for \(\left|\lambda\right|\leq\left\|u_{n}\right\|_{\infty}\). Then
Thus (3.87) holds. □
If now \(u_{n}\overset{*}{\rightharpoonup}u\) in \(L^{\infty}\), then we have
Similarly, for every function \(S(u)\) with \(S^{\prime}\) bounded and \(S(0)=0\),
Therefore, if \(\bar{S}(x,t)\) denotes the weak-star limit of \(S(u_{n})\), then
The limit measure \(\nu_{(x,t)}\) is called the Young measure associated with the sequence \(\left\{u_{n}\right\}\). If S is strictly convex, then using (3.83), we obtain
with equality if and only if \(f(\lambda,x,t)=\chi(\lambda,u(x,t))\). Hence \(u_{n}\to u\) strongly if and only if \(\nu_{(x,t)}(\lambda)=\delta_{u}(\lambda)\).
We have proved the following theorem:
Theorem 3.25 (Young’s theorem)
Let \(\left\{u_{n}\right\}\) be a sequence of functions from \(\Omega=\mathbb{R}\times[0,\infty)\) with values in \([-K,K]\). Then there exists a family of probability measures \(\left\{\nu_{(x,t)}(\lambda)\right\}_{(x,t)\in\Omega}\), depending weak-star measurably on \((x,t)\), such that for every continuously differentiable function \(S\colon[-K,K]\to\mathbb{R}\) with \(S^{\prime}\) bounded and \(S(0)=0\), we have
where
and where the exceptional set possibly depends on S. Furthermore,
We also have that \(u_{n}\to u\) strongly in \(L^{1}_{\mathrm{loc}}(\Omega)\) if and only if \(\nu_{(x,t)}(\lambda)=\delta_{u(x,t)}(\lambda)\).
⋄ Example 3.26
Let us compute the Young measure associated with the sequence \(\left\{\sin(nx)\right\}\). In this case the weak limit of \(\chi(\lambda,\sin(nx))\) will be independent of x. If \(\lambda> 0\), then
and similarly, if \(\lambda<0\), then
We have \(\chi(\lambda,\sin(nx))\overset{*}{\rightharpoonup}f(\lambda)\), where
This can be rewritten
Thus from (3.86),
and we see that
⋄
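The result of the example can be verified empirically: the Young measure of \(\left\{\sin(nx)\right\}\) is the arcsine distribution \(d\nu=d\lambda/(\pi\sqrt{1-\lambda^{2}})\) on \([-1,1]\), so the fraction of x-values where \(\sin(nx)\leq c\) should approach its cumulative distribution function. A sketch (sample sizes are our own choices):

```python
import numpy as np

# Empirical check of the Young measure of u_n = sin(nx): for large n the
# values of sin(nx) are distributed with the arcsine density on [-1, 1].
n = 1000
x = np.linspace(0.0, 2 * np.pi, 200001)
u = np.sin(n * x)

for c in (-0.5, 0.0, 0.5):
    empirical = np.mean(u <= c)             # fraction of x with sin(nx) <= c
    cdf = 0.5 + np.arcsin(c) / np.pi        # integral of 1/(pi*sqrt(1-s^2))
    assert abs(empirical - cdf) < 0.01
```

Note that no subsequence of \(\sin(nx)\) converges strongly, consistent with the fact that this Young measure is not a Dirac mass.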
Theorem 3.25 is indeed the main reason why measure-valued solutions are easier to obtain than weak solutions, since with every bounded sequence of approximations to a solution of a conservation law we can associate (at least) one probability measure \(\nu_{(x,t)}\) representing the weak-star limits of the sequence. Thus we avoid having to show that the method is TVD and to invoke Helly’s theorem in order to work with the limit of the sequence. The measures associated with weakly convergent sequences are frequently called Young measures.
Intuitively, when we have no knowledge of possible oscillations in \(u_{\varepsilon}\) as \(\varepsilon\to 0\), the Young measure \(\nu_{(x,t)}(E)\) can be thought of as the probability that the ‘‘limit’’ at the point \((x,t)\) takes a value in the set E. To be a bit more precise, define
Then for small r, \(\nu^{\varepsilon,r}_{(x,t)}(E)\) is the probability that \(u^{\varepsilon}\) takes values in E near x. It can be shown that
see [10].
3.5.2 Measure-Valued Solutions
Now we can define measure-valued solutions. We use the notation
A probability measure \(\nu_{(x,t)}\) is a measure-valued solution to (3.1) if
in the distributional sense, where \(\mathop{\mathrm{Id}}\) is the identity map, \(\mathop{\mathrm{Id}}(\lambda)=\lambda\). As with weak solutions, we call a measure-valued solution compatible with the entropy pair \((\eta,q)\) (recall that \(q^{\prime}=\eta^{\prime}f^{\prime}\)) if
in the distributional sense. If (3.89) holds for all convex η, we call \(\nu_{(x,t)}\) a measure-valued entropy solution. Clearly, weak entropy solutions are also measure-valued solutions, as we can see by setting
for a weak entropy solution u. But measure-valued solutions are more general than weak solutions, since for every two measure-valued solutions \(\nu_{(x,t)}\) and \(\mu_{(x,t)}\) and \(\theta\in[0,1]\), the convex combination
is also a measure-valued solution. It is not clear, however, what initial data the measure-valued solution defined by (3.90) satisfies. We would like our measure-valued solutions initially to be Dirac masses, i.e., \(\nu_{(x,0)}=\delta_{u_{0}(x)}\). Concretely, we shall assume the following:
for every A. For every Young measure \(\nu_{(x,t)}\) we have the following lemma.
Lemma 3.27
Let \(\nu_{(x,t)}\) be a Young measure with \(\mathop{\mathrm{supp}}\nu_{(x,t)}\subset[-K,K]\), and let \(\omega_{\varepsilon}\) be a standard mollifier in x and t. Then:
(i) There exists a Young measure \(\nu_{(x,t)}^{\varepsilon}\) defined by
$$\displaystyle\begin{aligned}\displaystyle\left\langle\nu_{(x,t)}^{\varepsilon},g\right\rangle&\displaystyle=\left\langle\nu_{(x,t)},g\right\rangle*\omega_{\varepsilon}\\ \displaystyle&\displaystyle=\iint\omega_{\varepsilon}(x-y)\omega_{\varepsilon}(t-s)\left\langle\nu_{(y,s)},g\right\rangle\,dy\,ds.\end{aligned}$$(3.92)
(ii) For all \((x,t)\in\mathbb{R}\times[0,T]\) there exist bounded measures \(\partial_{x}\nu_{(x,t)}^{\varepsilon}\) and \(\partial_{t}\nu_{(x,t)}^{\varepsilon}\), defined by
$$\displaystyle\begin{aligned}\displaystyle\left\langle\partial_{t}\nu_{(x,t)}^{\varepsilon},g\right\rangle&\displaystyle=\partial_{t}\left\langle\nu_{(x,t)}^{\varepsilon},g\right\rangle,\\ \displaystyle\left\langle\partial_{x}\nu_{(x,t)}^{\varepsilon},g\right\rangle&\displaystyle=\partial_{x}\left\langle\nu_{(x,t)}^{\varepsilon},g\right\rangle.\end{aligned}$$(3.93)
Proof
Clearly, the right-hand side of (3.92) is a bounded linear functional on \({\mathcal{C}}_{0}(\mathbb{R})\), the set of compactly supported continuous functions, and hence the Riesz representation theorem guarantees the existence of \(\nu_{(x,t)}^{\varepsilon}\). To show that \(\|\nu_{(x,t)}^{\varepsilon}\|_{\mathcal{M}(\mathbb{R})}=1\), where \(\mathcal{M}(\mathbb{R})\) is the set of all Radon measures, we let \(\left\{\psi_{n}\right\}\) be a sequence of test functions such that
Then for all \(1> \kappa> 0\) we can find an N such that
for \(n\geq N\). Thus, for such n,
and therefore \(\|\nu_{(x,t)}^{\varepsilon}\|_{\mathcal{M}(\mathbb{R})}\geq 1\). The opposite inequality is immediate, since
for all test functions ψ. Therefore, \(\nu_{(x,t)}^{\varepsilon}\) is a probability measure. Similarly, the existence of \(\partial_{x}\nu_{(x,t)}^{\varepsilon}\) and \(\partial_{t}\nu_{(x,t)}^{\varepsilon}\) follows by the Riesz representation theorem. Since \(\nu_{(x,t)}\) is bounded, the boundedness of \(\partial_{x}\nu_{(x,t)}^{\varepsilon}\) and \(\partial_{t}\nu_{(x,t)}^{\varepsilon}\) follows for each fixed \(\varepsilon> 0\). □
Now that we have established the existence of the ‘‘smooth approximation’’ to a Young measure, we can use this to prove the following lemma.
Lemma 3.28
Assume that f is a Lipschitz continuous function and that \(\nu_{(x,t)}(\lambda)\) and \(\sigma_{(x,t)}(\mu)\) are measure-valued solutions with support in \([-K,K]\). Then
in the distributional sense, where
and \(\nu_{(x,t)}\otimes\sigma_{(x,t)}\) denotes the product measure \(d\nu_{(x,t)}d\sigma_{(x,t)}\) on \(\mathbb{R}\times\mathbb{R}\).
Proof
If \(\nu_{(x,t)}^{\varepsilon}\) and \(\sigma_{(x,t)}^{\varepsilon}\) are defined by (3.92), and \(\varphi\in C^{\infty}_{0}(\mathbb{R}\times[0,T])\), then we have that
and similarly,
and analogous identities also hold for \(\sigma_{(x,t)}\). Therefore,
Next, we observe that for every continuous function g,
and an analogous equality holds for
Therefore, we find that
for every nonnegative test function \(\varphi\). Now we would like to conclude the proof by sending \(\varepsilon_{1}\) and \(\varepsilon_{2}\) to zero. Consider the second term:
Since
for almost all \((x,t)\) as \(\varepsilon_{1}\to 0\), we can use the Lebesgue dominated convergence theorem to conclude that
We can apply this argument once more for \(\varepsilon_{2}\), obtaining
Similarly, we obtain
This concludes the proof of the lemma. □
Let \(\left\{u_{\varepsilon}\right\}\) and \(\left\{v_{\varepsilon}\right\}\) be the sequences associated with \(\nu_{(x,t)}\) and \(\sigma_{(x,t)}\), respectively, and assume that for \(t\leq T\), the support of \(u_{\varepsilon}(\,\cdot\,,t)\) and \(v_{\varepsilon}(\,\cdot\,,t)\) is contained in a finite interval I. Then both \(u_{\varepsilon}(\,\cdot\,,t)\) and \(v_{\varepsilon}(\,\cdot\,,t)\) are in \(L^{1}(\mathbb{R})\) uniformly in \(\varepsilon\). This means that both
are in \(L^{1}(\mathbb{R})\) for almost all t. Using this observation and the preceding lemma, Lemma 3.28, we can continue. Define a smooth approximation to the characteristic function of \([t_{1},t_{2}]\) by
where \(t_{2}> t_{1}> 0\) and \(\omega_{\varepsilon}\) is the usual mollifier. Also define
and set \(\psi_{\varepsilon,n}=\psi_{n}*\omega_{\varepsilon}(x)\). Hence
is an admissible test function. Furthermore, \(\left|\psi^{\prime}_{\varepsilon,n}\right|\leq 1/n\), and \(\phi_{\varepsilon}(t)\) tends to the characteristic function of the interval \([t_{1},t_{2}]\) as \(\varepsilon\to 0\). Therefore,
Set
Using this definition, we find that
The right-hand side of this is bounded by
as \(n\to\infty\). Since \(\nu_{(x,t)}\) and \(\sigma_{(x,t)}\) are probability measures, for almost all t, the set
has zero Lebesgue measure. Therefore, for almost all t,
Integrating (3.99) with respect to \(t_{1}\) from 0 to T, then dividing by T and sending T to 0, using (3.91), and finally sending \(n\to\infty\), we find that
where the Lebesgue measure of the (exceptional) set E is zero. Suppose now that for \((x,t)\notin E\) there are a \(\bar{\lambda}\) in the support of \(\nu_{(x,t)}\) and a \(\bar{\mu}\) in the support of \(\sigma_{(x,t)}\) with \(\bar{\lambda}\neq\bar{\mu}\). Then we can find positive functions g and h such that
and
Furthermore,
Thus
This contradiction shows that both \(\nu_{(x,t)}\) and \(\sigma_{(x,t)}\) are unit point measures with support at a common point. Precisely, we have proved the following theorem:
Theorem 3.29
Let \(u_{0}\in L^{1}(\mathbb{R})\cap L^{\infty}(\mathbb{R})\).
-
(i)
Suppose that \(\nu_{(x,t)}\) is a measure-valued entropy solution to the conservation law
$$\displaystyle u_{t}+f(u)_{x}=0$$such that \(\nu_{(x,t)}\) satisfies the initial condition (3.91), and that \(\langle\nu_{(x,t)},\left|\lambda\right|\rangle\) is in \(L^{\infty}([0,T];L^{1}(\mathbb{R}))\). Then there exists a function \(u\in L^{\infty}([0,T];L^{1}(\mathbb{R}))\cap L^{\infty}(\mathbb{R}\times[0,T])\) such that
$$\displaystyle\nu_{(x,t)}=\delta_{u(x,t)},\quad\text{for almost all $(x,t)$.}$$ -
(ii)
Assume that \(\sigma_{(x,t)}\) is (another) measure-valued entropy solution satisfying the same regularity assumptions as \(\nu_{(x,t)}\). Then
$$\displaystyle\nu_{(x,t)}=\sigma_{(x,t)}=\delta_{u(x,t)},\quad\text{for almost all $(x,t)$.}$$
In order to avoid checking (3.91) directly, we can use the following lemma.
Lemma 3.30
Let \(\nu_{(x,t)}\) be a probability measure, and assume that for all test functions \(\varphi(x)\) we have
and that for all nonnegative \(\varphi(x)\) and for at least one strictly convex continuous function η,
Then (3.91) holds.
Proof
We shall prove
from which the desired result follows, using (3.101) and the identity
where \(a^{+}=\max\{a,0\}\) denotes the positive part of a. To get started, we write \(\eta^{\prime}_{+}\) for the right-hand derivative of η. It exists by virtue of the convexity of η; moreover,
for all λ. Whenever \(\varepsilon> 0\), write
Since η is strictly convex, \(\zeta(y,\varepsilon)> 0\), and this quantity is an increasing function of \(\varepsilon\). In particular, if \(\lambda> y+\varepsilon\), then \(\zeta(y,\lambda-y)> \zeta(y,\varepsilon)\), or
In every case, then,
On the other hand, whenever \(y<\lambda<y+\varepsilon\), then \(\zeta(y,\lambda-y)<\zeta(y,\varepsilon)\), so
Let us now assume that \(\varphi\geq 0\) is such that
We use (3.104) on the left-hand side and (3.105) on the right-hand side of (3.102), and get
Here, thanks to (3.101) and the fact that \(\nu_{(x,t)}\) is a probability measure, all the terms not involving \(\zeta(y,\varepsilon)\) cancel, and then we can divide by \(\zeta(y,\varepsilon)\neq 0\) to arrive at
Now, remembering (3.106), we see that whenever \(\varphi(x)\neq 0\) we have \((\lambda-y)^{+}\leq(\lambda-u_{0}(x))^{+}+\varepsilon\), so the above implies
whenever (3.106) holds.
It remains only to divide up the common support \([-M,M]\) of all the measures \(\nu_{(x,t)}\), writing \(y_{i}=-M+i\varepsilon\) for \(i=0,1,\dots,N-1\), where \(\varepsilon=2M/N\). Let \(\varphi_{i}\) be the characteristic function of \([-A,A]\cap u_{0}^{-1}([y_{i},y_{i}+\varepsilon))\), and add together the above inequalities, one for each i, to arrive at
Since \(\varepsilon\) can be made arbitrarily small, (3.103) follows, and the proof is complete. □
Remark 3.31
We cannot conclude that (see Footnote 3)
from the present assumptions. Here is an example to show this.
Let \(\nu_{(x,t)}=\mu_{\gamma(x,t)}\), where \(\mu_{\beta}=\frac{1}{2}(\delta_{-\beta}+\delta_{\beta})\) and γ is a continuous, nonnegative function with \(\gamma(x,0)=0\). Let \(u_{0}(x)=0\) and \(\eta(y)=y^{2}\).
Then (3.101) holds trivially, and (3.102) becomes
which is also true due to the stated assumptions on γ.
The desired conclusion (3.107), however, is now
But the simple choice
yields
We shall now describe a framework that allows one to prove convergence of a sequence of approximations without proving that the method is TV stable. Unfortunately, applying this framework to concrete examples, while not very difficult, involves rather lengthy calculations, and will be omitted here. Readers are encouraged to try their hand at it.
We give one application of these concepts. The setting is as follows. Let \(u^{n}\) be computed from a conservative and consistent scheme, and assume uniform boundedness of \(u^{n}\). Young’s theorem states that there exists a family of probability measures \(\nu_{(x,t)}\) such that \(g(u^{n})\overset{*}{\rightharpoonup}\langle\nu_{(x,t)},g\rangle\) for Lipschitz continuous functions g. We assume that the CFL condition, \(\lambda\sup_{u}\left|f^{\prime}(u)\right|\leq 1\), is satisfied. The next theorem states conditions, strictly weaker than TVD, for which we prove that the limit measure \(\nu_{(x,t)}\) is a measure-valued solution of the scalar conservation law.
Theorem 3.32
Let \(u_{0}\in L^{1}(\mathbb{R})\cap L^{\infty}(\mathbb{R})\). Assume that the sequence \(\left\{u^{n}\right\}\) is the result of a conservative, consistent method, and define \(u_{{\Updelta t}}\) as in (3.27). Assume that \(u_{{\Updelta t}}\) is uniformly bounded in \(L^{\infty}(\mathbb{R}\times[0,T])\), \(T=n{\Updelta t}\). Let \({\Updelta t}_{n}\to 0\) be a sequence such that \(u_{{\Updelta t}_{n}}\overset{*}{\rightharpoonup}u\), and let \(\nu_{(x,t)}\) be the Young measure associated with \(u_{{\Updelta t}_{n}}\), and assume that \(u^{n}_{j}\) satisfies the estimate
for some \(\beta\in[0,1)\) and some constant \(C(T)\). Then \(\nu_{(x,t)}\) is a measure-valued solution to (3.1).
Furthermore, let \(\left(\eta,q\right)\) be a strictly convex entropy pair, and let Q be a numerical entropy flux consistent with q. Write \(\eta_{j}^{n}=\eta(u^{n}_{j})\) and \(Q_{j+1/2}^{n}=Q(u^{n})_{j+1/2}\). Assume that
for all n and j, where \(R^{n}_{j}\) satisfies,
for all nonnegative \(\varphi\in C^{1}_{0}\), where \(\varphi^{n}_{j}=\varphi(j{\Updelta x},n{\Updelta t})\). Then \(\nu_{(x,t)}\) is a measure-valued solution compatible with \(\left(\eta,q\right)\), and the initial data are assumed in the sense of (3.101), (3.102). If (3.109) and (3.110) hold for all entropy pairs \((\eta,q)\), then \(\nu_{(x,t)}\) is a measure-valued entropy solution to (3.1).
Remark 3.33
For \(\beta=0\), (3.108) is the standard TV estimate, while for \(\beta> 0\), (3.108) is genuinely weaker than a TV estimate.
Proof
We start by proving the first statement in the theorem, assuming (3.108). As before, we obtain (3.28) by rearranging. For simplicity, we now write \(F^{n}_{j+1/2}=F(u^{n})_{j+1/2}\), \(f^{n}_{j}=f(u^{n}_{j})\), and observe that \(F^{n}_{j+1/2}=f^{n}_{j}+\left(F^{n}_{j+1/2}-f^{n}_{j}\right)\), getting
Here we use the notation
and
The first term on the left-hand side in (3.111) reads
The third term on the right-hand side of (3.112) clearly tends to zero as \({\Updelta t}\) goes to zero. Furthermore, by definition of the Young measure \(\nu_{(x,t)}\), the second term tends to zero as well. Thus the left-hand side of (3.112) approaches \(\iint\langle\nu_{(x,t)},\mathop{\mathrm{Id}}\rangle\varphi_{t}\,dx\,dt\).
One can use a similar argument for the second term on the left-hand side of (3.111) to show that the (whole) left-hand side of (3.111) tends to
as \({\Updelta t}\to 0\). We now study the right-hand side of (3.111). Mimicking the proof of the Lax–Wendroff theorem, we have
Therefore,
using the assumption (3.108). Thus the right-hand side of (3.114), and hence also of (3.111), tends to zero. Since the left-hand side of (3.111) tends to (3.113), we conclude that \(\nu_{(x,t)}\) is a measure-valued solution. Using similar calculations, and (3.110), one shows that \(\nu_{(x,t)}\) is also an entropy measure-valued solution.
It remains to show consistency with the initial condition, i.e., (3.101) and (3.102). Let \(\varphi(x)\) be a test function, and write \(\varphi(j{\Updelta x})=\varphi_{j}\). From the definition of \(u^{n+1}_{j}\), after a summation by parts, we have that
since \(u^{n}_{j}\) is bounded. Recalling that \(\varphi=\varphi(x)\), we get
Let \(t_{1}=n_{1}{\Updelta t}\) and \(t_{2}=n_{2}{\Updelta t}\). Then (3.115) yields
which implies that the Young measure \(\nu_{(x,t)}\) satisfies
We let \(t_{1}\to 0\) and set \(t_{2}=\tau\) in (3.116), obtaining
which proves (3.101). Now consider (3.102). There exists a strictly convex entropy η for which (3.109) holds. Let \(\varphi(x)\) be a nonnegative test function. Using (3.109), and proceeding as before, we obtain
Using this estimate and the assumption on \(R^{l}_{j}\), (3.110), we can use the same arguments as in proving (3.117) to prove (3.102). The proof of the theorem is complete. □
A trivial application of this approach is found by considering monotone schemes. Here we have seen that (3.108) holds for \(\beta=0\), and (3.109) for \(R^{n}_{j}=0\). The theorem then gives the convergence of these schemes without using Helly’s theorem. However, in this case the application does not give the existence of a solution, since we must have this in order to use DiPerna’s theorem. The main usefulness of the method is for schemes in several space dimensions, where TV bounds are more difficult to obtain.
3.6 Notes
The Lax–Friedrichs scheme was introduced by Lax in 1954; see [124]. Godunov discussed what has later become the Godunov scheme in 1959 as a method to study gas dynamics; see [80]. The CFL condition was introduced in the seminal paper [50]; see also [57].
The Lax–Wendroff theorem, Theorem 3.4, was first proved in [128]. Theorem 3.8 was proved by Oleı̆nik in her fundamental paper [145]; see also [169]. Several of the key results concerning monotone schemes are due to Crandall and Majda [53], [52]. Theorem 3.10 is due to Harten, Hyman, and Lax; see [84]. Harten’s lemma, Lemma 3.12, can be found in [83]. See also [148].
The error analysis is based on the fundamental analysis by Kuznetsov [119], where one can also find a short discussion of the examples we have analyzed, namely the smoothing method, the method of vanishing viscosity, as well as monotone schemes. Our presentation of the a priori estimates follows the approach due to Cockburn and Gremaud; see [44] and [45], where applications to numerical methods are also given.
The concept of measure-valued solutions is due to DiPerna, and the key results can be found in [62], while Lemma 3.30 is to be found in [61]. Our presentation of the Young measure follows the exposition of Perthame [150]. For further information regarding the functional-analytic framework, see, e.g., [34] and references therein. The proof of Lemma 3.30 and Remark 3.31 are due to H. Hanche-Olsen. Our presentation of the uniqueness of measure-valued solutions, Theorem 3.29, is taken mainly from Szepessy [173]. Theorem 3.32 is due to Coquel and LeFloch [48]; see also [49], where several extensions are discussed. For numerical schemes that satisfy the criteria in Theorem 3.32, see [49] and [65].
3.7 Exercises
-
3.1
Consider the difference scheme (3.4). Show that if \(u^{0}\) is given by
$$\displaystyle u^{0}_{j}=\begin{cases}0&\text{for $j<0$,}\\ 1&\text{for $j\geq 0$,}\end{cases}$$then \(u^{n}=u^{0}\) for all n, thus indicating the solution \(u(x,t)=\chi_{[0,\infty)}(x)\). Determine the weak entropy solution.
-
3.2
Show that the Lax–Wendroff and the MacCormack methods are of second order.
-
3.3
The Engquist–Osher (or generalized upwind) method, see [63], is a conservative difference scheme with a numerical flux defined as follows:
$$\begin{aligned}\displaystyle F_{j+1/2}(u)&\displaystyle=f^{\mathrm{EO}}\left(u_{j},u_{j+1}\right),\quad\text{where}\\ \displaystyle f^{\mathrm{EO}}(u,v)&\displaystyle=\int_{0}^{u}\max\{f^{\prime}(s),0\}\,ds+\int_{0}^{v}\min\{f^{\prime}(s),0\}\,ds+f(0).\end{aligned}$$-
(a)
Show that this method is consistent and monotone.
-
(b)
Find the order of the scheme.
-
(c)
Show that the Engquist–Osher flux \(f^{\mathrm{EO}}\) can be written
$$\displaystyle f^{\mathrm{EO}}(u,v)=\frac{1}{2}\left(f(u)+f(v)-\int_{u}^{v}\left|f^{\prime}(s)\right|\,ds\right).$$ -
(d)
If \(f(u)=u^{2}/2\), show that the numerical flux can be written
$$\displaystyle f^{\mathrm{EO}}(u,v)=\frac{1}{2}\left(\max\{u,0\}^{2}+\min\{v,0\}^{2}\right).$$Generalize this simple expression to the case that \(f^{\prime\prime}(u)\neq 0\) and \(\lim_{\left|u\right|\to\infty}\left|f(u)\right|=\infty\).
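The closed forms in (c) and (d) are easy to check numerically. The following Python sketch (our own construction; the function names are ours, and the book itself contains no code) compares the integral definition of \(f^{\mathrm{EO}}\), evaluated by quadrature, with the closed form from part (d) for the Burgers flux \(f(u)=u^{2}/2\):

```python
import numpy as np

def f(u):                       # Burgers flux
    return 0.5 * u**2

def fprime(u):
    return u

def eo_flux_integral(u, v, n=4001):
    """f^EO(u, v) evaluated directly from the integral definition."""
    s_u = np.linspace(0.0, u, n)
    s_v = np.linspace(0.0, v, n)
    pos = np.trapz(np.maximum(fprime(s_u), 0.0), s_u)
    neg = np.trapz(np.minimum(fprime(s_v), 0.0), s_v)
    return pos + neg + f(0.0)

def eo_flux_closed(u, v):
    """Closed form from part (d): (max(u,0)^2 + min(v,0)^2) / 2."""
    return 0.5 * (max(u, 0.0) ** 2 + min(v, 0.0) ** 2)

for u, v in [(1.0, -2.0), (-1.0, 0.5), (0.3, 0.7), (-0.4, -0.9)]:
    assert abs(eo_flux_integral(u, v) - eo_flux_closed(u, v)) < 1e-10
```

The agreement is exact up to rounding because for this flux the integrands \(\max\{f^{\prime}(s),0\}\) and \(\min\{f^{\prime}(s),0\}\) are piecewise linear with their only kink at \(s=0\), an endpoint of the integration range, so trapezoidal quadrature introduces no error.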
-
3.4
Why does the method
$$\displaystyle u^{n+1}_{j}=u^{n}_{j}-\frac{{\Updelta t}}{2{\Updelta x}}\left(f\left(u^{n}_{j+1}\right)-f\left(u^{n}_{j-1}\right)\right)$$not give a viable difference scheme?
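One way to see the answer is a von Neumann calculation: for the linear flux \(f(u)=u\), a Fourier mode is multiplied each step by \(g=1-i\lambda\sin(k{\Updelta x})\), and \(|g|=\sqrt{1+\lambda^{2}\sin^{2}(k{\Updelta x})}> 1\) for every mode, so the scheme amplifies all oscillations regardless of \(\lambda\). The Python sketch below (our illustration, with parameters chosen for demonstration) exhibits the blow-up on smooth periodic data:

```python
import numpy as np

# Central (FTCS) scheme for u_t + u_x = 0 with periodic data.
# Every Fourier mode grows by |g| = sqrt(1 + lam^2 sin^2(k dx)) > 1 per step.
N = 50
lam = 0.8                      # dt/dx, well within the usual CFL bound
x = np.arange(N) / N
u = np.sin(2 * np.pi * x)      # smooth initial data, max |u| = 1

for _ in range(2000):
    u = u - 0.5 * lam * (np.roll(u, -1) - np.roll(u, 1))

assert np.max(np.abs(u)) > 100.0   # the sup-norm has blown up
```

Note that the instability has nothing to do with the CFL number: the scheme is unstable for every \(\lambda> 0\), only the growth rate changes.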
-
3.5
In the derivation of the Godunov scheme it is assumed that \({\Updelta t}\max_{u}\left|f^{\prime}(u)\right|\leq\frac{1}{2}{\Updelta x}\), yet it is stated that the method is well defined if the CFL condition \({\Updelta t}\max_{u}\left|f^{\prime}(u)\right|\leq{\Updelta x}\) is satisfied; see (3.9). Please explain.
-
3.6
Show that (3.24) is the model equation for the Lax–Friedrichs scheme.
-
3.7
Show that the Lax–Friedrichs scheme is monotone also in the case that the flux function is assumed only to be Lipschitz continuous.
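A randomized spot check can accompany the proof. Writing the Lax–Friedrichs update in its standard form \(u^{n+1}_{j}=\frac{1}{2}\left(u^{n}_{j-1}+u^{n}_{j+1}\right)-\frac{\lambda}{2}\left(f(u^{n}_{j+1})-f(u^{n}_{j-1})\right)\), monotonicity needs only the CFL condition with the Lipschitz constant of f, not differentiability. The Python sketch below (our construction) tests this with \(f(u)=|u|\), which is Lipschitz but not \(C^{1}\):

```python
import numpy as np

def f(u):                      # Lipschitz flux, not differentiable at u = 0
    return np.abs(u)

lam = 0.9                      # dt/dx; CFL: lam * Lip(f) = 0.9 <= 1

def H(a, c):
    """Lax-Friedrichs update u_j^{n+1} = H(u_{j-1}^n, u_{j+1}^n)."""
    return 0.5 * (a + c) - 0.5 * lam * (f(c) - f(a))

rng = np.random.default_rng(0)
for _ in range(1000):
    a, c = rng.uniform(-2, 2, size=2)
    da, dc = rng.uniform(0, 1, size=2)
    # H must be nondecreasing in each argument separately
    assert H(a + da, c) >= H(a, c) - 1e-12
    assert H(a, c + dc) >= H(a, c) - 1e-12
```

The check mirrors the proof: \(H(a_{2},c)-H(a_{1},c)=\frac{1}{2}(a_{2}-a_{1})+\frac{\lambda}{2}\left(f(a_{2})-f(a_{1})\right)\geq\frac{1}{2}(1-\lambda\,\mathrm{Lip}(f))(a_{2}-a_{1})\geq 0\), and similarly in the second argument.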
-
3.8
Show that Heun’s method is unstable.
-
3.9
We study a nonconservative method for Burgers’s equation. Assume that \(u^{0}_{j}\in[0,1]\) for all j. Then the characteristic speed is nonnegative, and we define
$$\displaystyle u^{n+1}_{j}=u^{n}_{j}-\lambda u^{n+1}_{j}\left(u^{n}_{j}-u^{n}_{j-1}\right),\quad n\geq 0,$$(3.118)where \(\lambda=\Updelta t/{\Updelta x}\).
-
(a)
Show that this yields a monotone method, provided that a CFL condition holds.
-
(b)
Show that this method is consistent and determine the truncation error.
-
3.10
Assume that \(f^{\prime}(u)> 0\) and that \(f^{\prime\prime}(u)\geq 2c> 0\) for all u in the range of \(u_{0}\). We use the upwind method to generate approximate solutions to
$$\displaystyle u_{t}+f(u)_{x}=0,\quad u(x,0)=u_{0}(x);$$(3.119)i.e., we set
$$\displaystyle u^{n+1}_{j}=u^{n}_{j}-\lambda\left(f(u^{n}_{j})-f(u^{n}_{j-1})\right).$$Set
$$\displaystyle v^{n}_{j}=\frac{u^{n}_{j}-u^{n}_{j-1}}{{\Updelta x}}.$$-
(a)
Show that
$$\displaystyle\begin{aligned}\displaystyle v^{n+1}_{j}&\displaystyle=\left(1-\lambda f^{\prime}(u^{n}_{j-1})\right)v^{n}_{j}+\lambda f^{\prime}(u^{n}_{j-1})v^{n}_{j-1}\\ \displaystyle&\displaystyle\quad-\frac{\Updelta t}{2}\left(f^{\prime\prime}(\eta_{j-1/2})\left(v^{n}_{j}\right)^{2}+f^{\prime\prime}(\eta_{j-3/2})\left(v^{n}_{j-1}\right)^{2}\right),\end{aligned}$$where \(\eta_{j-1/2}\) is between \(u^{n}_{j}\) and \(u^{n}_{j-1}\).
-
(b)
Next, assume inductively that
$$\displaystyle v^{n}_{j}\leq\frac{1}{(n+2)c{\Updelta t}},\quad\text{for all $j$,}$$and set \(\hat{v}^{n}=\max\{\max_{j}v^{n}_{j},0\}\). Then show that
$$\displaystyle\hat{v}^{n+1}\leq\hat{v}^{n}-c\Updelta t\left(\hat{v}^{n}\right)^{2}.$$ -
(c)
Use this to show that
$$\displaystyle\hat{v}^{n}\leq\frac{\hat{v}^{0}}{1+\hat{v}^{0}cn\Updelta t}.$$ -
(d)
Show that this implies that
$$\displaystyle u^{n}_{i}-u^{n}_{j}\leq{\Updelta x}(i-j)\frac{\hat{v}^{0}}{1+\hat{v}^{0}cn\Updelta t},$$for \(i\geq j\).
-
(e)
Let u be the entropy solution of (3.119), and assume that
$$\displaystyle 0\leq\max_{x}u_{0}^{\prime}(x)=M<\infty.$$Show that for almost every x, y, and t we have that
$$\displaystyle\frac{u(x,t)-u(y,t)}{x-y}\leq\frac{M}{1+cMt}.$$(3.120)This is the Oleı̆nik entropy condition for convex scalar conservation laws.
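The decay estimate (3.120) can be observed directly in computations. The Python sketch below (our own construction; the data and parameters are illustrative) runs the upwind scheme on the Burgers flux \(f(u)=u^{2}/2\), for which \(f^{\prime\prime}=1\) and we may take \(c=1/2\), with smooth positive data, and checks that the largest forward difference quotient has decayed by time \(t=1\):

```python
import numpy as np

def f(u):                            # Burgers flux: f'' = 1, so c = 1/2
    return 0.5 * u**2

N, L = 400, 4.0
dx = L / N
x = dx * np.arange(N)
u = 1.0 + 0.5 * np.exp(-10.0 * (x - 1.0) ** 2)   # f'(u) = u > 0 everywhere
dt = 0.5 * dx / np.max(u)            # CFL condition with a margin
v0 = np.max(np.diff(u)) / dx         # largest initial forward slope

t = 0.0
while t < 1.0:
    u[1:] -= (dt / dx) * (f(u[1:]) - f(u[:-1]))  # upwind update (f' > 0)
    t += dt

v1 = np.max(np.diff(u)) / dx
assert v1 < 0.9 * v0                 # positive slopes have decayed
```

Here (3.120) predicts a decay of the positive slope by roughly the factor \(1/(1+c\hat{v}^{0}t)\); numerical viscosity only reinforces the decay, so the check is comfortably satisfied.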
-
3.11
Assume that f is as in the previous exercise, and that \(u_{0}\) is periodic with period p.
-
(a)
Use uniqueness of the entropy solution to (3.119) to show that the entropy solution \(u(x,t)\) is also periodic in x with period p.
-
(b)
Then use the Oleı̆nik entropy condition (3.120) to deduce that
$$\displaystyle\sup_{x}u(x,t)-\inf_{x}u(x,t)\leq\frac{Mp}{1+cMt}.$$Thus \(\lim_{t\to\infty}u(x,t)=\bar{u}\) for some constant \(\bar{u}\).
-
(c)
Use conservation to show that
$$\displaystyle\bar{u}=\frac{1}{p}\int_{0}^{p}u_{0}(x)\,dx.$$
-
3.12
Let \(u_{n}\colon[0,1)\to[-1,1]\) be defined as
$$\displaystyle u_{n}(x)=\begin{cases}1&x\in[2k/2n,(2k+1)/2n),\\ -1&x\in[(2k+1)/2n,(2k+2)/2n),\end{cases}\quad\text{for $k=0,\ldots,n-1$,}$$for \(n\in\mathbb{N}\). Find the weak limit of \(u_{n}\) as \(n\to\infty\), and the associated Young measure.
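A candidate answer can be tested numerically. The Python sketch below (our construction) checks that for several continuous functions g, the average of \(g(u_{n})\) against a fixed test function approaches \(\frac{1}{2}\left(g(1)+g(-1)\right)\) times the average of the test function, consistent with the Young measure \(\frac{1}{2}(\delta_{-1}+\delta_{1})\):

```python
import numpy as np

def u_n(x, n):
    """Exercise 3.12: +1 on [2k/(2n), (2k+1)/(2n)), -1 on the next cell."""
    return np.where(np.floor(2 * n * x).astype(np.int64) % 2 == 0, 1.0, -1.0)

M = 2_000_000
x = (np.arange(M) + 0.5) / M          # midpoint quadrature on [0, 1)
phi = np.exp(-x) * np.sin(3 * x)      # a fixed test function

for g in (lambda s: s, lambda s: s**2, np.cosh):
    candidate = 0.5 * (g(1.0) + g(-1.0)) * np.mean(phi)
    assert abs(np.mean(g(u_n(x, 5000)) * phi) - candidate) < 1e-3
```

Taking \(g(s)=s\) gives weak limit 0, while \(g(s)=s^{2}\) gives weak limit 1, so no subsequence of \(u_{n}\) converges strongly; this is exactly the situation Young measures are designed to capture.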
-
3.13
We shall consider a scalar conservation law with a ‘‘fractal’’ function as the initial data. Define the set of piecewise linear functions
$$\displaystyle\mathcal{D}=\{\phi(x)=Ax+B\mid x\in[a,b],A,B\in\mathbb{R}\},$$and the map
$$\displaystyle F(\phi)=\begin{cases}2D(x-a)+\phi(a)&\text{for $x\in[a,a+L/3]$},\\ -D(x-a)+\phi(a)&\text{for $x\in[a+L/3,a+2L/3]$},\\ 2D(x-b)+\phi(b)&\text{for $x\in[a+2L/3,b]$},\end{cases}$$for \(\phi\in\mathcal{D}\), where \(L=b-a\) and \(D=(\phi(b)-\phi(a))/L\). For a nonnegative integer k introduce \(\chi_{j,k}\) as the characteristic function of the interval \(I_{j,k}=[j/3^{k},(j+1)/3^{k}]\), \(j=0,\dots,3^{k+1}-1\). We define functions \(\{v_{k}\}\) recursively as follows. Let
$$\displaystyle v_{0}(x)=\begin{cases}0&\text{for $x\leq 0$},\\ x&\text{for $0\leq x\leq 1$},\\ 1&\text{for $1\leq x\leq 2$},\\ 3-x&\text{for $2\leq x\leq 3$},\\ 0&\text{for $3\leq x$}.\end{cases}$$Assume that \(v_{j,k}\) is linear on \(I_{j,k}\) and let
$$\displaystyle v_{k}=\sum_{j=-3^{k}}^{3^{k}-1}v_{j,k}\chi_{j,k},$$(3.121)and define the next function \(v_{k+1}\) by
$$\displaystyle v_{k+1}=\sum_{j=0}^{3^{k+1}-1}F(v_{j,k})\chi_{j,k}=\sum_{j=0}^{3^{k+2}-1}v_{j,k+1}\chi_{j,k+1}.$$(3.122)In the left part of Fig. 3.9 we show the effect of the map F, and on the right we show \(v_{5}(x)\) (which is piecewise linear on \(3^{6}=729\) segments).
-
(a)
Show that the sequence \(\left\{v_{k}\right\}_{k> 1}\) is a Cauchy sequence in the supremum norm, and hence we can define a continuous function v by setting
$$\displaystyle v(x)=\lim_{k\to\infty}v_{k}(x).$$ -
(b)
Show that v is not of bounded variation, and determine the total variation of \(v_{k}\).
-
(c)
Show that
$$\displaystyle v(j/3^{k})=v_{k}(j/3^{k}),$$for all integers \(j=0,\dots,3^{k+1}\), \(k\in\mathbb{N}\).
-
(d)
Assume that f is a \(C^{1}\) function on \([0,1]\) with \(0\leq f^{\prime}(u)\leq 1\). We are interested in solving the conservation law
$$\displaystyle u_{t}+f(u)_{x}=0,\quad u_{0}(x)=v(x).$$To this end we shall use the upwind scheme defined by (3.10), with \({\Updelta t}={\Updelta x}=1/3^{k}\), and
$$\displaystyle u^{0}_{j}=v(j{\Updelta x}).$$Show that \(u_{{\Updelta t}}(x,t)\) converges to an entropy solution of the conservation law above.
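The recursion (3.121)–(3.122) is convenient to explore numerically. From the definition of F, the two new interior values of a cell with endpoint values \(v_{a}\), \(v_{b}\) are \(v_{a}+\frac{2}{3}(v_{b}-v_{a})\) and \(v_{a}+\frac{1}{3}(v_{b}-v_{a})\), so each cell with nonzero slope has its contribution to the total variation multiplied by \(\frac{2}{3}+\frac{1}{3}+\frac{2}{3}=\frac{5}{3}\). The Python sketch below (function names are ours) verifies \(\operatorname{TV}(v_{k})=2\left(\frac{5}{3}\right)^{k}\), which is the point of part (b):

```python
import numpy as np

def refine(v):
    """Apply F to every linear piece: a cell with endpoint values a, b gets
    the two new interior values a + 2(b-a)/3 and a + (b-a)/3."""
    w = []
    for a, b in zip(v[:-1], v[1:]):
        d = b - a
        w.extend([a, a + 2 * d / 3, a + d / 3])
    w.append(v[-1])
    return np.array(w)

def tv(v):
    """Total variation of the piecewise linear interpolant of v."""
    return np.sum(np.abs(np.diff(v)))

v = np.array([0.0, 1.0, 1.0, 0.0])    # v_0 sampled at x = 0, 1, 2, 3
for k in range(6):
    assert abs(tv(v) - 2.0 * (5.0 / 3.0) ** k) < 1e-9
    v = refine(v)
```

Since \(\operatorname{TV}(v_{k})\to\infty\) while \(\left\|v_{k+1}-v_{k}\right\|_{\infty}\) shrinks geometrically, the limit v is continuous but not of bounded variation, as parts (a) and (b) assert.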
Notes
- 1.
This definition is slightly different from the standard definition of T.V. stable methods.
- 2.
Often called von Neumann stability.
- 3.
Where the integral over the compact interval \([-A,A]\) in (3.91) has been replaced by an integral over the entire real line.
Copyright information
© 2015 Springer-Verlag Berlin Heidelberg
Holden, H., Risebro, N.H. (2015). A Short Course in Difference Methods. In: Front Tracking for Hyperbolic Conservation Laws. Applied Mathematical Sciences, vol 152. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-47507-2_3
Print ISBN: 978-3-662-47506-5
Online ISBN: 978-3-662-47507-2