In this chapter, we deal with finite-difference methods for parabolic partial differential equations, including algorithms, stability and convergence analysis, and extrapolation techniques of numerical solutions.

1 Finite-Difference Schemes

In this section, we will discuss the finite-difference methods for parabolic partial differential equation problems (parabolic PDE problems). Usually, a parabolic partial differential equation problem is formulated as follows:

$$\displaystyle{ \left \{\begin{array}{ll} { \partial u \over \partial \tau } = a(x,\tau ){ {\partial }^{2}u \over \partial {x}^{2}} +&b(x,\tau ){ \partial u \over \partial x} + c(x,\tau )u + g(x,\tau ), \\ &x_{l} \leq x \leq x_{u},\quad 0 \leq \tau \leq T, \\ u(x,0) = f(x), &x_{l} \leq x \leq x_{u}, \\ u(x_{l},\tau ) = f_{l}(\tau ), &0 \leq \tau \leq T, \\ u(x_{u},\tau ) = f_{u}(\tau ), &0 \leq \tau \leq T,\end{array} \right . }$$
(7.1)

where a(x, τ) > 0 on the domain \([x_{l},x_{u}] \times [0,T]\) and the compatibility conditions \(f(x_{l}) = f_{l}(0)\) and \(f(x_{u}) = f_{u}(0)\) hold. A European option problem can sometimes be formulated approximately in this way by imposing an approximate boundary condition on an artificial boundary. For most European option problems, however, the problem is in, or can be transformed into, the following degenerate parabolic partial differential equation problem:

$$\displaystyle{ \left \{\begin{array}{ll} { \partial u \over \partial \tau } = a(x,\tau ){ {\partial }^{2}u \over \partial {x}^{2}} +&b(x,\tau ){ \partial u \over \partial x} + c(x,\tau )u + g(x,\tau ), \\ &x_{l} \leq x \leq x_{u},\quad 0 \leq \tau \leq T, \\ u(x,0) = f(x), &x_{l} \leq x \leq x_{u},\end{array} \right . }$$
(7.2)

where a(x, τ) ≥ 0 on the domain \([x_{l},x_{u}] \times [0,T]\),

$$\displaystyle{ \left \{\begin{array}{l} b(x_{l},\tau ) -{ \partial a \over \partial x} (x_{l},\tau ) \geq 0,\quad 0 \leq \tau \leq T, \\ a\left (x_{l},\tau \right ) = 0,\quad 0 \leq \tau \leq T,\end{array} \right . }$$
(7.3)

and

$$\displaystyle{ \left \{\begin{array}{l} b(x_{u},\tau ) -{ \partial a \over \partial x} (x_{u},\tau ) \leq 0,\quad 0 \leq \tau \leq T, \\ a\left (x_{u},\tau \right ) = 0,\quad 0 \leq \tau \leq T. \end{array} \right . }$$
(7.4)

For example, the prices of vanilla European call/put options are solutions of the problem

$$\displaystyle{\left \{\begin{array}{l} { \partial V \over \partial t} +{{ 1 \over 2} \sigma }^{2}(S){S}^{2}{ {\partial }^{2}V \over \partial {S}^{2}} + (r - D_{0})S{ \partial V \over \partial S} - rV = 0\;,\;0 \leq S,\;\;0 \leq t \leq T, \\ V (S,T) =\max (\pm (S - E),0),\;\;0 \leq S.\end{array} \right .}$$

Through the transformation

$$\displaystyle{\left \{\begin{array}{l} \xi ={ S \over S + E} , \\ \tau = T - t, \\ V (S,t) = (S + E)\overline{V }(\xi ,\tau ), \end{array} \right .}$$

the problem is converted into

$$\displaystyle{\left \{\begin{array}{ll} { \partial \overline{V } \over \partial \tau } ={{ 1 \over 2} \bar{\sigma }}^{2}{(\xi )\xi }^{2}{(1-\xi )}^{2}{ {\partial }^{2}\overline{V } \over {\partial \xi }^{2}} + (r - D_{0})\xi (1-\xi ){ \partial \overline{V } \over \partial \xi } -&[r(1-\xi ) + D_{0}\xi ]\overline{V }, \\ &0 \leq \xi \leq 1,\quad 0 \leq \tau \leq T, \\ \overline{V }(\xi ,0) =\max (\pm (2\xi - 1),0), &0 \leq \xi \leq 1, \end{array} \right .}$$

where \(\bar{\sigma }(\xi ) =\sigma (E\xi /(1-\xi ))\). (For details, see Sect. 2.2.5.) Clearly, this problem is in the form (7.2). Moreover, if a stochastic model

$$\displaystyle{dS = udt + wdX}$$

is defined on \([S_{l},S_{u}]\), and the conditions

$$\displaystyle{\left \{\begin{array}{l} u\left (S_{l},t\right ) - w(S_{l},t){ \partial \over \partial S} w(S_{l},t) \geq 0, \\ w\left (S_{l},t\right ) = 0 \end{array} \right .}$$

and

$$\displaystyle{\left \{\begin{array}{l} u\left (S_{u},t\right ) - w(S_{u},t){ \partial \over \partial S} w(S_{u},t) \leq 0, \\ w\left (S_{u},t\right ) = 0 \end{array} \right .}$$

hold, then the prices of European-style derivatives on this random variable are also solutions of a problem of the form (7.2). (For details, see Sect. 2.4.)

To find an approximate solution of a partial differential equation problem by finite-difference methods, we first divide the domain \([x_{l},x_{u}] \times [0,T]\) into small subdomains using the lines \(x_{m} = x_{l} + m\Delta x\) and \({\tau }^{n} = n\Delta \tau\), where \(\Delta x = (x_{u} - x_{l})/M\), \(\Delta \tau = T/N\), and M, N are positive integers. These lines form a grid, and the points \((x_{m}{,\tau }^{n})\) are called grid points (see Fig. 7.1). We want to find approximate values of the solution at these grid points.
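
For concreteness, setting up such a grid takes only a few lines; the sketch below (Python/NumPy, with example values for the problem data) builds the arrays of grid points \(x_{m}\) and \({\tau }^{n}\) and an array to hold the approximate values \(u_{m}^{n}\).

```python
import numpy as np

# Grid for the domain [x_l, x_u] x [0, T]:
# x_m = x_l + m*dx, m = 0,...,M, and tau^n = n*dtau, n = 0,...,N.
x_l, x_u, T = 0.0, 1.0, 1.0      # example domain data
M, N = 100, 200                  # numbers of subintervals in x and tau

dx = (x_u - x_l) / M
dtau = T / N
x = x_l + dx * np.arange(M + 1)  # x_0, ..., x_M
tau = dtau * np.arange(N + 1)    # tau^0, ..., tau^N

# u[n, m] will hold the approximate value of u(x_m, tau^n).
u = np.zeros((N + 1, M + 1))
```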

Fig. 7.1 A mesh for finite-difference methods

Let us look at the problem (7.2). First consider the case

$$\displaystyle{b(x_{l},\tau ) = 0,\quad 0 \leq \tau \leq T}$$

and

$$\displaystyle{b(x_{u},\tau ) = 0,\quad 0 \leq \tau \leq T.}$$

In this case, the partial differential equation in the problem (7.2) degenerates into an ordinary differential equation at each boundary, and the degenerate parabolic problem (7.2) can be discretized in the following way.

Using forward difference for \({ \partial u \over \partial \tau } (x_{m}{,\tau }^{n})\), second-order central difference for \({ \partial u \over \partial x} (x_{m}{,\tau }^{n})\) and \({ {\partial }^{2}u \over \partial {x}^{2}} (x_{m}{,\tau }^{n})\) in the problem (7.2) at the point \((x_{m}{,\tau }^{n})\), we have

$$\displaystyle\begin{array}{rcl} & &{ u(x_{m}{,\tau }^{n+1}) - u(x_{m}{,\tau }^{n}) \over \Delta \tau } -{ \Delta \tau \over 2} { {\partial }^{2}u \over {\partial \tau }^{2}} (x_{m},\eta ) {}\\ & =& a_{m}^{n}\left [{ u(x_{m+1}{,\tau }^{n}) - 2u(x_{ m}{,\tau }^{n}) + u(x_{ m-1}{,\tau }^{n}) \over \Delta {x}^{2}} -{ \Delta {x}^{2} \over 12} { {\partial }^{4}u \over \partial {x}^{4}} (\xi {,\tau }^{n})\right ] {}\\ & & +b_{m}^{n}\left [{ u(x_{m+1}{,\tau }^{n}) - u(x_{ m-1}{,\tau }^{n}) \over 2\Delta x} -{ \Delta {x}^{2} \over 6} { {\partial }^{3}u \over \partial {x}^{3}} (\bar{\xi }{,\tau }^{n})\right ] {}\\ & & +c_{m}^{n}u(x_{ m}{,\tau }^{n}) + g_{ m}^{n}, {}\\ \end{array}$$

where

$$\displaystyle{\eta \in {(\tau }^{n}{,\tau }^{n+1}),\quad \xi \in (x_{ m-1},x_{m+1}),\quad \bar{\xi } \in (x_{m-1},x_{m+1}),}$$

and \(a_{m}^{n},b_{m}^{n},c_{m}^{n}\), and \(g_{m}^{n}\) denote \(a(x_{m}{,\tau }^{n}),b(x_{m}{,\tau }^{n}),c(x_{m}{,\tau }^{n})\), and \(g(x_{m}{,\tau }^{n})\), respectively. Dropping the term \(-{ \Delta \tau \over 2} { {\partial }^{2}u \over {\partial \tau }^{2}} (x_{m},\eta )\) from the left-hand side and the two terms \(-a_{m}^{n}{ \Delta {x}^{2} \over 12} { {\partial }^{4}u \over \partial {x}^{4}} (\xi {,\tau }^{n})\) and \(-b_{m}^{n}{ \Delta {x}^{2} \over 6} { {\partial }^{3}u \over \partial {x}^{3}} (\bar{\xi }{,\tau }^{n})\) from the right-hand side, and denoting the approximate solution of \(u(x_{m}{,\tau }^{n})\) by \(u_{m}^{n}\), we obtain the following approximation to the partial differential equation in the problem (7.2):

$$\displaystyle\begin{array}{rcl} & &{ u_{m}^{n+1} - u_{m}^{n} \over \Delta \tau } = a_{m}^{n}{ u_{m+1}^{n} - 2u_{ m}^{n} + u_{ m-1}^{n} \over \Delta {x}^{2}} + b_{m}^{n}{ u_{m+1}^{n} - u_{ m-1}^{n} \over 2\Delta x} + c_{m}^{n}u_{ m}^{n} + g_{ m}^{n}, {}\\ & & m = 0,1,\cdots \,,M,\quad n = 0,1,\cdots \,,N - 1. {}\\ \end{array}$$

From the initial condition in problem (7.2), we have \(u_{m}^{0} = f(x_{m})\), m = 0, 1, ⋯ , M. Therefore, the degenerate parabolic problem (7.2) can be discretized by

$$\displaystyle{ \left \{\begin{array}{rcl} &&u_{m}^{n+1} = \left ({ a_{m}^{n}\Delta \tau \over \Delta {x}^{2}} +{ b_{m}^{n}\Delta \tau \over 2\Delta x} \right )u_{m+1}^{n} + \left (1 - 2{ a_{m}^{n}\Delta \tau \over \Delta {x}^{2}} + c_{m}^{n}\Delta \tau \right )u_{m}^{n} \\ &&\qquad \qquad + \left ({ a_{m}^{n}\Delta \tau \over \Delta {x}^{2}} -{ b_{m}^{n}\Delta \tau \over 2\Delta x} \right )u_{m-1}^{n} + g_{m}^{n}\Delta \tau , \\ &&\qquad \qquad m = 0,1,\cdots \,,M,\quad n = 0,1,\cdots \,,N - 1, \\ &&u_{m}^{0} = f(x_{m}),\quad m = 0,1,\cdots \,,M.\end{array} \right . }$$
(7.5)

Here, we need to point out that because we discretize ordinary differential equations at the boundaries, only \(u_{0}^{n}\) appears in the equation for m = 0 and only \(u_{M}^{n}\) for m = M. That is, because \(a_{0}^{n} = b_{0}^{n} = a_{M}^{n} = b_{M}^{n} = 0\), \(u_{-1}^{n}\) and \(u_{M+1}^{n}\) actually do not appear in the equations above.

When \(u_{m}^{n}\), m = 0, 1, ⋯ , M are known, we can find \(u_{m}^{n+1}\), m = 0, 1, ⋯ , M from the difference scheme (7.5). Because \(u_{m}^{0}\), m = 0, 1, ⋯ , M are given in the scheme (7.5), this procedure can be carried out for n = 0, 1, ⋯ , N − 1 successively, and the approximate solution at all the grid points can be obtained. This method is called an explicit finite-difference method: once \(u_{m}^{n}\) has been obtained, each equation involves only one unknown, so the unknown \(u_{m}^{n+1}\) can be computed from \(u_{m-1}^{n}\), \(u_{m}^{n}\), and \(u_{m+1}^{n}\) explicitly. Figure 7.2 gives a diagram for this procedure. In deriving the approximation (7.5), we dropped the terms

$$\displaystyle{{ \Delta \tau \over 2} { {\partial }^{2}u \over {\partial \tau }^{2}} (x_{m},\eta ) - a_{m}^{n}{ \Delta {x}^{2} \over 12} { {\partial }^{4}u \over \partial {x}^{4}} (\xi {,\tau }^{n}) - b_{ m}^{n}{ \Delta {x}^{2} \over 6} { {\partial }^{3}u \over \partial {x}^{3}} (\bar{\xi }{,\tau }^{n})}$$

from the equations. These terms as a whole are called the truncation error of scheme (7.5). Because the truncation error is \(O(\Delta {x}^{2},\Delta \tau )\), we say that for scheme (7.5), the truncation error is second order in Δx and first order in Δτ.
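
To make the procedure concrete, here is a minimal sketch of the explicit scheme (7.5) in Python/NumPy. The coefficient functions a, b, c, g and the initial function f are assumed to be supplied by the user and to satisfy the degeneracy conditions above (a and b vanish at both boundaries), so the stencil never needs values outside the grid; the function name and signature are ours, not from the text.

```python
import numpy as np

def explicit_scheme(a, b, c, g, f, x_l, x_u, T, M, N):
    """Explicit scheme (7.5) for the degenerate problem (7.2).

    a, b, c, g are functions of (x, tau); f is the initial function.
    It is assumed that a and b vanish at x_l and x_u, so no values
    outside [x_l, x_u] are needed.
    """
    dx = (x_u - x_l) / M
    dtau = T / N
    x = x_l + dx * np.arange(M + 1)

    u = f(x).astype(float)                     # u_m^0 = f(x_m)
    for n in range(N):
        tau_n = n * dtau
        a_n, b_n = a(x, tau_n), b(x, tau_n)
        c_n, g_n = c(x, tau_n), g(x, tau_n)
        u_new = np.empty_like(u)
        # interior points m = 1, ..., M-1
        u_new[1:-1] = (u[1:-1] + dtau * (
            a_n[1:-1] * (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
            + b_n[1:-1] * (u[2:] - u[:-2]) / (2 * dx)
            + c_n[1:-1] * u[1:-1] + g_n[1:-1]))
        # boundaries m = 0 and m = M: a = b = 0 there, so the PDE
        # degenerates into the ODE du/dtau = c*u + g
        u_new[0] = u[0] + dtau * (c_n[0] * u[0] + g_n[0])
        u_new[-1] = u[-1] + dtau * (c_n[-1] * u[-1] + g_n[-1])
        u = u_new
    return x, u
```

As discussed in the next section, Δτ must satisfy a restriction of the type (7.26) for this computation to be stable.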

Fig. 7.2 An explicit finite-difference discretization

Now let us discretize the problem (7.2) at the point \((x_{m}{,\tau }^{n+1/2})\). For \({ \partial u \over \partial \tau } (x_{m}{,\tau }^{n+1/2})\), we use the central difference. The derivative \({ \partial u \over \partial x} (x_{m}{,\tau }^{n+1/2})\) is approximated first by the average of the values at the points \((x_{m}{,\tau }^{n})\) and \((x_{m}{,\tau }^{n+1})\), and then the derivatives at these two points are discretized by central differences. The second derivative \({ {\partial }^{2}u \over \partial {x}^{2}} (x_{m}{,\tau }^{n+1/2})\) is treated similarly. In this way, the degenerate parabolic problem (7.2) can be approximated by the implicit finite-difference method:

$$\displaystyle{ \left \{\begin{array}{ll} { u_{m}^{n+1} - u_{m}^{n} \over \Delta \tau } ={ a_{m}^{n+1/2} \over 2} \left ({ u_{m+1}^{n+1} - 2u_{m}^{n+1} + u_{m-1}^{n+1} \over \Delta {x}^{2}} \right .& \\ \left .+\,{ u_{m+1}^{n} - 2u_{m}^{n} + u_{m-1}^{n} \over \Delta {x}^{2}} \right ) & \\ +{ b_{m}^{n+1/2} \over 2} \left ({ u_{m+1}^{n+1} - u_{m-1}^{n+1} \over 2\Delta x} +{ u_{m+1}^{n} - u_{m-1}^{n} \over 2\Delta x} \right ) & \\ +{ c_{m}^{n+1/2} \over 2} (u_{m}^{n+1} + u_{m}^{n}) + g_{m}^{n+1/2}, & \\ &m = 0,1,\cdots \,,M,\quad n = 0,1,\cdots \,,N - 1, \\ u_{m}^{0} = f(x_{m}), &m = 0,1,\cdots \,,M.\end{array} \right . }$$
(7.6)

From here, we see that each equation involves six grid points (see Fig. 7.3) and contains three unknowns. As we know, the error of a central difference is second order. For a function, the average of the values at the points \((x_{m}{,\tau }^{n})\) and \((x_{m}{,\tau }^{n+1})\) approximates the value at the point \((x_{m}{,\tau }^{n+1/2})\) with an error of \(O({\Delta \tau }^{2})\) because it is the result of linear interpolation. Therefore, the truncation error of this scheme is \(O(\Delta {x}^{2},{\Delta \tau }^{2})\).

Fig. 7.3 An implicit finite-difference discretization

Similar to the scheme (7.5), because we actually discretize ordinary differential equations at the boundaries, the equations for m = 0 and m = M can be written as

$$\displaystyle\begin{array}{rcl} & &{ u_{m}^{n+1} - u_{m}^{n} \over \Delta \tau } ={ c_{m}^{n+1/2} \over 2} (u_{m}^{n+1} + u_{ m}^{n}) + g_{ m}^{n+1/2}, {}\\ & & \quad m = 0,M,\quad n = 0,1,\cdots \,,N - 1. {}\\ \end{array}$$

Consequently, these equations actually do not involve \(u_{-1}^{n}\) and \(u_{M+1}^{n}\). Furthermore, the equations for m = 0 alone determine \(u_{0}^{n}\), n = 1, 2, ⋯ , N from \(u_{0}^{0}\). For \(u_{M}^{n}\), the situation is similar. However, for \(u_{m}^{n}\) with m ≠ 0, M, the situation is different: we cannot determine \(u_{m}^{n+1}\) from just a few equations. In order to obtain \(u_{m}^{n+1}\), m = 1, 2, ⋯ , M − 1, we have to solve a tridiagonal system of linear equations, and each \(u_{m}^{n+1}\) is determined by all the \(u_{m}^{n}\). Consequently, this method is called an implicit finite-difference method.
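
Each implicit time step therefore amounts to solving one tridiagonal linear system. A standard way to do this is the Thomas algorithm (tridiagonal Gaussian elimination); the following is a minimal sketch, with the argument names lower, diag, upper, rhs chosen by us for the three diagonals and the right-hand side.

```python
import numpy as np

def solve_tridiagonal(lower, diag, upper, rhs):
    """Solve a tridiagonal linear system by the Thomas algorithm.

    In equation i, lower[i] multiplies u[i-1], diag[i] multiplies u[i],
    and upper[i] multiplies u[i+1]; lower[0] and upper[-1] are ignored.
    The cost is O(M) operations per solve.
    """
    n = len(diag)
    c = np.empty(n)              # modified upper diagonal
    d = np.empty(n)              # modified right-hand side
    c[0] = upper[0] / diag[0]
    d[0] = rhs[0] / diag[0]
    for i in range(1, n):
        denom = diag[i] - lower[i] * c[i - 1]
        c[i] = upper[i] / denom if i < n - 1 else 0.0
        d[i] = (rhs[i] - lower[i] * d[i - 1]) / denom
    u = np.empty(n)
    u[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        u[i] = d[i] - c[i] * u[i + 1]
    return u
```

For the systems arising from (7.9), the diagonal entries 1 + α dominate the off-diagonal entries −α ∕ 2, so this elimination is well behaved.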

The problem (7.1) can be discretized similarly. The only difference is that the partial differential equation should not be discretized for m = 0 and m = M because the boundary conditions

$$\displaystyle{u(x_{l},\tau ) = f_{l}(\tau )}$$

and

$$\displaystyle{u(x_{u},\tau ) = f_{u}(\tau )}$$

provide the equations we need. When a(x, τ) is equal to a positive constant a, b(x, τ) = 0, c(x, τ) = 0, and g(x, τ) = 0, i.e., for the heat conductivity problem

$$\displaystyle{ \left \{\begin{array}{ll} { \partial u \over \partial \tau } = a{ {\partial }^{2}u \over \partial {x}^{2}} , &x_{l} \leq x \leq x_{u},\quad 0 \leq \tau \leq T, \\ u(x,0) = f(x), &x_{l} \leq x \leq x_{u}, \\ u(x_{l},\tau ) = f_{l}(\tau ), &0 \leq \tau \leq T, \\ u(x_{u},\tau ) = f_{u}(\tau ),\quad &0 \leq \tau \leq T, \end{array} \right . }$$
(7.7)

corresponding to the explicit scheme (7.5), the problem (7.7) can be approximated by

$$\displaystyle{ \left \{\begin{array}{ll} u_{m}^{n+1} =\alpha u_{m+1}^{n} + (1 - 2\alpha )u_{m}^{n} +\alpha u_{m-1}^{n}, \\ &m = 1,2,\cdots \,,M - 1, \\ &n = 0,1,\cdots \,,N - 1, \\ u_{0}^{n+1} = f_{l}{(\tau }^{n+1}), &n = 0,1,\cdots \,,N - 1, \\ u_{M}^{n+1} = f_{u}{(\tau }^{n+1}), &n = 0,1,\cdots \,,N - 1, \\ u_{m}^{0} = f(x_{m}), &m = 0,1,\cdots \,,M,\end{array} \right . }$$
(7.8)

where

$$\displaystyle{\alpha ={ a\Delta \tau \over \Delta {x}^{2}} .}$$

In analogy with the implicit scheme (7.6), the problem (7.7) can also be approximated by

$$\displaystyle{ \left \{\begin{array}{rcl} &&{ u_{m}^{n+1} - u_{m}^{n} \over \Delta \tau } ={ a \over 2} \left ({ u_{m+1}^{n+1} - 2u_{m}^{n+1} + u_{m-1}^{n+1} \over \Delta {x}^{2}} \right . \\ &&\qquad \qquad \qquad \qquad \left .+{ u_{m+1}^{n} - 2u_{m}^{n} + u_{m-1}^{n} \over \Delta {x}^{2}} \right ), \\ &&\qquad m = 1,2,\cdots \,,M - 1,\qquad \qquad n = 0,1,\cdots \,,N - 1, \\ &&u_{0}^{n+1} = f_{l}{(\tau }^{n+1}),\qquad \qquad \qquad \qquad n = 0,1,\cdots \,,N - 1, \\ &&u_{M}^{n+1} = f_{u}{(\tau }^{n+1}),\qquad \qquad \qquad \qquad n = 0,1,\cdots \,,N - 1, \\ &&u_{m}^{0} = f(x_{m}),\qquad \qquad \qquad \qquad \qquad m = 0,1,\cdots \,,M,\end{array} \right . }$$
(7.9)

which is called the Crank–Nicolson scheme.

Since \(u(x_{l},\tau )\) and \(u(x_{u},\tau )\) are given, there are only M − 1 unknowns at each time level, and the M − 1 equations in the difference scheme (7.9) can be written together in matrix form:

$$\displaystyle{ \mathbf{A}{\mathbf{u}}^{n+1} = \mathbf{B}{\mathbf{u}}^{n} +{ \mathbf{b}}^{n}, }$$
(7.10)

where

$$\displaystyle\begin{array}{rcl} \mathbf{A}& =& \left [\begin{array}{ccccc} 1+\alpha & - \frac{\alpha } {2} & 0 & \cdots & 0 \\ - \frac{\alpha } {2} & 1+\alpha & - \frac{\alpha } {2} & \ddots & \vdots \\ 0 & - \frac{\alpha } {2} & \ddots & \ddots & 0 \\ \vdots & \ddots & \ddots & \ddots & - \frac{\alpha } {2} \\ 0 & \cdots & 0 & - \frac{\alpha } {2} & 1+\alpha \end{array} \right ], {}\\ \mathbf{B}& =& \left [\begin{array}{ccccc} 1-\alpha & \frac{\alpha } {2} & 0 & \cdots & 0 \\ \frac{\alpha } {2} & 1-\alpha &\frac{\alpha } {2} & \ddots & \vdots \\ 0 & \frac{\alpha } {2} & \ddots & \ddots & 0 \\ \vdots & \ddots & \ddots & \ddots & \frac{\alpha } {2} \\ 0 & \cdots & 0 & \frac{\alpha } {2} & 1-\alpha \end{array} \right ],{}\\ \end{array}$$
$$\displaystyle{{\mathbf{u}}^{n} = \left [\begin{array}{c} u_{1}^{n} \\ u_{2}^{n}\\ \vdots \\ u_{M-2}^{n} \\ u_{M-1}^{n} \end{array} \right ]\quad \mbox{ and}\quad {\mathbf{b}}^{n} = \left [\begin{array}{c} \frac{1} {2}\alpha u_{0}^{n} + \frac{1} {2}\alpha u_{0}^{n+1} \\ 0\\ \vdots\\ 0 \\ \frac{1} {2}\alpha u_{M}^{n} + \frac{1} {2}\alpha u_{M}^{n+1} \end{array} \right ].}$$
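
Putting the pieces together, one possible implementation of the Crank–Nicolson scheme (7.9)–(7.10) for the heat problem (7.7) is sketched below in Python/NumPy. Dense matrices and numpy.linalg.solve are used only for clarity; in practice one would exploit the tridiagonal structure, e.g., with the Thomas algorithm above. The function name and argument list are ours.

```python
import numpy as np

def crank_nicolson_heat(a, f, f_l, f_u, x_l, x_u, T, M, N):
    """Crank-Nicolson scheme (7.9)/(7.10) for the heat problem (7.7)."""
    dx = (x_u - x_l) / M
    dtau = T / N
    alpha = a * dtau / dx**2
    x = x_l + dx * np.arange(M + 1)

    # Matrices A and B of (7.10), acting on the M-1 interior unknowns.
    I = np.eye(M - 1)
    S = np.diag(np.ones(M - 2), 1) + np.diag(np.ones(M - 2), -1)
    A = (1 + alpha) * I - 0.5 * alpha * S
    B = (1 - alpha) * I + 0.5 * alpha * S

    u = f(x).astype(float)                         # u^0
    for n in range(N):
        tau0, tau1 = n * dtau, (n + 1) * dtau
        rhs = B @ u[1:-1]
        # boundary contributions, the vector b^n of (7.10)
        rhs[0] += 0.5 * alpha * (f_l(tau0) + f_l(tau1))
        rhs[-1] += 0.5 * alpha * (f_u(tau0) + f_u(tau1))
        u[1:-1] = np.linalg.solve(A, rhs)          # A u^{n+1} = B u^n + b^n
        u[0], u[-1] = f_l(tau1), f_u(tau1)
    return x, u
```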

Now we consider the problem (7.2) for the case

$$\displaystyle{b(x_{l},\tau ) > 0,\quad 0 \leq \tau \leq T}$$

and

$$\displaystyle{b(x_{u},\tau ) < 0,\quad 0 \leq \tau \leq T.}$$

In this case, the PDE degenerates into hyperbolic partial differential equations at the boundaries, and the first derivative there has to be discretized by a one-sided difference. For example, if in the scheme (7.5) or (7.6) we use a one-sided difference for the first derivative in the equations for m = 0 and m = M, we obtain the approximation we need; we call these the modified schemes (7.5) and (7.6). However, the first derivative is then discretized differently at m = 0 and at m = 1, namely, the discretization “jumps” from m = 0 to m = 1, so between the finite-difference equations at m = 0 and m = 1 the coefficients do not satisfy a Lipschitz condition. This causes some problems in the stability analysis. A similar situation occurs from m = M − 1 to m = M. In order to avoid the “jump,” we can approximate the degenerate parabolic problem (7.2) by the explicit finite-difference method:

$$\displaystyle{ \left \{\begin{array}{ll} { u_{m}^{n+1} - u_{m}^{n} \over \Delta \tau } =&a_{m}^{n}{ u_{m+1}^{n} - 2u_{m}^{n} + u_{m-1}^{n} \over \Delta {x}^{2}} + \Phi _{m}^{n} + c_{m}^{n}u_{m}^{n} + g_{m}^{n}, \\ &\qquad m = 0,1,\cdots \,,M,\quad n = 0,1,\cdots \,,N - 1, \\ u_{m}^{0} = f(x_{m}), &\qquad m = 0,1,\cdots \,,M,\end{array} \right . }$$
(7.11)

where

$$\displaystyle{\Phi _{m}^{n} = \left \{\begin{array}{lll} b_{m}^{n}{ -u_{m+2}^{n} + 4u_{m+1}^{n} - 3u_{m}^{n} \over 2\Delta x} ,&\mbox{ if }&\quad b_{m}^{n} > 0, \\ 0, &\mbox{ if }&\quad b_{m}^{n} = 0, \\ b_{m}^{n}{ 3u_{m}^{n} - 4u_{m-1}^{n} + u_{m-2}^{n} \over 2\Delta x} , &\mbox{ if }&\quad b_{m}^{n} < 0 \end{array} \right .}$$

or by the implicit finite-difference method:

$$\displaystyle{ \left \{\begin{array}{rcl} &&{ u_{m}^{n+1} - u_{m}^{n} \over \Delta \tau } ={ a_{m}^{n+1/2} \over 2} \left ({ u_{m+1}^{n+1} - 2u_{m}^{n+1} + u_{m-1}^{n+1} \over \Delta {x}^{2}} \right . \\ &&\qquad \qquad \qquad \qquad \qquad \left .+{ u_{m+1}^{n} - 2u_{m}^{n} + u_{m-1}^{n} \over \Delta {x}^{2}} \right ) \\ &&\qquad \qquad \qquad + \Phi _{m}^{n+1/2} +{ c_{m}^{n+1/2} \over 2} (u_{m}^{n+1} + u_{m}^{n}) + g_{m}^{n+1/2}, \\ &&\qquad \qquad \qquad \qquad m = 0,1,\cdots \,,M,\quad n = 0,1,\cdots \,,N - 1, \\ &&u_{m}^{0} = f(x_{m}),\qquad m = 0,1,\cdots \,,M, \end{array} \right . }$$
(7.12)

where

$$\displaystyle{\Phi _{m}^{n+1/2} = \left \{\begin{array}{lll} { b_{m}^{n+1/2} \over 2} \left ({ -u_{m+2}^{n+1} + 4u_{m+1}^{n+1} - 3u_{m}^{n+1} \over 2\Delta x} \right .& & \\ \left .+{ -u_{m+2}^{n} + 4u_{m+1}^{n} - 3u_{m}^{n} \over 2\Delta x} \right ), &\mbox{ if }&\quad b_{m}^{n+1/2} > 0, \\ 0, &\mbox{ if }&\quad b_{m}^{n+1/2} = 0, \\ { b_{m}^{n+1/2} \over 2} \left ({ 3u_{m}^{n+1} - 4u_{m-1}^{n+1} + u_{m-2}^{n+1} \over 2\Delta x} \right . & & \\ \left .+{ 3u_{m}^{n} - 4u_{m-1}^{n} + u_{m-2}^{n} \over 2\Delta x} \right ), &\mbox{ if }&\quad b_{m}^{n+1/2} < 0. \end{array} \right .}$$

Scheme (7.12) usually involves eight points, among which there are four unknowns (see Fig. 7.4). However, at the boundaries there are only three unknowns because \(a_{0}^{n+1/2} = a_{M}^{n+1/2} = 0\). When the partial differential equation is discretized in this way, the stability analysis can be done much more easily. In the paper [79] by Sun, Yan, and Zhu, the stability of scheme (7.12) has been studied carefully. Clearly, the truncation error of the scheme (7.11) is \(O(\Delta {x}^{2},\Delta \tau )\) and that of the scheme (7.12) is \(O(\Delta {x}^{2},{\Delta \tau }^{2})\).
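
For illustration, the one-sided term \(\Phi _{m}^{n}\) of the explicit scheme (7.11) can be coded pointwise as below (a sketch; the helper name phi is ours). The stencil is chosen according to the sign of b so that, at a boundary where the equation degenerates, it points into the computational domain.

```python
def phi(u, b, m, dx):
    """One-sided approximation Phi_m^n of b * du/dx used in scheme (7.11).

    u is the vector (u_0^n, ..., u_M^n) and b the value b_m^n; the
    second-order one-sided stencil is assumed to stay inside the grid.
    """
    if b > 0.0:    # forward-biased one-sided difference
        return b * (-u[m + 2] + 4.0 * u[m + 1] - 3.0 * u[m]) / (2.0 * dx)
    elif b < 0.0:  # backward-biased one-sided difference
        return b * (3.0 * u[m] - 4.0 * u[m - 1] + u[m - 2]) / (2.0 * dx)
    else:
        return 0.0
```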

Fig. 7.4 Implicit eight-point finite-difference discretizations

Therefore, in order to find a solution, we can use either an explicit finite-difference method or an implicit finite-difference method. In the next section, we will see that for an explicit method, the step size Δτ must be less than a constant times \(\Delta {x}^{2}\) for the computation to be stable. Thus, if a small Δx must be adopted in order to obtain satisfactory results, the computation could take quite a long time. In contrast, there is no restriction on the step size Δτ for implicit finite-difference methods. This is the main advantage of implicit methods over explicit methods.

A European-style derivative could involve several random state variables. In this case, we need to discretize a multi-dimensional problem, which will be dealt with in Chaps. 8 and 10. Usually, an American-style derivative problem can be formulated as a free boundary problem. Discretization of such a problem will be discussed in Chap. 9.

2 Stability and Convergence Analysis

2.1 Stability

Stability is concerned with the propagation of errors. During the computation, truncation errors are brought into the approximate solution at each step. Rounding errors are also introduced all the time because any computer carries only a finite number of digits. If, for a given finite-difference method, the errors are not magnified at each step in some norm, then we say that the finite-difference method is stable. There are two norms that are often used in studying stability. Suppose

$$\displaystyle{\mathbf{x} = {(x_{1},x_{2},\cdots \,,x_{M-1})}^{T }}$$

is a vector with M − 1 components. The \(\mathrm{L}_{\infty }\) and \(\mathrm{L}_{2}\) norms of the vector x are defined as follows:

$$\displaystyle{\vert \vert \mathbf{x}\vert \vert _{\mathrm{L_{\infty }}} = \max _{1\leq m\leq M-1}\vert x_{m}\vert }$$

and

$$\displaystyle{\vert \vert \mathbf{x}\vert \vert _{\mathrm{L}_{2}} ={ \left ({ 1 \over M - 1} \sum _{m=1}^{M-1}x_{ m}^{2}\right )}^{1/2}.}$$

Here, M − 1 could be any positive integer and is allowed to go to infinity.
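
In code, with the averaging factor 1 ∕ (M − 1) included in the \(\mathrm{L}_{2}\) norm as defined above, these norms read (a minimal sketch):

```python
import numpy as np

def norm_inf(x):
    """Maximum (L_infinity) norm of a vector with M-1 components."""
    return np.max(np.abs(x))

def norm_l2(x):
    """Discrete L_2 norm with the 1/(M-1) averaging factor used here."""
    return np.sqrt(np.sum(x**2) / len(x))
```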

Stability of Explicit Finite-Difference Methods for the Heat Equation. Consider the explicit finite-difference method (7.8) for the heat conductivity problem. Suppose an initial error \(e_{m}^{0}\) appears in computing \(f(x_{m})\) for m = 1, 2, ⋯ , M − 1. That is, instead of \(f(x_{m})\), \(f(x_{m}) + e_{m}^{0}\) is given as the initial value. We assume that there is no error from the boundary conditions, that is, \(e_{0}^{0} = e_{M}^{0} = 0\). Let \(\tilde{u}_{m}^{n},m = 0,1,\cdots \,,M,n = 0,1,\cdots \,,N\), be the computed solution. We want to study how \(\tilde{u}_{m}^{n}\) is affected by \(e_{m}^{0}\). This is usually referred to as studying the stability of a scheme with respect to initial values. Clearly, \(\tilde{u}_{m}^{n}\) satisfies

$$\displaystyle{\left \{\begin{array}{ll} \tilde{u}_{m}^{n+1} =\alpha \tilde{ u}_{m+1}^{n} + (1 - 2\alpha )\tilde{u}_{m}^{n} +\alpha \tilde{ u}_{m-1}^{n}, \\ m = 1,2,\cdots \,,M - 1, &n = 0,1,\cdots \,,N - 1, \\ \tilde{u}_{0}^{n+1} = f_{l}{(\tau }^{n+1}), &n = 0,1,\cdots \,,N - 1, \\ \tilde{u}_{M}^{n+1} = f_{u}{(\tau }^{n+1}), &n = 0,1,\cdots \,,N - 1, \\ \tilde{u}_{m}^{0} = f(x_{m}) + e_{m}^{0}, &m = 0,1,\cdots \,,M. \end{array} \right .}$$

Let

$$\displaystyle{e_{m}^{n} =\tilde{ u}_{ m}^{n} - u_{ m}^{n},\quad m = 0,1,\cdots \,,M,\quad n = 0,1,\cdots \,,N.}$$

Taking the difference of the scheme (7.8) and this system, we get

$$\displaystyle{ \left \{\begin{array}{ll} e_{m}^{n+1} =\alpha e_{m+1}^{n} + (1 - 2\alpha )e_{m}^{n} +\alpha e_{m-1}^{n}, \\ m = 1,2,\cdots \,,M - 1, &n = 0,1,\cdots \,,N - 1, \\ e_{0}^{n+1} = 0, &n = 0,1,\cdots \,,N - 1, \\ e_{M}^{n+1} = 0, &n = 0,1,\cdots \,,N - 1, \\ e_{m}^{0} = e_{m}^{0}, &m = 0,1,\cdots \,,M.\end{array} \right . }$$
(7.13)

For this scheme, we can analyze its stability in two ways. First, we show that this scheme is stable in the maximum norm if α ≤ 1 ∕ 2. In this case, all the coefficients in the right-hand side of the finite-difference equation, α, 1 − 2α, α, are nonnegative, so

$$\displaystyle\begin{array}{rcl} \vert e_{m}^{n+1}\vert & =& \vert \alpha e_{ m+1}^{n} + (1 - 2\alpha )e_{ m}^{n} +\alpha e_{ m-1}^{n}\vert {}\\ &\leq & \alpha \vert e_{m+1}^{n}\vert + (1 - 2\alpha )\vert e_{ m}^{n}\vert +\alpha \vert e_{ m-1}^{n}\vert {}\\ &\leq & \max _{1\leq m\leq M-1}\vert e_{m}^{n}\vert ,\quad m = 1,2,\cdots \,,M - 1, {}\\ \end{array}$$

or

$$\displaystyle{\max _{1\leq m\leq M-1}\vert e_{m}^{n+1}\vert \leq \max _{ 1\leq m\leq M-1}\vert e_{m}^{n}\vert ,}$$

where we have used the fact \(e_{0}^{n} = e_{M}^{n} = 0\), n = 0, 1, ⋯ , N. This is true for any n. Therefore,

$$\displaystyle{\max _{1\leq m\leq M-1}\vert e_{m}^{n}\vert \leq \max _{ 1\leq m\leq M-1}\vert e_{m}^{0}\vert }$$

or

$$\displaystyle{\vert \vert {\mathbf{e}}^{n}\vert \vert _{\mathrm{ L_{\infty }}} \leq \vert \vert {\mathbf{e}}^{0}\vert \vert _{\mathrm{ L_{\infty }}}.}$$

Consequently, the difference scheme (7.8) is stable with respect to initial values in the maximum norm. This method of analyzing stability is very simple. Unfortunately, it seems that this method works only for explicit schemes with nonnegative coefficients on the right-hand side.

Now let us study the stability of scheme (7.8) in another way. Set

$$\displaystyle{ \mathbf{A}_{1} = \left [\begin{array}{ccccc} 1 - 2\alpha & \alpha & 0 &\cdots & 0\\ \alpha &1 - 2\alpha & \alpha & \ddots & \vdots \\ 0 & \alpha &1 - 2\alpha & \ddots & 0\\ \vdots & \ddots & \ddots & \ddots & \alpha \\ 0 & \cdots & 0 & \alpha &1 - 2\alpha \end{array} \right ],{ \mathbf{e}}^{n} = \left [\begin{array}{c} e_{1}^{n} \\ e_{2}^{n}\\ \vdots\\ \vdots \\ e_{M-1}^{n} \end{array} \right ]. }$$
(7.14)

From the system (7.13), we see that \({\mathbf{e}}^{n+1}\) and \({\mathbf{e}}^{n}\) satisfy the following relation:

$$\displaystyle{{\mathbf{e}}^{n+1} = \mathbf{A}_{ 1}{\mathbf{e}}^{n}.}$$

Suppose λ is an eigenvalue of A 1 and \(\mathbf{x} = {(x_{1},x_{2},\cdots \,,x_{M-1})}^{T}\) is an associated eigenvector, i.e., we assume that λ and x satisfy the equation

$$\displaystyle{\mathbf{A}_{1}\mathbf{x} =\lambda \mathbf{x}.}$$

Now let us find M − 1 linearly independent eigenvectors of A 1 and their associated eigenvalues. Define

$$\displaystyle{x_{0} = x_{M} = 0.}$$

Then the equation above can be rewritten as

$$\displaystyle{ \alpha x_{m-1} + (1 - 2\alpha )x_{m} +\alpha x_{m+1} =\lambda x_{m},\quad 1 \leq m \leq M - 1, }$$
(7.15)

or

$$\displaystyle{ \alpha x_{m-1} + (1 - 2\alpha -\lambda )x_{m} +\alpha x_{m+1} = 0,\quad 1 \leq m \leq M - 1. }$$
(7.16)

For the system (7.16) with arbitrary \(x_{0}\) and \(x_{M}\), let us try to find a solution in the form

$$\displaystyle{ x_{m} {=\mu }^{m},\quad 0 \leq m \leq M. }$$
(7.17)

Substituting it into system (7.16), we have

$$\displaystyle{{\left [\alpha +(1 - 2\alpha -\lambda )\mu {+\alpha \mu }^{2}\right ]\mu }^{m-1} = 0,\quad 1 \leq m \leq M - 1,}$$

which can be reduced to one equation:

$$\displaystyle{{ \alpha \mu }^{2} + (1 - 2\alpha -\lambda )\mu +\alpha = 0. }$$
(7.18)

Denote the two roots of Eq. (7.18) by \(\mu _{1}\) and \(\mu _{2}\). It is clear that \(\mu _{1}\) and \(\mu _{2}\) satisfy the following conditions:

$$\displaystyle{\mu _{1} +\mu _{2} = -\frac{1} {\alpha } (1 - 2\alpha -\lambda ),\quad \quad \mu _{1}\mu _{2} = 1.}$$

Case one: \(\mu _{1} =\mu _{2} =\mu _{{\ast}}.\,\) In this case,

$$\displaystyle{x_{m} = m\mu _{{\ast}}^{m},\quad 0 \leq m \leq M,}$$

also is a solution of the system (7.16). Substituting it into system (7.16) yields

$$\displaystyle\begin{array}{rcl} & & \alpha (m - 1)\mu _{{\ast}}^{m-1} + (1 - 2\alpha -\lambda )m\mu _{ {\ast}}^{m} +\alpha (m + 1)\mu _{ {\ast}}^{m+1} {}\\ & =& -\alpha \mu _{{\ast}}^{m-1} +\alpha \mu _{ {\ast}}^{m+1} =\alpha \mu _{ {\ast}}^{m-1}(\mu _{ {\ast}}^{2} - 1) = 0,\quad 1 \leq m \leq M - 1, {}\\ \end{array}$$

because \(\mu _{1}\mu _{2} =\mu _{ {\ast}}^{2} = 1\). Hence \(x_{m} = m\mu _{{\ast}}^{m},\quad 0 \leq m \leq M,\) is indeed another solution of the system (7.16) besides the solution (7.17) with \(\mu =\mu _{{\ast}}\). Thus for any \(c_{1}\) and \(c_{2}\),

$$\displaystyle{x_{m} = (\,c_{1} + c_{2}m\,)\mu _{{\ast}}^{m},\quad 0 \leq m \leq M,}$$

should be a solution of the system (7.16). It follows from \(x_{0} = x_{M} = 0\) that \(c_{1} = c_{2} = 0.\) Consequently, \(x_{m} \equiv 0\), 1 ≤ m ≤ M − 1, which contradicts the fact that \(\mathbf{x} = {(x_{1},x_{2},\cdots \,,x_{M-1})}^{T}\) is an eigenvector.

Case two: \(\mu _{1}\neq \mu _{2}.\,\) In this case for any c 1 and c 2,

$$\displaystyle{x_{m} = c_{1}\mu _{1}^{m} + c_{ 2}\mu _{2}^{m},\quad 0 \leq m \leq M,}$$

should be a solution of the system (7.16). It follows from \(x_{0} = x_{M} = 0\) that

$$\displaystyle{c_{1} + c_{2} = 0,\quad c_{1}\mu _{1}^{M} + c_{ 2}\mu _{2}^{M} = 0.}$$

From these two relations we can obtain

$$\displaystyle{{\left ({ \mu _{1} \over \mu _{2}} \right )}^{M} = -{ c_{2} \over c_{1}} = 1 =\mathrm{ {e}}^{\mathrm{i}2k\pi },\quad k\mbox{ being any integer}.}$$

Consequently,

$$\displaystyle{\frac{\mu _{1}} {\mu _{2}} =\mathrm{ {e}}^{\mathrm{i}2\omega _{k} },\quad \omega _{k} = \frac{k\pi } {M},\quad k\mbox{ being any integer}.}$$

It is clear that \(k = k_{{\ast}}\) and \(k = k_{{\ast}} + M\) give the same solution. Thus we need to consider k = 0, 1, ⋯ , M − 1 only. For k = 0, we have \(\mu _{1} =\mu _{2}\). As we have pointed out, in this case we cannot find any eigenvector. For k = 1, 2, ⋯ , M − 1, we have \({ \mu _{1} \over \mu _{2}} =\mathrm{ {e}}^{\mathrm{i}2\omega _{k}}\). Combining this relation with \(\mu _{1}\mu _{2} = 1\) yields

$$\displaystyle{\mu _{1}^{(k)} =\mathrm{ {e}}^{\mathrm{i}\omega _{k} },\quad \mu _{2}^{(k)} =\mathrm{ {e}}^{-\mathrm{i}\omega _{k} }.}$$

For such a k, taking \(c_{1} = \frac{1} {2}\) and \(c_{2} = -\frac{1} {2}\), we have the following eigenvector

$$\displaystyle{ \mathbf{x}_{\omega _{k}} = \left [\begin{array}{c} \frac{1} {2}\mathrm{{e}}^{\mathrm{i}\omega _{k}} -\frac{1} {2}\mathrm{{e}}^{-\mathrm{i}\omega _{k}} \\ \frac{1} {2}\mathrm{{e}}^{\mathrm{i}2\omega _{k}} -\frac{1} {2}\mathrm{{e}}^{-\mathrm{i}2\omega _{k}} \\ \vdots\\ \vdots \\ \frac{1} {2}\mathrm{{e}}^{\mathrm{i}(M-1)\omega _{k}} -\frac{1} {2}\mathrm{{e}}^{-\mathrm{i}(M-1)\omega _{k}} \\ \end{array} \right ] = \left [\begin{array}{c} \sin \omega _{k} \\ \sin 2\omega _{k}\\ \vdots\\ \vdots \\ \sin (M - 1)\omega _{k} \end{array} \right ]. }$$
(7.19)

The corresponding eigenvalue \(\lambda _{\omega _{k}}\) satisfies system (7.15), i.e.,

$$\displaystyle\begin{array}{rcl} \lambda _{\omega _{k}}& =& \frac{\alpha \sin \,(m - 1)\omega _{k} + (1 - 2\alpha )\sin \,m\omega _{k} +\alpha \sin \, (m + 1)\omega _{k}} {\sin \,m\omega _{k}} {}\\ & =& \frac{\alpha \sin \,m\omega _{k}\cos \,\omega _{k} + (1 - 2\alpha )\sin \,m\omega _{k} +\alpha \sin \, m\omega _{k}\cos \,\omega _{k}} {\sin \,m\omega _{k}} {}\\ & =& 1 - 2\alpha + 2\alpha \cos \,\omega _{k} = 1 - {4\alpha \sin }^{2}(\omega _{ k}/2). {}\\ \end{array}$$

Here k = 1, 2, ⋯ , M − 1, i.e., we have found M − 1 eigenvalues of \(\mathbf{A}_{1}\) and their associated eigenvectors. Because \(\lambda _{\omega _{k}}\), k = 1, 2, ⋯ , M − 1, are distinct eigenvalues of the symmetric matrix \(\mathbf{A}_{1}\), the M − 1 associated eigenvectors, \(\mathbf{x}_{\omega _{k}}\), k = 1, 2, ⋯ , M − 1, are linearly independent.
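
These eigenvalues are easy to confirm numerically; the sketch below builds \(\mathbf{A}_{1}\) for a small M (values chosen by us) and compares the eigenvalues computed by NumPy with the formula \(1 - {4\alpha \sin }^{2}(\omega _{k}/2)\).

```python
import numpy as np

M, alpha = 8, 0.4
# Matrix A_1 of (7.14), of size (M-1) x (M-1)
A1 = ((1 - 2 * alpha) * np.eye(M - 1)
      + alpha * np.diag(np.ones(M - 2), 1)
      + alpha * np.diag(np.ones(M - 2), -1))

k = np.arange(1, M)
omega = k * np.pi / M
lam_formula = 1 - 4 * alpha * np.sin(omega / 2) ** 2

lam_numeric = np.linalg.eigvalsh(A1)                     # ascending order
print(np.allclose(lam_numeric, np.sort(lam_formula)))    # True
```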

As a consequence, any vector with M − 1 components can be expressed as a linear combination of the \(\mathbf{x}_{\omega _{k}}\), which means that an initial error \({\mathbf{e}}^{0}\) can be expressed as

$$\displaystyle{{\mathbf{e}}^{0} =\sum _{ k=1}^{M-1}\varepsilon _{ \omega _{k}}\mathbf{x}_{\omega _{k}}.}$$

Substituting this expression into \({\mathbf{e}}^{n+1} = \mathbf{A}_{1}{\mathbf{e}}^{n}\), we have

$$\displaystyle{{\mathbf{e}}^{1} = \mathbf{A}_{ 1}{\mathbf{e}}^{0} =\sum _{ k=1}^{M-1}\varepsilon _{ \omega _{k}}\lambda _{\omega _{k}}\mathbf{x}_{\omega _{k}}}$$

and furthermore

$$\displaystyle{{\mathbf{e}}^{n} =\sum _{ k=1}^{M-1}\varepsilon _{ \omega _{k}}\lambda _{\omega _{k}}^{n}\mathbf{x}_{\omega _{ k}}}$$

or in component form

$$\displaystyle{e_{m}^{n} =\sum _{ k=1}^{M-1}\varepsilon _{ \omega _{k}}\lambda _{\omega _{k}}^{n}\sin m\omega _{ k},\quad m = 1,2,\cdots \,,M - 1.}$$

As eigenvectors of the symmetric matrix \(\mathbf{A}_{1}\), the vectors \(\mathbf{x}_{\omega _{k}}\), k = 1, 2, ⋯ , M − 1 are orthogonal. Thus, from the expressions of \({\mathbf{e}}^{0}\) and \({\mathbf{e}}^{n}\) above, we have

$$\displaystyle{\vert \vert {\mathbf{e}}^{0}\vert \vert _{\mathrm{ L}_{2}} ={ \left ({ 1 \over M - 1} \sum _{k=1}^{M-1}\varepsilon _{ \omega _{k}}^{2}\vert \vert \mathbf{x}_{\omega _{ k}}\vert \vert _{\mathrm{L}_{2}}^{2}\right )}^{1/2}}$$

and

$$\displaystyle{\vert \vert {\mathbf{e}}^{n}\vert \vert _{\mathrm{ L}_{2}} ={ \left ({ 1 \over M - 1} \sum _{k=1}^{M-1}\varepsilon _{ \omega _{k}}^{2}\lambda _{ \omega _{k}}^{2n}\vert \vert \mathbf{x}_{\omega _{ k}}\vert \vert _{\mathrm{L}_{2}}^{2}\right )}^{1/2}.}$$

Consequently, we obtain

$$\displaystyle{\vert \vert {\mathbf{e}}^{n}\vert \vert _{\mathrm{ L}_{2}} \leq \vert \vert {\mathbf{e}}^{0}\vert \vert _{\mathrm{ L}_{2}}}$$

if all the eigenvalues of \(\mathbf{A}_{1}\) are in [ − 1, 1]. From this, the following conclusion is obtained: if

$$\displaystyle{0 \leq \alpha \leq 1/2,}$$

then we have the following inequality

$$\displaystyle{-1 \leq 1 - 4\alpha \leq \lambda _{\omega _{k}} = 1 - {4\alpha \sin }^{2}(\omega _{ k}/2) \leq 1,\quad k = 1,2,\cdots \,,M - 1,}$$

which means that the computation is stable with respect to the initial values. If α > 1 ∕ 2, then when M is large enough, some of the eigenvalues of \(\mathbf{A}_{1}\) must be less than − 1. Hence, if a component of \({\mathbf{e}}^{0}\) associated with such an eigenvalue is not zero, then the corresponding component of \({\mathbf{e}}^{n}\) will be greater in magnitude than that component of \({\mathbf{e}}^{0}\) and will go to infinity as n goes to infinity. Because the errors are random variables, the \(\varepsilon _{\omega _{ k}}\) corresponding to such an eigenvalue \(\lambda _{\omega _{k}}\) will in general not be zero. Thus, the computation is unstable. This can be summarized as follows: scheme (7.8) is stable if

$$\displaystyle{\alpha ={ a\Delta \tau \over \Delta {x}^{2}} \leq 1/2;}$$

whereas the scheme is unstable if

$$\displaystyle{\alpha ={ a\Delta \tau \over \Delta {x}^{2}} > 1/2.}$$

Stability of Implicit Finite-Difference Methods for the Heat Equation. The second method used above to analyze stability can be applied to other cases, for example, implicit finite-difference methods. For an implicit finite-difference scheme, suppose \({\mathbf{e}}^{n}\) satisfies

$$\displaystyle{\mathbf{A}{\mathbf{e}}^{n+1} = \mathbf{B}{\mathbf{e}}^{n},}$$

where A and B are two matrices, and A is invertible. Also, assume that the following relations hold:

$$\displaystyle{ \lambda _{\omega _{k}}\mathbf{A}\mathbf{x}_{\omega _{k}} = \mathbf{B}\mathbf{x}_{\omega _{k}},\quad k = 1,2,\cdots \,,M - 1, }$$
(7.20)

where \(\mathbf{x}_{\omega _{k}}\), k = 1, 2, ⋯ , M − 1 are linearly independent vectors. In this case, this method still works: if all \(\lambda _{\omega _{ k}} \in [-1,1]\), then the scheme is stable; if some \(\lambda _{\omega _{k}}\) does not belong to [ − 1, 1], then the scheme is unstable. In fact, any initial error can be expressed as

$$\displaystyle{{\mathbf{e}}^{0} =\sum _{ k=1}^{M-1}\varepsilon _{ \omega _{k}}\mathbf{x}_{\omega _{k}}}$$

and because of the set of relations (7.20), we have

$$\displaystyle{{\mathbf{e}}^{n} =\sum _{ k=1}^{M-1}\varepsilon _{ \omega _{k}}\lambda _{\omega _{k}}^{n}\mathbf{x}_{\omega _{ k}}}$$

for any n. Therefore, the scheme is stable if and only if

$$\displaystyle{\left \vert \lambda _{\omega _{k}}\right \vert \leq 1}$$

for all the ω k .

For the Crank–Nicolson scheme (7.9), A and B are given in Sect. 7.1. As pointed out above, in order to study the stability, we need to find the solution of

$$\displaystyle{\lambda \mathbf{A}\mathbf{x} = \mathbf{B}\mathbf{x}.}$$

In Problem 7, readers are asked to find the eigenvectors and the eigenvalues for more general equations. Here we only state the result. For this case, there are M − 1 linearly independent vectors given by the expression (7.19), and the corresponding eigenvalues are

$$\displaystyle\begin{array}{rcl} \lambda _{\omega _{k}}& =&{ \frac{1} {2}\alpha \sin \,(m + 1)\omega _{k} + (1-\alpha )\sin m\omega _{k} + \frac{1} {2}\alpha \sin \,(m - 1)\omega _{k} \over -\frac{1} {2}\alpha \sin \,(m + 1)\omega _{k} + (1+\alpha )\sin m\omega _{k} -\frac{1} {2}\alpha \sin \,(m - 1)\omega _{k}} {}\\ & =&{ (1-\alpha )\sin m\omega _{k} +\alpha \sin m\omega _{k}\cos \omega _{k} \over (1+\alpha )\sin m\omega _{k} -\alpha \sin m\omega _{k}\cos \omega _{k}} {}\\ & =&{ 1 - {2\alpha \sin }^{2}{ \omega _{k} \over 2} \over 1 + {2\alpha \sin }^{2}{ \omega _{k} \over 2} } ,\quad k = 1,2,\cdots \,,M - 1, {}\\ \end{array}$$

where \(\omega _{k} = k\pi /M\). Because \(\left \vert \lambda _{\omega _{k}}\right \vert \leq 1\) for any \(\omega _{k}\), the difference scheme (7.9) is stable in the \(\mathrm{L}_{2}\) norm.
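
Since \({2\alpha \sin }^{2}(\omega _{k}/2) > 0\) for every α > 0 and every \(\omega _{k}\), each \(\lambda _{\omega _{k}}\) lies strictly between −1 and 1, whatever the step sizes; a quick numerical check of this bound (a sketch, with an arbitrarily chosen α):

```python
import numpy as np

alpha = 10.0                          # any alpha = a*dtau/dx**2 > 0
omega = np.linspace(0.0, np.pi, 1000)
lam = (1 - 2 * alpha * np.sin(omega / 2) ** 2) / \
      (1 + 2 * alpha * np.sin(omega / 2) ** 2)
print(np.max(np.abs(lam)) <= 1.0)     # True for every alpha > 0
```

This is why no restriction on Δτ is needed for the Crank–Nicolson scheme.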

Stability for Periodic Problems. In schemes (7.8) and (7.9), the values are given at both boundaries, and during stability analysis, we assume that there is no error at the boundaries. It is clear that this is not always the case. Consider problems satisfying periodic conditions and assume \(u_{m}^{n} = u_{m+M}^{n}\). In this case, we only need to find \(u_{m}^{n}\), m = 0, 1, ⋯ , M − 1 for each time level. If the coefficients of the problem are constant, then we can analyze the stability in a similar way. Let us further assume that the solution satisfies the system:

$$\displaystyle{a_{1}u_{m+1}^{n+1}+a_{ 0}u_{m}^{n+1}+a_{ -1}u_{m-1}^{n+1} = b_{ 1}u_{m+1}^{n}+b_{ 0}u_{m}^{n}+b_{ -1}u_{m-1}^{n},\;m = 0,1,\cdots \,,M-1.}$$

If \(e_{m}^{n}\) is the error of \(u_{m}^{n}\), then \(e_{m}^{n}\) satisfies the same system. Thus, the system for \(e_{m}^{n}\) can be written as

$$\displaystyle{\mathbf{A}_{2}{\mathbf{e}}^{n+1} = \mathbf{B}_{ 2}{\mathbf{e}}^{n},}$$

where we have used the conditions

$$\displaystyle{e_{-1}^{n} = e_{ M-1}^{n},\quad e_{ M}^{n} = e_{ 0}^{n}}$$

and adopted the following notation:

$$\displaystyle{\mathbf{A}_{2} = \left [\begin{array}{ccccc} a_{0} & a_{1} & 0 & \cdots &a_{-1} \\ a_{-1} & a_{0} & a_{1} & \ddots & \vdots \\ 0 &a_{-1} & a_{0} & \ddots & 0\\ \vdots & \ddots & \ddots & \ddots & a_{ 1} \\ a_{1} & \cdots & 0 &a_{-1} & a_{0} \end{array} \right ],{ \mathbf{e}}^{n} = \left [\begin{array}{c} e_{0}^{n} \\ e_{1}^{n}\\ \vdots\\ \vdots \\ e_{M-1}^{n} \end{array} \right ]}$$

and

$$\displaystyle{\mathbf{B}_{2} = \left [\begin{array}{ccccc} b_{0} & b_{1} & 0 & \cdots &b_{-1} \\ b_{-1} & b_{0} & b_{1} & \ddots & \vdots \\ 0 &b_{-1} & b_{0} & \ddots & 0 \\ \vdots & \ddots & \ddots & \ddots & b_{1} \\ b_{1} & \cdots & 0 &b_{-1} & b_{0} \end{array} \right ].}$$

In order to study stability, we need to find the solution of the equation

$$\displaystyle{\lambda \mathbf{A}_{2}\mathbf{x} = \mathbf{B}_{2}\mathbf{x}.}$$

This is left for readers to do as Problem 8. The result is as follows. For this equation, the eigenvectors are

$$\displaystyle{\mathbf{x}_{\theta _{k}} = \left [\begin{array}{c} 1 \\ \mathrm{{e}}^{\mathrm{i}\theta _{k}}\\ \vdots\\ \vdots \\ \mathrm{{e}}^{\mathrm{i}(M-1)\theta _{k}} \end{array} \right ],\quad k = 0,1,\cdots \,,M-1,}$$

where \(\theta _{k} = 2k\pi /M\) and the eigenvalues are

$$\displaystyle{\lambda _{\theta _{k}} ={ b_{1}\mathrm{{e}}^{\mathrm{i}\theta _{k}} + b_{ 0} + b_{-1}\mathrm{{e}}^{-\mathrm{i}\theta _{k}} \over a_{1}\mathrm{{e}}^{\mathrm{i}\theta _{k}} + a_{0} + a_{-1}\mathrm{{e}}^{-\mathrm{i}\theta _{k}}} ,\quad k = 0,1,\cdots \,,M - 1.}$$

By using the relations \(\mathrm{{e}}^{-\mathrm{i}\theta _{k}} =\mathrm{ {e}}^{\mathrm{i}(M-1)\theta _{k}}\) and \(\mathrm{{e}}^{\mathrm{i}M\theta _{k}} = 1\), this result can be verified by a straightforward calculation. If \(\vert \lambda _{\theta _{k}}\vert \leq 1,\quad k = 0,1,\cdots \,,M - 1\), then the method is stable. If \(\vert \lambda _{\theta _{k}}\vert > 1\) for some k, then the method is unstable. Because M can go to infinity, \(\theta _{k}\) can in fact be any number in the interval [0, 2π]. Therefore, if for any θ ∈ [0, 2π],

$$\displaystyle{ \vert \lambda _{\theta }\vert = \left \vert { b_{1}\mathrm{{e}}^{\mathrm{i}\theta } + b_{0} + b_{-1}\mathrm{{e}}^{-\mathrm{i}\theta } \over a_{1}\mathrm{{e}}^{\mathrm{i}\theta } + a_{0} + a_{-1}\mathrm{{e}}^{-\mathrm{i}\theta }} \right \vert \leq 1, }$$
(7.21)

then the scheme is stable. Otherwise, the method is unstable. Such a method of analyzing stability is usually called the von Neumann method, and \(\lambda _{\theta }\) is called the amplification factor. This method gives a complete stability analysis for periodic initial value problems with constant coefficients. For more general cases, this method can be performed in the following way. Assume

$$\displaystyle{ e_{m}^{n} =\lambda _{ \theta }^{n}\mathrm{{e}}^{\mathrm{i}m\theta }, }$$
(7.22)

where θ can be any real number in the interval [0, 2π]. Substituting this expression into the finite-difference equation, we can find \(\lambda _{\theta }\). If \(\vert \lambda _{\theta }\vert \leq 1\) for all θ, then the scheme is stable; if \(\vert \lambda _{\theta }\vert > 1\) for some θ, then the scheme is unstable. For more about this method, see the book [67] by Richtmyer and Morton and many other books.
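
In practice the criterion (7.21) can be checked by sampling the amplification factor over θ; a sketch for a three-point scheme with constant coefficients \(a_{1},a_{0},a_{-1},b_{1},b_{0},b_{-1}\) follows (the function name is ours), applied to the explicit heat scheme (7.8) as an example.

```python
import numpy as np

def von_neumann_stable(a1, a0, am1, b1, b0, bm1, n_theta=1000):
    """Check criterion (7.21): |lambda_theta| <= 1 for theta in [0, 2*pi]."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta)
    num = b1 * np.exp(1j * theta) + b0 + bm1 * np.exp(-1j * theta)
    den = a1 * np.exp(1j * theta) + a0 + am1 * np.exp(-1j * theta)
    return bool(np.all(np.abs(num / den) <= 1.0 + 1e-12))

# Explicit heat scheme (7.8): a1 = am1 = 0, a0 = 1,
# b1 = bm1 = alpha, b0 = 1 - 2*alpha.
for alpha in (0.5, 0.6):
    print(alpha, von_neumann_stable(0.0, 1.0, 0.0,
                                    alpha, 1 - 2 * alpha, alpha))
# prints True for alpha = 0.5 and False for alpha = 0.6,
# in agreement with the condition alpha <= 1/2 found above.
```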

Stability Analysis in Practice. In practice, most problems have variable coefficients. Therefore, the von Neumann method does not give a complete stability analysis. However, it is still very useful. The von Neumann method can be applied in practice in the following way.

Consider the following scheme with variable coefficients:

$$\displaystyle\begin{array}{rcl} & & a_{1,m}^{n}u_{ m+1}^{n+1} + a_{ 0,m}^{n}u_{ m}^{n+1} + a_{ -1,m}^{n}u_{ m-1}^{n+1} \\ & =& b_{1,m}^{n}u_{ m+1}^{n} + b_{ 0,m}^{n}u_{ m}^{n} + b_{ -1,m}^{n}u_{ m-1}^{n}, {}\end{array}$$
(7.23)

where for simplicity, we assume that only three points in the x direction are involved. If more points are involved, the procedure is still the same. Suppose

$$\displaystyle{\vert f_{m+1}^{n} - f_{ m}^{n}\vert < c\Delta x,\quad \quad \vert f_{ m+1}^{n} - 2f_{ m}^{n} + f_{ m-1}^{n}\vert < c\Delta {x}^{2},}$$

and

$$\displaystyle{\vert f_{m}^{n+1} - f_{ m}^{n}\vert < c\Delta \tau }$$

for \(f = a_{1},\,a_{0},\,a_{-1},\,b_{1},\,b_{0}\), and \(b_{-1}\). Assume that \(e_{m}^{n}\) has the form (7.22). Substituting this expression into the finite-difference equation (7.23) yields

$$\displaystyle{\lambda _{\theta }(x_{m}{,\tau }^{n}) ={ b_{1,m}^{n}\mathrm{{e}}^{\mathrm{i}\left (m+1\right )\theta } + b_{ 0,m}^{n}\mathrm{{e}}^{\mathrm{i}m\theta } + b_{ -1,m}^{n}\mathrm{{e}}^{\mathrm{i}\left (m-1\right )\theta } \over a_{1,m}^{n}\mathrm{{e}}^{\mathrm{i}\left (m+1\right )\theta } + a_{0,m}^{n}\mathrm{{e}}^{\mathrm{i}m\theta } + a_{-1,m}^{n}\mathrm{{e}}^{\mathrm{i}\left (m-1\right )\theta }} .}$$

If for the amplification factor, we have

$$\displaystyle{\vert \lambda _{\theta }(x_{m}{,\tau }^{n})\vert \leq 1}$$

for every point and the treatment of boundary conditions is reasonable, then we can expect the scheme to be stable. Clearly, the condition \(\vert \lambda _{\theta }(x_{m}{,\tau }^{n})\vert \leq 1\) is equivalent to

$$\displaystyle{ \vert b_{1,m}^{n}\mathrm{{e}}^{\mathrm{i}\theta } + b_{ 0,m}^{n} + b_{ -1,m}^{n}\mathrm{{e}}^{-\mathrm{i}\theta }{\vert }^{2} -\vert a_{ 1,m}^{n}\mathrm{{e}}^{\mathrm{i}\theta } + a_{ 0,m}^{n} + a_{ -1,m}^{n}\mathrm{{e}}^{-\mathrm{i}\theta }{\vert }^{2} \leq 0 }$$
(7.24)

if \(\vert a_{1,m}^{n}\mathrm{{e}}^{\mathrm{i}\theta } + a_{0,m}^{n} + a_{-1,m}^{n}\mathrm{{e}}^{-\mathrm{i}\theta }{\vert }^{2} \geq \tilde{ c} > 0,\;\tilde{c}\) being a constant. The latter is easier to use in practice than the former.

Let us analyze the stability of scheme (7.6) in this way. This scheme has the form (7.23) with

$$\displaystyle\begin{array}{rcl} a_{1,m}^{n}& =& -\left ({ a_{m}^{n+1/2} \over 2\Delta {x}^{2}} +{ b_{m}^{n+1/2} \over 4\Delta x} \right )\Delta \tau , {}\\ a_{0,m}^{n}& =& 1 +{ a_{m}^{n+1/2} \over \Delta {x}^{2}} \Delta \tau , {}\\ a_{-1,m}^{n}& =& -\left ({ a_{m}^{n+1/2} \over 2\Delta {x}^{2}} -{ b_{m}^{n+1/2} \over 4\Delta x} \right )\Delta \tau , {}\\ b_{1,m}^{n}& =& -a_{ 1,m}^{n}, {}\\ b_{0,m}^{n}& =& 2 - a_{ 0,m}^{n}, {}\\ b_{-1,m}^{n}& =& -a_{ -1,m}^{n}. {}\\ \end{array}$$

Here, we assume

$$\displaystyle{g_{m}^{n+1/2} = c_{ m}^{n+1/2} = 0}$$

because we analyze the stability with respect to initial values only, and ignoring a term of O(Δτ) in the coefficients has no effect on the conclusion about stability. The left-hand side of the condition (7.24) for this scheme is

$$\displaystyle\begin{array}{rcl} & & \left [-a_{1,m}^{n}\mathrm{{e}}^{\mathrm{i}\theta } + (2 - a_{ 0,m}^{n}) - a_{ -1,m}^{n}\mathrm{{e}}^{-\mathrm{i}\theta }\right ]\left [-a_{ 1,m}^{n}\mathrm{{e}}^{-\mathrm{i}\theta } + (2 - a_{ 0,m}^{n}) - a_{ -1,m}^{n}\mathrm{{e}}^{\mathrm{i}\theta }\right ] {}\\ & & -(a_{1,m}^{n}\mathrm{{e}}^{\mathrm{i}\theta } + a_{ 0,m}^{n} + a_{ -1,m}^{n}\mathrm{{e}}^{-\mathrm{i}\theta })(a_{ 1,m}^{n}\mathrm{{e}}^{-\mathrm{i}\theta } + a_{ 0,m}^{n} + a_{ -1,m}^{n}\mathrm{{e}}^{\mathrm{i}\theta }) {}\\ & =& {(a_{1,m}^{n})}^{2} + {(a_{ 0,m}^{n} - 2)}^{2} + {(a_{ -1,m}^{n})}^{2} + 2a_{ 1,m}^{n}(a_{ 0,m}^{n} - 2)\cos \theta {}\\ & & +2(a_{0,m}^{n} - 2)a_{ -1,m}^{n}\cos \theta + 2a_{ 1,m}^{n}a_{ -1,m}^{n}\cos 2\theta {}\\ & & -\left [{(a_{1,m}^{n})}^{2} + {(a_{ 0,m}^{n})}^{2} + {(a_{ -1,m}^{n})}^{2} + 2a_{ 1,m}^{n}a_{ 0,m}^{n}\cos \theta + 2a_{ 0,m}^{n}a_{ -1,m}^{n}\cos \theta \right . {}\\ & & \left .+2a_{1,m}^{n}a_{ -1,m}^{n}\cos 2\theta \right ] {}\\ & =& {(a_{0,m}^{n} - 2)}^{2} - {(a_{ 0,m}^{n})}^{2} - 4a_{ 1,m}^{n}\cos \theta - 4a_{ -1,m}^{n}\cos \theta {}\\ & =& -{ 4a_{m}^{n+1/2} \over \Delta {x}^{2}} \Delta \tau +{ 4a_{m}^{n+1/2} \over \Delta {x}^{2}} \Delta \tau \cos \theta {}\\ & =&{ 4a_{m}^{n+1/2} \over \Delta {x}^{2}} \Delta \tau (\cos \theta -1). {}\\ \end{array}$$

This expression is always nonpositive. Therefore, the condition (7.24) is satisfied at every grid point. For scheme (7.6), there is no other boundary condition. Consequently, the scheme is expected to be stable.

So far, we have said that a scheme is stable with respect to initial values if the error of the solution caused by the error in the initial condition is less than or equal to the error in the initial condition. More generally, however, we say that a scheme is stable with respect to initial values if the error of the solution caused by the error in the initial condition is less than c times the error in the initial condition, where c is a constant independent of Δx and Δτ but allowed to be greater than one. That is, the error is allowed to increase by a certain factor, but the factor must be bounded and independent of Δx and Δτ. Therefore, we can take

$$\displaystyle{ \vert \lambda _{\theta }(x_{m}{,\tau }^{n})\vert \leq 1 +\bar{ c}\Delta \tau }$$
(7.25)

as a criterion for stability. In fact, if the inequality (7.25) holds for any θ, then usually we can have

$$\displaystyle{\vert \vert {\mathbf{e}}^{n}\vert \vert _{\mathrm{ L}_{2}} \leq (1 +\bar{ c}\Delta \tau )\vert \vert {\mathbf{e}}^{n-1}\vert \vert _{\mathrm{ L}_{2}} \leq {(1 +\bar{ c}\Delta \tau )}^{n}\vert \vert {\mathbf{e}}^{0}\vert \vert _{\mathrm{ L}_{2}} \leq \mathrm{ {e}}^{\bar{c}nT/N}\vert \vert {\mathbf{e}}^{0}\vert \vert _{\mathrm{ L}_{2}}}$$

for any n ≤ N, so the error increases at most by a factor \(\mathrm{{e}}^{\bar{c}T}\). Here we have used the relation \({(1 +\bar{ c}\Delta \tau )}^{{ 1 \over \bar{c}\Delta \tau } } \leq \mathrm{ e}\) for any positive Δτ.

Now let us study the stability of the difference scheme (7.5) by using the criterion (7.25). We consider the stability with respect to initial values only, so we can set \(g_{m}^{n} = 0\). In this case, the scheme has the form (7.23) with \(a_{1,m}^{n} = 0\), \(a_{0,m}^{n} = 1\), \(a_{-1,m}^{n} = 0\) and

$$\displaystyle\begin{array}{rcl} b_{1,m}^{n}& =&{ a_{m}^{n}\Delta \tau \over \Delta {x}^{2}} +{ b_{m}^{n}\Delta \tau \over 2\Delta x} , {}\\ b_{0,m}^{n}& =& 1 - 2{ a_{m}^{n}\Delta \tau \over \Delta {x}^{2}} + c_{m}^{n}\Delta \tau , {}\\ b_{-1,m}^{n}& =&{ a_{m}^{n}\Delta \tau \over \Delta {x}^{2}} -{ b_{m}^{n}\Delta \tau \over 2\Delta x} . {}\\ \end{array}$$

Therefore,

$$\displaystyle\begin{array}{rcl} \lambda _{\theta }(x_{m}{,\tau }^{n})& =& b_{ 1,m}^{n}\mathrm{{e}}^{\mathrm{i}\theta } + b_{ 0,m}^{n} + b_{ -1,m}^{n}\mathrm{{e}}^{-\mathrm{i}\theta } {}\\ & =& b_{0,m}^{n} + \left (b_{ 1,m}^{n} + b_{ -1,m}^{n}\right )\cos \theta + \mathrm{i}\left (b_{ 1,m}^{n} - b_{ -1,m}^{n}\right )\sin \theta {}\\ & =& 1 - 2{ a_{m}^{n}\Delta \tau \over \Delta {x}^{2}} + c_{m}^{n}\Delta \tau + 2{ a_{m}^{n}\Delta \tau \over \Delta {x}^{2}} \cos \theta + \mathrm{i}{ b_{m}^{n}\Delta \tau \over \Delta x} \sin \theta {}\\ & =& 1 - 4{{ a_{m}^{n}\Delta \tau \over \Delta {x}^{2}} \sin }^{2}{ \theta \over 2} + c_{m}^{n}\Delta \tau + \mathrm{i}{ b_{m}^{n}\Delta \tau \over \Delta x} \sin \theta . {}\\ \end{array}$$

If

$$\displaystyle{ \max { a_{m}^{n}\Delta \tau \over \Delta {x}^{2}} \leq { 1 \over 2} \quad \mbox{ or }\quad { \Delta \tau \over \Delta {x}^{2}} \leq { 1 \over 2\max a_{m}^{n}} , }$$
(7.26)

then

$$\displaystyle\begin{array}{rcl} \vert \lambda _{\theta }(x_{m}{,\tau }^{n}){\vert }^{2}& \leq &{ \left (1 + \left \vert c_{ m}^{n}\right \vert \Delta \tau \right )}^{2} +{ \left ({ b_{m}^{n}\Delta \tau \over \Delta x} \right )}^{2} {}\\ & \leq &{ \left (1 + \left \vert c_{m}^{n}\right \vert \Delta \tau \right )}^{2} +{ { \left (b_{m}^{n}\right )}^{2} \over 2\max a_{m}^{n}} \Delta \tau {}\\ & \leq &{ \left (1 + \left \vert c_{m}^{n}\right \vert \Delta \tau \right )}^{2} + 2\left (1 + \left \vert c_{ m}^{n}\right \vert \Delta \tau \right ){ {\left (b_{m}^{n}\right )}^{2} \over 4\max a_{m}^{n}} \Delta \tau {}\\ & & +{\left [{ {\left (b_{m}^{n}\right )}^{2} \over 4\max a_{m}^{n}} \Delta \tau \right ]}^{2} {}\\ & =&{ \left [1 + \left \vert c_{m}^{n}\right \vert \Delta \tau +{ { \left (b_{m}^{n}\right )}^{2} \over 4\max a_{m}^{n}} \Delta \tau \right ]}^{2}. {}\\ \end{array}$$

Thus, letting \(\bar{c} = \left \vert c_{m}^{n}\right \vert +{ \left (b_{m}^{n}\right )}^{2}/(4\max a_{m}^{n})\), we have

$$\displaystyle{\vert \lambda _{\theta }(x_{m}{,\tau }^{n})\vert \leq 1 +\bar{ c}\Delta \tau }$$

and we can expect this scheme to be stable if inequality (7.26) holds.

In fact, the stability of scheme (7.6) with variable coefficients has been proved rigorously in the paper [79] by Sun, Yan, and Zhu. By a similar method, the stability of scheme (7.5) with variable coefficients can also be shown when inequality (7.26) holds. If readers are interested in such a subject, please see that paper and the book [97] by Zhu, Zhong, Chen, and Zhang.

2.2 Convergence

If a scheme is stable with respect to initial values, and the truncation error of the scheme goes to zero as Δx and Δτ tend to zero, then the approximate solution will usually go to the exact solution. Such a result is usually referred to as the Lax equivalence theorem (see the book [67] by Richtmyer and Morton). We are not going to prove this conclusion in general, but we will explain it intuitively by proving it for special cases.

Consider the explicit finite-difference method (7.8). We know that the exact solution u(x, τ) satisfies the equation

$$\displaystyle\begin{array}{rcl} & & u(x_{m}{,\tau }^{n+1}) {}\\ & =& \alpha u(x_{m+1}{,\tau }^{n}) + (1 - 2\alpha )u(x_{ m}{,\tau }^{n}) +\alpha u(x_{ m-1}{,\tau }^{n}) + \Delta \tau R_{ m}^{n}(\Delta {x}^{2},\Delta \tau ), {}\\ & & m = 1,2,\cdots \,,M - 1,\quad n = 0,1,\cdots \,,N - 1, {}\\ \end{array}$$

where

$$\displaystyle{R_{m}^{n}(\Delta {x}^{2},\Delta \tau ) ={ \Delta \tau \over 2} { {\partial }^{2}u \over {\partial \tau }^{2}} (x_{m},\eta ) - a{ \Delta {x}^{2} \over 12} { {\partial }^{4}u \over \partial {x}^{4}} (\xi {,\tau }^{n}).}$$

Let \(e_{m}^{n}\) be the error of the approximate solution on the point \((x_{m}{,\tau }^{n})\), that is,

$$\displaystyle{e_{m}^{n} = u(x_{ m}{,\tau }^{n}) - u_{ m}^{n},\quad m = 0,1,\cdots \,,M,\,\,n = 0,1,\cdots \,,N.}$$

Then, \(e_{m}^{n}\) is the solution of the problem

$$\displaystyle{\left \{\begin{array}{ll} e_{m}^{n+1} =\alpha e_{m+1}^{n} + (1 - 2\alpha )e_{m}^{n} +\alpha e_{m-1}^{n}+&\Delta \tau R_{m}^{n}(\Delta {x}^{2},\Delta \tau ), \\ m = 1,2,\cdots \,,M - 1, &n = 0,1,\cdots \,,N - 1, \\ e_{0}^{n+1} = 0, &n = 0,1,\cdots \,,N - 1, \\ e_{M}^{n+1} = 0, &n = 0,1,\cdots \,,N - 1, \\ e_{m}^{0} = 0, &m = 0,1,\cdots \,,M. \end{array} \right .}$$

Because \(e_{0}^{n} = e_{M}^{n} = 0\) for any n, the system can be written as

$$\displaystyle{\left \{\begin{array}{ll} {\mathbf{e}}^{n+1} = \mathbf{A}_{ 1}{\mathbf{e}}^{n} + \Delta \tau {\mathbf{R}}^{n}(\Delta {x}^{2},\Delta \tau ),\quad n = 0,1,\cdots \,,N - 1, \\ {\mathbf{e}}^{0} = 0, \end{array} \right .}$$

where \({\mathbf{e}}^{n}\) is a vector with the M − 1 components \(e_{m}^{n}\), m = 1, 2, ⋯ , M − 1, and

$$\displaystyle{{\mathbf{R}}^{n}(\Delta {x}^{2},\Delta \tau ) = \left [\begin{array}{c} R_{1}^{n}(\Delta {x}^{2},\Delta \tau ) \\ R_{2}^{n}(\Delta {x}^{2},\Delta \tau )\\ \vdots \\ R_{M-1}^{n}(\Delta {x}^{2},\Delta \tau ) \end{array} \right ].}$$

Actually, \({\mathbf{e}}^{n}\) can be written as \(\sum _{k=1}^{n}\mathbf{e}_{(k)}^{n}\). Here, for k = n,

$$\displaystyle{\mathbf{e}_{(n)}^{n} = \Delta \tau {\mathbf{R}}^{n-1}(\Delta {x}^{2},\Delta \tau )}$$

and for k = 1, 2, ⋯ , n − 1, \(\mathbf{e}_{(k)}^{n}\) is the solution of the following problem

$$\displaystyle{\left \{\begin{array}{ll} \mathbf{e}_{(k)}^{\bar{n}+1} = \mathbf{A}_{1}\mathbf{e}_{(k)}^{\bar{n}},\quad \bar{n} = k,k + 1,\cdots \,,n - 1, \\ \mathbf{e}_{(k)}^{k} = \Delta \tau {\mathbf{R}}^{k-1}(\Delta {x}^{2},\Delta \tau ). \end{array} \right .}$$

Because the error does not increase for the scheme (7.8) if α ≤ 1 ∕ 2, \(\vert \vert {\mathbf{e}}^{n}\vert \vert _{\mathrm{L}_{2}}\) is not greater than \(\sum _{k=1}^{n}\Delta \tau \vert \vert {\mathbf{R}}^{k-1}(\Delta {x}^{2},\Delta \tau )\vert \vert _{\mathrm{L}_{2}}\). Noticing that n ≤ T ∕ Δτ, we see that \(e_{m}^{n}\) goes to zero as \(R_{m}^{k-1}(\Delta {x}^{2},\Delta \tau )\) tends to zero for k = 1, 2, ⋯ , n and m = 1, 2, ⋯ , M − 1. Hence, the approximate solution converges to the exact solution as Δx and Δτ tend to zero with α kept not greater than 1 ∕ 2, and \(\vert \vert {\mathbf{e}}^{n}\vert \vert _{\mathrm{L}_{2}}\) is of order \(O(\Delta {x}^{2},\Delta \tau )\). Usually, \(\alpha = a\Delta \tau /\Delta {x}^{2}\) stays constant as Δx and Δτ tend to zero. Therefore, \(\vert \vert {\mathbf{e}}^{n}\vert \vert _{\mathrm{L}_{2}} = O(\Delta \tau )\), and we say that the scheme (7.8) converges with order Δτ.
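
This behavior can be observed numerically. The sketch below applies scheme (7.8) to the test problem a = 1, \(f(x) =\sin \pi x\), \(f_{l} = f_{u} = 0\) on [0, 1], whose exact solution is \(u =\mathrm{ {e}}^{{-\pi }^{2}\tau }\sin \pi x\) (a standard test example chosen by us, not taken from the text). Halving Δx with α fixed should divide the error by about 4, consistent with \(O(\Delta {x}^{2}) = O(\Delta \tau )\).

```python
import numpy as np

def explicit_heat_error(M, alpha, T=0.1):
    """Scheme (7.8) for u_tau = u_xx, exact solution e^{-pi^2 tau} sin(pi x)."""
    dx = 1.0 / M
    dtau = alpha * dx**2            # a = 1, so alpha = dtau / dx^2
    N = int(round(T / dtau))
    x = np.linspace(0.0, 1.0, M + 1)
    u = np.sin(np.pi * x)
    for _ in range(N):
        u[1:-1] = (alpha * u[2:] + (1 - 2 * alpha) * u[1:-1]
                   + alpha * u[:-2])
        u[0] = u[-1] = 0.0          # boundary values f_l = f_u = 0
    exact = np.exp(-np.pi**2 * N * dtau) * np.sin(np.pi * x)
    return np.max(np.abs(u - exact))

for M in (20, 40, 80, 160):
    print(M, explicit_heat_error(M, alpha=0.4))  # errors shrink ~4x per halving
```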

For implicit schemes, the situation is similar. Consider the Crank–Nicolson scheme (7.9). The exact solution satisfies

$$\displaystyle{\begin{array}{ll} &{ u(x_{m}{,\tau }^{n+1}) - u(x_{m}{,\tau }^{n}) \over \Delta \tau } \\ =&{ a \over 2} \left [{ u(x_{m+1}{,\tau }^{n+1}) - 2u(x_{m}{,\tau }^{n+1}) + u(x_{m-1}{,\tau }^{n+1}) \over \Delta {x}^{2}} \right . \\ &\left .+{ u(x_{m+1}{,\tau }^{n}) - 2u(x_{m}{,\tau }^{n}) + u(x_{m-1}{,\tau }^{n}) \over \Delta {x}^{2}} \right ] + R_{m}^{n}(\Delta {x}^{2},{\Delta \tau }^{2}), \\ &m = 1,2,\cdots \,,M - 1,\end{array} }$$

where

$$\displaystyle{\begin{array}{ll} &R_{m}^{n}(\Delta {x}^{2},{\Delta \tau }^{2}) \\ =&{\Delta \tau }^{2}\left [{ 1 \over 24} { {\partial }^{3}u \over {\partial \tau }^{3}} (x_{m}{,\eta }^{(1)}) -{ a \over 8} { {\partial }^{4}u \over \partial {x{}^{2}\tau }^{2}} (x_{m}{,\eta }^{(2)})\right ] -{ \Delta {x}^{2}a \over 12} { {\partial }^{4}u \over \partial {x}^{4}} (\xi {,\eta }^{(3)}).\end{array} }$$

In this case, the error satisfies

$$\displaystyle{\mathbf{A}{\mathbf{e}}^{n+1} = \mathbf{B}{\mathbf{e}}^{n} + \Delta \tau {\mathbf{R}}^{n}(\Delta {x}^{2},{\Delta \tau }^{2}),}$$

where \({\mathbf{e}}^{n}\) and \({\mathbf{R}}^{n}(\Delta {x}^{2},{\Delta \tau }^{2})\) are two (M − 1)-dimensional vectors with \(e_{m}^{n}\) and \(R_{m}^{n}(\Delta {x}^{2},{\Delta \tau }^{2})\) as components, respectively, and A and B are given in the difference scheme (7.10). Just as in the case of the scheme (7.8), \({\mathbf{e}}^{n}\) can also be written as \(\sum _{k=1}^{n}\mathbf{e}_{(k)}^{n}\). Here, for k = n,

$$\displaystyle{\mathbf{e}_{(n)}^{n} = \Delta \tau {\mathbf{A}}^{-1}{\mathbf{R}}^{n-1}(\Delta {x}^{2},{\Delta \tau }^{2})}$$

and for k = 1, 2, ⋯ , n − 1, \(\mathbf{e}_{(k)}^{n}\) is the solution of the following problem:

$$\displaystyle{\left \{\begin{array}{ll} \mathbf{A}\mathbf{e}_{(k)}^{\bar{n}+1} = \mathbf{B}\mathbf{e}_{(k)}^{\bar{n}},\quad \bar{n} = k,k + 1,\cdots \,,n - 1, \\ \mathbf{e}_{(k)}^{k} = \Delta \tau {\mathbf{A}}^{-1}{\mathbf{R}}^{k-1}(\Delta {x}^{2},{\Delta \tau }^{2}). \end{array} \right .}$$

The Crank–Nicolson scheme is stable with respect to the initial value. Thus, \(\vert \vert {\mathbf{e}}^{n}\vert \vert _{\mathrm{L}_{2}}\) does not exceed \(\sum _{k=1}^{n}\Delta \tau \vert \vert {\mathbf{A}}^{-1}{\mathbf{R}}^{k-1}(\Delta {x}^{2},{\Delta \tau }^{2})\vert \vert _{\mathrm{L}_{2}}\). Because

$$\displaystyle{\mathbf{A}\mathbf{x}_{\omega _{k}} = \left (1 + {2\alpha \sin }^{2}{ \omega _{k} \over 2} \right )\mathbf{x}_{\omega _{k}},}$$

we see that \(1 + {2\alpha \sin }^{2}(\omega _{k}/2)\) is an eigenvalue of A, and hence \(1/[1 + {2\alpha \sin }^{2}(\omega _{k}/2)]\) is an eigenvalue of A  − 1. Because every eigenvalue of A is at least one, A  − 1 always exists and its norm is bounded, no matter how Δx and Δτ are chosen. Consequently, \(\vert \vert {\mathbf{e}}^{n}\vert \vert _{\mathrm{L}_{2}}\) goes to zero as Δx and Δτ tend to zero, and we say that this scheme is convergent. Furthermore, because \(\vert \vert {\mathbf{e}}^{n}\vert \vert _{\mathrm{L}_{2}}\) is of the order O(Δx 2, Δτ 2), we say that the scheme has second-order convergence or possesses second-order accuracy.
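
This eigenvalue relation is easy to check numerically. The short Python sketch below assumes that A has the form I + (α ∕ 2)T with T the (M − 1) × (M − 1) tridiagonal matrix with 2 on the diagonal and −1 next to it, and ω_k = kπ ∕ M; the precise form of A is the one given in the scheme (7.10), so this is an illustration only.

    import numpy as np

    # Assumed form: A = I + (alpha/2)*T with T = tridiag(-1, 2, -1) and omega_k = k*pi/M.
    M, alpha = 10, 0.8
    T = 2.0 * np.eye(M - 1) - np.eye(M - 1, k=1) - np.eye(M - 1, k=-1)
    A = np.eye(M - 1) + 0.5 * alpha * T

    omega = np.arange(1, M) * np.pi / M
    formula = 1.0 + 2.0 * alpha * np.sin(omega / 2.0) ** 2     # claimed eigenvalues of A
    print(np.allclose(np.sort(np.linalg.eigvalsh(A)), np.sort(formula)))   # True

    # every eigenvalue of A is at least one, so the eigenvalues of A^{-1} lie in (0, 1]
    # and the L2 norm of A^{-1} does not exceed one for any alpha > 0
    print(np.linalg.norm(np.linalg.inv(A), 2) <= 1.0 + 1e-12)              # True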

For schemes with variable coefficients, convergence can likewise be obtained from stability with respect to the initial values together with consistency of the scheme. Here, we say that a scheme is consistent with the partial differential equation if the truncation error of the scheme goes to zero as Δx and Δτ tend to zero. Some results on this issue are given in the paper [79] by Sun, Yan, and Zhu.

3 Extrapolation of Numerical Solutions

When a partial differential equation problem is discretized, a truncation error is introduced, which causes the numerical solution to have an error. What is the relation between the truncation error and the error of the numerical solution? Intuitively, the answer should be that a term of \(O(\Delta {x}^{k_{1}},{\Delta \tau }^{k_{2}})\) in the truncation error causes an error of \(O(\Delta {x}^{k_{1}},{\Delta \tau }^{k_{2}})\) in the numerical solution. Here \(O(\Delta {x}^{k_{1}},{\Delta \tau }^{k_{2}})\) denotes a term bounded by \(C\left (\Delta {x}^{k_{1}} + {\Delta \tau }^{k_{2}}\right )\), where C is a constant. Let us illustrate this fact.

Consider the following problem

$$\displaystyle{\left \{\begin{array}{ll} { \partial u \over \partial \tau } = a(x,\tau ){ {\partial }^{2}u \over \partial {x}^{2}} & + b(x,\tau ){ \partial u \over \partial x} + c(x,\tau )u + g(x,\tau ), \\ &0 \leq x \leq 1,\quad 0 \leq \tau \leq T, \\ u(x,0) = f(x), &0 \leq x \leq 1, \end{array} \right .}$$

where \(b(0,\tau ) = a(0,\tau ) = a_{x}(0,\tau ) = b(1,\tau ) = a(1,\tau ) = a_{x}(1,\tau ) = 0\) and a(x, τ) ≥ 0. This problem can be approximated by

$$\displaystyle{ \left \{\begin{array}{ll} \delta _{\tau }u_{m}^{n+1/2} = a_{m}^{n+1/2}\delta _{x}^{2}u_{m}^{n+1/2}+&b_{m}^{n+1/2}\delta _{0x}u_{m}^{n+1/2} + c_{m}^{n+1/2}u_{m}^{n+1/2} + g_{m}^{n+1/2}, \\ &\qquad \qquad 0 \leq m \leq M,\quad 0 \leq n \leq N - 1, \\ u_{m}^{0} = f(x_{m}), &\qquad \qquad 0 \leq m \leq M. \end{array} \right . }$$
(7.27)

Here,

$$\displaystyle\begin{array}{rcl} \delta _{\tau }u_{m}^{n+1/2}& =&{ u_{m}^{n+1} - u_{ m}^{n} \over \Delta \tau } , {}\\ \delta _{x}^{2}u_{ m}^{n+1/2}& =&{ 1 \over 2} \left ({ u_{m+1}^{n+1} - 2u_{m}^{n+1} + u_{m-1}^{n+1} \over \Delta {x}^{2}} +{ u_{m+1}^{n} - 2u_{m}^{n} + u_{m-1}^{n} \over \Delta {x}^{2}} \right ), {}\\ \delta _{0x}u_{m}^{n+1/2}& =&{ 1 \over 2} \left ({ u_{m+1}^{n+1} - u_{m-1}^{n+1} \over 2\Delta x} +{ u_{m+1}^{n} - u_{m-1}^{n} \over 2\Delta x} \right ), {}\\ f_{m}^{n+1/2}& =&{ 1 \over 2} \left (f_{m}^{n+1} + f_{ m}^{n}\right ),\quad f\,\,\mbox{ being }\,\,u,a,b,c,g, {}\\ \end{array}$$

and the same notation will be used for other functions in what follows. The truncation error of this scheme is \(O(\Delta {x}^{2}) + O({\Delta \tau }^{2})\) everywhere; more accurately, it is in the form

$$\displaystyle{P_{m}^{n+1/2}\Delta {x}^{2} + R_{ m}^{n+1/2}{\Delta \tau }^{2} + O(\Delta {x}^{4} + {\Delta \tau }^{4}),}$$

where \(P_{m}^{n+1/2}\) and \(R_{m}^{n+1/2}\) denote the values of two functions P(x, τ) and R(x, τ) at x = x m and τ = τ n + 1 ∕ 2. That is, the exact solution satisfies the following equation:

$$\displaystyle{\left \{\begin{array}{rcl} &&\delta _{\tau }U_{m}^{n+1/2} = a_{m}^{n+1/2}\delta _{x}^{2}U_{m}^{n+1/2}\! +\! b_{m}^{n+1/2}\delta _{0x}U_{m}^{n+1/2}\! +\! c_{m}^{n+1/2}U_{m}^{n+1/2}\! +\! g_{m}^{n+1/2} \\ & & \qquad \qquad \qquad + P_{m}^{n+1/2}\Delta {x}^{2} + R_{m}^{n+1/2}{\Delta \tau }^{2} + O(\Delta {x}^{4} + {\Delta \tau }^{4}), \\ &&\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad 0 \leq m \leq M,0 \leq n \leq N - 1, \\ &&U_{m}^{0} = f(x_{m}),\qquad \qquad \qquad \qquad \qquad \qquad 0 \leq m \leq M,\end{array} \right .}$$

where \(U_{m}^{n}\) stands for \(u(x_{m}{,\tau }^{n})\). Suppose \(v_{1}\) and v 2 are the solutions of the problems

$$\displaystyle{\left \{\begin{array}{ll} { \partial v_{1} \over \partial \tau } = a(x,\tau ){ {\partial }^{2}v_{1} \over \partial {x}^{2}} + b(x,\tau ){ \partial v_{1} \over \partial x} & + c(x,\tau )v_{1} + P(x,\tau ), \\ &0 \leq x \leq 1,\quad 0 \leq \tau \leq T, \\ v_{1}(x,0) = 0, &0 \leq x \leq 1 \end{array} \right .}$$

and

$$\displaystyle{\left \{\begin{array}{ll} { \partial v_{2} \over \partial \tau } = a(x,\tau ){ {\partial }^{2}v_{2} \over \partial {x}^{2}} + b(x,\tau ){ \partial v_{2} \over \partial x} +&c(x,\tau )v_{2} + R(x,\tau ), \\ &0 \leq x \leq 1,\quad 0 \leq \tau \leq T, \\ v_{2}(x,0) = 0, &0 \leq x \leq 1, \end{array} \right .}$$

respectively. Let \(V _{1,m}^{n}\) and \(V _{2,m}^{n}\) denote \(v_{1}(x_{m}{,\tau }^{n})\) and \(v_{2}(x_{m}{,\tau }^{n})\). Then,

$$\displaystyle{\left \{\begin{array}{rcl} &&\delta _{\tau }V _{1,m}^{n+1/2} = a_{m}^{n+1/2}\delta _{x}^{2}V _{1,m}^{n+1/2}\! +\! b_{m}^{n+1/2}\delta _{0x}V _{1,m}^{n+1/2}\! +\! c_{m}^{n+1/2}V _{1,m}^{n+1/2}\! +\! P_{m}^{n+1/2} \\ & & \qquad \qquad \qquad \qquad + O(\Delta {x}^{2} + {\Delta \tau }^{2}),\qquad 0 \leq m \leq M,\quad 0 \leq n \leq N - 1, \\ &&V _{1,m}^{0} = 0,\qquad \qquad \qquad \qquad \qquad \qquad \qquad 0 \leq m \leq M, \end{array} \right .}$$

and

$$\displaystyle{\left \{\begin{array}{rcl} &&\delta _{\tau }V _{2,m}^{n+1/2} = a_{m}^{n+1/2}\delta _{x}^{2}V _{2,m}^{n+1/2}\! +\! b_{m}^{n+1/2}\delta _{0x}V _{2,m}^{n+1/2}\! +\! c_{m}^{n+1/2}V _{2,m}^{n+1/2}\! +\! R_{m}^{n+1/2} \\ & & \qquad \qquad \qquad \qquad + O(\Delta {x}^{2} + {\Delta \tau }^{2}),\qquad 0 \leq m \leq M,\quad 0 \leq n \leq N - 1, \\ &&V _{2,m}^{0} = 0,\qquad \qquad \qquad \qquad \qquad \qquad \qquad 0 \leq m \leq M. \end{array} \right .}$$

Let us define

$$\displaystyle{W_{m}^{n} = U_{ m}^{n} - u_{ m}^{n} - V _{ 1,m}^{n}\Delta {x}^{2} - V _{ 2,m}^{n}{\Delta \tau }^{2}.}$$

It is clear that \(W_{m}^{n}\) satisfies

$$\displaystyle{\left \{\begin{array}{rcl} &&\delta _{\tau }W_{m}^{n+1/2} = a_{m}^{n+1/2}\delta _{x}^{2}W_{m}^{n+1/2} + b_{m}^{n+1/2}\delta _{0x}W_{m}^{n+1/2} + c_{m}^{n+1/2}W_{m}^{n+1/2} \\ & & \qquad \qquad \qquad + O(\Delta {x}^{4} + \Delta {x}^{2}{\Delta \tau }^{2} + {\Delta \tau }^{4}),\qquad 0 \leq m \leq M,\quad 0 \leq n \leq N - 1, \\ &&W_{m}^{0} = 0,\qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad \qquad 0 \leq m \leq M. \end{array} \right .}$$

Because the scheme is stable with respect to the initial value and the nonhomogeneous term (see the paper [76] by Sun and the paper [79] by Sun, Yan, and Zhu for the details of the proof) and \(O(\Delta {x}^{2}{\Delta \tau }^{2})\) can be expressed as \(O(\Delta {x}^{4} + {\Delta \tau }^{4})\), we have

$$\displaystyle{\vert U_{m}^{n} - u_{ m}^{n} - V _{ 1,m}^{n}\Delta {x}^{2} - V _{ 2,m}^{n}{\Delta \tau }^{2}\vert \leq O(\Delta {x}^{4} + {\Delta \tau }^{4}),}$$

or we can write this relation as

$$\displaystyle{u(x_{m}{,\tau }^{n}) - u_{ m}^{n}(\Delta x,\Delta \tau ) = v_{ 1}(x_{m}{,\tau }^{n})\Delta {x}^{2} + v_{ 2}(x_{m}{,\tau }^{n}){\Delta \tau }^{2} + O(\Delta {x}^{4} + {\Delta \tau }^{4}),}$$

that is,

$$\displaystyle{ \begin{array}{ll} u_{m}^{n}(\Delta x,\Delta \tau ) =&u(x_{m}{,\tau }^{n}) - v_{1}(x_{m}{,\tau }^{n})\Delta {x}^{2} - v_{2}(x_{m}{,\tau }^{n}){\Delta \tau }^{2} \\ & + O(\Delta {x}^{4} + {\Delta \tau }^{4}). \end{array} }$$
(7.28)

Here, we write \(u_{m}^{n}\) as \(u_{m}^{n}(\Delta x,\Delta \tau )\) in order to indicate that the approximate solution is obtained on a mesh with mesh sizes Δx and Δτ. For this case, the error of a numerical solution is in the form

$$\displaystyle{v_{1}(x_{m}{,\tau }^{n})\Delta {x}^{2} + v_{ 2}(x_{m}{,\tau }^{n}){\Delta \tau }^{2} + O(\Delta {x}^{4} + {\Delta \tau }^{4}),}$$

which has the same form as the truncation error given above. Similarly, if the truncation error of a numerical scheme, including the algorithms for boundary conditions, is

$$\displaystyle{P\Delta {x}^{2} + Q\Delta x\Delta \tau + R{\Delta \tau }^{2} + O({\Delta \tau }^{3}),}$$

i.e., the scheme is second order, and if the scheme is also stable, then the numerical solution can be expressed as

$$\displaystyle{ \begin{array}{lll} u_{m}^{n}(\Delta x,\Delta \tau )& =&u(x_{m}{,\tau }^{n}) - v_{1}(x_{m}{,\tau }^{n})\Delta {x}^{2} - v_{12}(x_{m}{,\tau }^{n})\Delta x\Delta \tau \\ & & - v_{2}(x_{m}{,\tau }^{n})\Delta {x}^{2} + O({\Delta \tau }^{3}),\end{array} }$$
(7.29)

where O(Δτ 3) means \(O(\Delta {x}^{3} + \Delta {x}^{2}\Delta \tau + \Delta x{\Delta \tau }^{2} + {\Delta \tau }^{3})\) for simplicity.

Here, the approximate value is given only at the nodes. Now let us generate a function defined on the domain [0, 1] ×[0, T] by some type of interpolation. We assume that the interpolation function generated from the values on the nodes by an interpolation method is an approximation to f(x, τ) with an error of O(Δτ 3) for any smooth enough function f(x, τ). For example, if we use quadratic interpolation, then the interpolation function generated has such a property. Let u(x, τ; Δx, Δτ) denote such a function generated by \(u(x_{m}{,\tau }^{n};\Delta x,\Delta \tau )\). Because \(u(x_{m}{,\tau }^{n};\Delta x,\Delta \tau )\) consists of \(u(x_{m}{,\tau }^{n}) - v_{1}(x_{m}{,\tau }^{n})\Delta {x}^{2} - v_{12}(x_{m}{,\tau }^{n})\Delta x\Delta \tau - v_{2}(x_{m}{,\tau }^{n}){\Delta \tau }^{2}\) and O(Δτ 3), the interpolation function also has two parts. One part is the interpolation function generated by \(u(x_{m}{,\tau }^{n}) - v_{1}(x_{m}{,\tau }^{n})\Delta {x}^{2} - v_{12}(x_{m}{,\tau }^{n})\Delta x\Delta \tau - v_{2}(x_{m}{,\tau }^{n}){\Delta \tau }^{2}\), which we call u 1(x, τ; Δx, Δτ). The other part is generated by the term O(Δτ 3), which is denoted by u 2(x, τ; Δx, Δτ). Clearly,

$$\displaystyle{u_{1}(x,\tau ;\Delta x,\Delta \tau ) - u(x,\tau ) + v_{1}(x,\tau )\Delta {x}^{2} + v_{ 12}(x,\tau )\Delta x\Delta \tau + v_{2}(x,\tau ){\Delta \tau }^{2}}$$

is a term of O(Δτ 3). The function u 2(x, τ; Δx, Δτ) is also a term of O(Δτ 3). Consequently, we have

$$\displaystyle{\begin{array}{lll} u(x,\tau ;\Delta x,\Delta \tau )& =&u_{1}(x,\tau ;\Delta x,\Delta \tau ) + u_{2}(x,\tau ;\Delta x,\Delta \tau ) \\ & =&u(x,\tau ) - v_{1}(x,\tau )\Delta {x}^{2} - v_{12}(x,\tau )\Delta x\Delta \tau - v_{2}(x,\tau ){\Delta \tau }^{2} \\ & & + O({\Delta \tau }^{3}). \end{array} }$$

In this case, we can use the following technique to eliminate the error of \(O(\Delta {x}^{2} + \Delta x\Delta \tau + {\Delta \tau }^{2})\) if we have numerical solutions on a mesh with mesh sizes Δx and Δτ and on a mesh with mesh sizes 2Δx and 2Δτ. Let us consider a linear combination of the solutions on the two different meshes, which are denoted by u(x, τ; Δx, Δτ) and u(x, τ; 2Δx, 2Δτ):

$$\displaystyle{\begin{array}{ll} &(1 - d) \times u(x,\tau ;\Delta x,\Delta \tau ) + d \times u(x,\tau ;2\Delta x,2\Delta \tau ) \\ =&u(x,\tau ) - v_{1}(x,\tau )(1 - d + 4d)\Delta {x}^{2} - v_{12}(x,\tau )(1 - d + 4d)\Delta x\Delta \tau \\ & - v_{2}(x,\tau )(1 - d + 4d){\Delta \tau }^{2} + O({\Delta \tau }^{3}).\end{array} }$$

If we choose d such that 1 − d + 4d = 0, that is, \(d = -{ 1 \over 3}\), then

$$\displaystyle{(1 - d) \times u(x,\tau ;\Delta x,\Delta \tau ) + d \times u(x,\tau ;2\Delta x,2\Delta \tau ) = u(x,\tau ) + O({\Delta \tau }^{3}).}$$

Therefore,

$$\displaystyle{{ 1 \over 3} [4u(x,\tau ;\Delta x,\Delta \tau ) - u(x,\tau ;2\Delta x,2\Delta \tau )] }$$
(7.30)

is an approximation to u(x, τ) with an error of O(Δτ 3).
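
The cancellation behind the formula (7.30) can be checked directly. In the Python sketch below, the functions u, v 1, v 12, and v 2 are arbitrary smooth stand-ins for the exact solution and the (unknown) coefficient functions in the error expansion; only the structure of the expansion matters.

    import numpy as np

    # A minimal check of the cancellation behind formula (7.30).  The smooth functions below
    # are arbitrary stand-ins for the exact solution u and the (unknown) coefficient functions
    # v1, v12, v2 in the error expansion; only the structure of the expansion matters here.

    def u(x, t):   return np.sin(x) * np.exp(-t)
    def v1(x, t):  return np.cos(x) + t
    def v12(x, t): return x * t
    def v2(x, t):  return x + np.cos(t)

    def numerical(x, t, dx, dt):
        # mimics a second-order numerical solution with the expansion in (7.29),
        # the third-order remainder being omitted
        return u(x, t) - v1(x, t) * dx**2 - v12(x, t) * dx * dt - v2(x, t) * dt**2

    x, t = 0.7, 0.3
    for dx, dt in [(0.1, 0.05), (0.05, 0.025)]:
        fine, coarse = numerical(x, t, dx, dt), numerical(x, t, 2 * dx, 2 * dt)
        extrapolated = (4.0 * fine - coarse) / 3.0        # formula (7.30), i.e., d = -1/3
        print(abs(fine - u(x, t)), abs(extrapolated - u(x, t)))
    # the second column vanishes (up to rounding): the dx^2, dx*dt, and dt^2 terms all cancel,
    # so for an actual scheme only the neglected third-order remainder would survive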

However, for the approximation (7.27), the expression of the numerical solution is in the form (7.28), and the extrapolation formula (7.30) then gives an approximation to u(x, τ) with an error of O(Δτ 4). This is a special case. Generally speaking, if for a second-order scheme we have three solutions \(u_{m}^{n}(\Delta x,\Delta \tau ),u_{m}^{n}(2\Delta x,2\Delta \tau ),\) and \(u_{m}^{n}(4\Delta x,4\Delta \tau )\), then we can obtain an approximation with an error of O(Δτ 4). In order to do that, we first generate interpolation functions from the values at these nodes and require the interpolation to have an error of O(Δτ 4). This can be done, for example, by cubic interpolation. Let u(x, τ; Δx, Δτ), u(x, τ; 2Δx, 2Δτ), and u(x, τ; 4Δx, 4Δτ) represent these functions. Then, consider a linear combination of them:

$$\displaystyle{(1 - d_{1} - d_{2})u(x,\tau ;\Delta x,\Delta \tau ) + d_{1}u(x,\tau ;2\Delta x,2\Delta \tau ) + d_{2}u(x,\tau ;4\Delta x,4\Delta \tau ).}$$

If we choose d 1 and d 2 such that

$$\displaystyle{\left \{\begin{array}{l} 1 - d_{1} - d_{2} + {2}^{2}d_{1} + {4}^{2}d_{2} = 0, \\ 1 - d_{1} - d_{2} + {2}^{3}d_{1} + {4}^{3}d_{2} = 0,\end{array} \right .}$$

which gives

$$\displaystyle{\left \{\begin{array}{l} d_{1} = -{ 12 \over 21} , \\ d_{2} ={ 1 \over 21} , \end{array} \right .}$$

then all the terms of O(Δτ 2) and the terms of O(Δτ 3) in

$$\displaystyle{(1 - d_{1} - d_{2})u(x,\tau ;\Delta x,\Delta \tau ) + d_{1}u(x,\tau ;2\Delta x,2\Delta \tau ) + d_{2}u(x,\tau ;4\Delta x,4\Delta \tau )}$$

are eliminated. Therefore

$$\displaystyle{{ 1 \over 21} [32u(x,\tau ;\Delta x,\Delta \tau ) - 12u(x,\tau ;2\Delta x,2\Delta \tau ) + u(x,\tau ;4\Delta x,4\Delta \tau )] }$$
(7.31)

gives an approximation to u(x, τ) with an error of O(Δτ 4) for any second-order scheme.
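
The weights can also be obtained numerically; the following short Python sketch solves the 2 × 2 linear system for d 1 and d 2 and prints the three weights appearing in (7.31). It is only a check of the arithmetic.

    import numpy as np

    # Solving the 2 x 2 system for d_1 and d_2 and printing the three weights in (7.31).
    A = np.array([[2.0**2 - 1.0, 4.0**2 - 1.0],
                  [2.0**3 - 1.0, 4.0**3 - 1.0]])
    d1, d2 = np.linalg.solve(A, np.array([-1.0, -1.0]))
    print(d1, d2)             # -12/21 = -0.5714...,  1/21 = 0.0476...
    print(1.0 - d1 - d2)      # 32/21 = 1.5238..., the weight of u(x,tau; dx,dtau)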

Here, we need to point out that in order to obtain an approximate solution with an error of O(Δτ 3), it is not necessary for both \(\Delta x_{1}/\Delta x_{2}\) and \(\Delta \tau _{1}/\Delta \tau _{2}\) to equal two, where \(\Delta x_{1},\Delta \tau _{1}\) are mesh sizes for one mesh and \(\Delta x_{2},\Delta \tau _{2}\) for the other. For example, if we have a solution on a 12 ×16 mesh and a solution on a 9 ×12 mesh, then we still can obtain an approximate solution with an error of O(Δτ 3) by using extrapolation. Furthermore, if there exist solutions on 15 ×20, 12 ×16, and 9 ×12 meshes, then we can have an approximate solution with an error of O(Δτ 4) by using extrapolation. These are left as a problem for the reader to prove. Generally speaking, when a scheme has an error of \(\Delta {x}^{k_{1}}\) and \({\Delta \tau }^{k_{2}}\) and we know solutions on two meshes, the extrapolation can be used if \({ \Delta x_{1}^{k_{1}} \over \Delta \tau _{1}^{k_{2}}} ={ \Delta x_{2}^{k_{1}} \over \Delta \tau _{2}^{k_{2}}}\), where Δx i and Δτ i , i = 1, 2, are mesh sizes used in order to obtain the two solutions. For example, if k 1 = 2 and k 2 = 1, then when solutions on a 20 ×20 mesh and a 40 ×80 mesh are obtained, this technique can also be used because \({ {\left ({ 1 \over 20} \right )}^{2} \over { 1 \over 20} } ={ {({ 1 \over 40} )}^{2} \over { 1 \over 80} }\) (see Problem 16).
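
For instance, the last example can be checked with a few lines of exact rational arithmetic (an illustration only):

    from fractions import Fraction

    # The example with k_1 = 2 and k_2 = 1: a 20 x 20 mesh (dx = dtau = 1/20)
    # and a 40 x 80 mesh (dx = 1/40, dtau = 1/80).
    r1 = Fraction(1, 20) ** 2 / Fraction(1, 20)
    r2 = Fraction(1, 40) ** 2 / Fraction(1, 80)
    print(r1, r2, r1 == r2)   # 1/20 1/20 True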

The technique of generating more accurate results by combining several numerical results, which is similar to Richardson’s extrapolation in numerical methods for ordinary differential equations, is referred to as the extrapolation technique of numerical solutions in the next few chapters. Finally, we need to point out that this technique works if the solution is smooth, but it may not work if the solution is not smooth enough.

4 Two-Dimensional Degenerate Parabolic Equations

Generally speaking, the coefficients of PDEs are variable, and so the difference equations also have variable coefficients. For such a case, the theoretical analysis of numerical methods is more complicated. In this section, for some type of two-dimensional degenerate parabolic equations and for a special but popular scheme, a complete theoretical analysis of numerical methods is given.

Consider the following two-dimensional degenerate parabolic partial differential equation:

$$\displaystyle\begin{array}{rcl} \frac{\partial u} {\partial \tau } & =& a_{11}(x,y,\tau )\frac{{\partial }^{2}u} {\partial {x}^{2}} + 2a_{12}(x,y,\tau ) \frac{{\partial }^{2}u} {\partial x\partial y} + a_{22}(x,y,\tau )\frac{{\partial }^{2}u} {\partial {y}^{2}} + b_{1}(x,y,\tau )\frac{\partial u} {\partial x} \\ & & \qquad +b_{2}(x,y,\tau )\frac{\partial u} {\partial y}+c(x,y,\tau )u+g(x,y,\tau ),\quad (x,y)\in \Omega ,\ 0\leq \tau \leq T, {}\end{array}$$
(7.32)

with the initial condition

$$\displaystyle\begin{array}{rcl} u(x,y,0) = f(x,y),\quad (x,y) \in \Omega ,& &{}\end{array}$$
(7.33)

where

$$\displaystyle{\Omega =\{ (x,y)\;\vert \;x_{l} \leq x \leq x_{u},y_{l} \leq y \leq y_{u}\},}$$
$$\displaystyle\begin{array}{rcl} & & a_{11}(x,y,\tau )\Big\vert _{x=x_{l}\mbox{ or }x_{u}} = 0,a_{22}(x,y,\tau )\Big\vert _{y=y_{l}\mbox{ or }y_{u}} = 0,{}\end{array}$$
(7.34)
$$\displaystyle\begin{array}{rcl} & & b_{1}(x,y,\tau )\Big\vert _{x=x_{l}\mbox{ or }x_{u}} = 0,b_{2}(x,y,\tau )\Big\vert _{y=y_{l}\mbox{ or }y_{u}} = 0,{}\end{array}$$
(7.35)
$$\displaystyle\begin{array}{rcl} & & \frac{\partial a_{11}(x,y,\tau )} {\partial x} \Big\vert _{x=x_{l}\mbox{ or }x_{u}} = 0,\qquad \frac{\partial a_{22}(x,y,\tau )} {\partial y} \Big\vert _{y=y_{l}\mbox{ or }y_{u}} = 0,{}\end{array}$$
(7.36)

and the matrix

$$\displaystyle{\left (\begin{array}{cc} a_{11}(x,y,\tau )&a_{12}(x,y,\tau ) \\ a_{12}(x,y,\tau )&a_{22}(x,y,\tau ) \end{array} \right )}$$

is semi-positive (nonnegative); i.e., for any \(X \in \mathcal{R}\) and \(Y \in \mathcal{R},\) we have

$$\displaystyle\begin{array}{rcl} a_{11}(x,y,\tau ){X}^{2} + 2a_{ 12}(x,y,\tau )XY + a_{22}(x,y,\tau ){Y }^{2} \geq 0.& &{}\end{array}$$
(7.37)

Because the matrix of the coefficients of the second derivatives is semi-positive, we have \(a_{12}^{2} \leq a_{11}a_{22}.\) Thus, when a 11 = 0 or a 22 = 0, we have a 12 = 0. Hence, from the expression (7.34), we have

$$\displaystyle\begin{array}{rcl} & & a_{12}(x,y,\tau )\Big\vert _{x=x_{l}\mbox{ or }x_{u}} = 0,a_{12}(x,y,\tau )\Big\vert _{y=y_{l}\mbox{ or }y_{u}} = 0.{}\end{array}$$
(7.38)

Taking the partial derivative of the first and second relations in the result (7.38) with respect to y and x, respectively, we can further have

$$\displaystyle\begin{array}{rcl} & & \frac{\partial a_{12}(x,y,\tau )} {\partial y} \Big\vert _{x=x_{l}\mbox{ or }x_{u}} = 0, \frac{\partial a_{12}(x,y,\tau )} {\partial x} \Big\vert _{y=y_{l}\mbox{ or }y_{u}} = 0.{}\end{array}$$
(7.39)

Denote

$$\displaystyle\begin{array}{rcl} & & c_{1} =\max _{(x,y,\tau )\in \Omega \times [0,T]}\left \vert \frac{{\partial }^{2}a_{11}(x,y,\tau )} {\partial {x}^{2}} \right \vert ,c_{2} =\max _{(x,y,\tau )\in \Omega \times [0,T]}\left \vert \frac{{\partial }^{2}a_{12}(x,y,\tau )} {\partial x\partial y} \right \vert , {}\\ & & {}\\ \end{array}$$
$$\displaystyle\begin{array}{rcl} & & c_{3} =\max _{(x,y,\tau )\in \Omega \times [0,T]}\left \vert \frac{{\partial }^{2}a_{22}(x,y,\tau )} {\partial {y}^{2}} \right \vert ,c_{4} =\max _{(x,y,\tau )\in \Omega \times [0,T]}\left \vert \frac{\partial b_{1}(x,y,\tau )} {\partial x} \right \vert , {}\\ & & c_{5} =\max _{(x,y,\tau )\in \Omega \times [0,T]}\left \vert \frac{\partial b_{2}(x,y,\tau )} {\partial y} \right \vert ,c_{6} =\max _{(x,y,\tau )\in \Omega \times [0,T]}\left \vert c(x,y,\tau )\right \vert , {}\\ \end{array}$$

and set

$$\displaystyle\begin{array}{rcl} c = c_{1} + 2c_{2} + c_{3} + c_{4} + c_{5} + 2c_{6}.& &{}\end{array}$$
(7.40)

In Sect. 2.4.3, for more general problems we have obtained the following inequality:

$$\displaystyle\begin{array}{rcl} \iint _{\Omega }{u}^{2}(x,y,\tau )dxdy& \leq & {e}^{\bar{c}T}\left [\iint _{ \Omega }{f}^{\,2}(x,y)dxdy\right . {}\\ & & +\left .\int _{0}^{\,\tau }\Big(\iint _{ \Omega }{g}^{2}(x,y,s)dxdy\Big)ds\right ],\quad 0 \leq \tau \leq T, {}\\ \end{array}$$

where \(\bar{c}\) is a constant determined by the bounds of the coefficients of the PDE and their derivatives. Of course, for the problem here, such an inequality holds. In this section, we are going to prove that for the numerical solutions obtained by a special but popular scheme, such an inequality still holds.

4.1 The Crank–Nicolson Difference Scheme and a Preliminary Lemma

Take three positive integers M, N, and K. Set \(h_{1} = (x_{u} - x_{l})/M,h_{2} = (y_{u} - y_{l})/N,\Delta \tau = T/K,\) and denote

$$\displaystyle\begin{array}{rcl} & & x_{m} = x_{l} + mh_{1},\quad 0 \leq m \leq M, {}\\ & & y_{n} = y_{l} + nh_{2},\quad 0 \leq n \leq N, {}\\ & & {\tau }^{k} = k\Delta \tau ,\quad 0 \leq k \leq K, {}\\ & & \Omega _{h} =\{ (x_{m},y_{n})\;\vert \;0 \leq m \leq M,0 \leq n \leq N\}, {}\\ & & \Omega _{\Delta \tau } {=\{\tau }^{k}\;\vert \;0 \leq k \leq K\}. {}\\ \end{array}$$

Let \(\mathcal{V} = \left \{u\;\vert \;u =\{ u_{mn},0 \leq m \leq M,0 \leq n \leq N\}\right \}\) be the grid function space on Ω h . If \(u \in \mathcal{V},\) we introduce the following notation:

$$\displaystyle{\begin{array}{ll} \delta _{x}u_{m+\frac{1} {2} ,n} = \frac{1} {h_{1}} (u_{m+1,n} - u_{mn}),\; &\Delta _{x}u_{mn} = \frac{1} {2h_{1}} (u_{m+1,n} - u_{m-1,n}), \\ \delta _{y}u_{m,n+\frac{1} {2} } = \frac{1} {h_{2}} (u_{m,n+1} - u_{mn}),\; &\Delta _{y}u_{mn} = \frac{1} {2h_{2}} (u_{m,n+1} - u_{m,n-1}), \\ \delta _{x}^{2}u_{mn} = \frac{1} {h_{1}^{2}} (u_{m+1,n} - 2u_{mn} + u_{m-1,n}),\quad \\ \delta _{y}^{2}u_{mn} = \frac{1} {h_{2}^{2}} (u_{m,n+1} - 2u_{mn} + u_{m,n-1}). \end{array} }$$

It is obvious that

$$\displaystyle\begin{array}{rcl} & & \Delta _{x}u_{mn} = \frac{1} {2}(\delta _{x}u_{m+\frac{1} {2} ,n} +\delta _{x}u_{m-\frac{1} {2} ,n}),\;\delta _{x}^{2}u_{ mn} = \frac{1} {h_{1}}(\delta _{x}u_{m+\frac{1} {2} ,n} -\delta _{x}u_{m-\frac{1} {2} ,n}), {}\\ & & \Delta _{y}u_{mn} = \frac{1} {2}(\delta _{y}u_{m,n+\frac{1} {2} } +\delta _{y}u_{m,n-\frac{1} {2} }),\;\delta _{y}^{2}u_{ mn} = \frac{1} {h_{2}}(\delta _{y}u_{m,n+\frac{1} {2} } -\delta _{y}u_{m,n-\frac{1} {2} }). {}\\ \end{array}$$

For any \(u \in \mathcal{V}\) and \(v \in \mathcal{V},\) their inner product is defined by

$$\displaystyle\begin{array}{rcl} (u,v)& =& h_{1}h_{2}\left [\sum _{m=1}^{M -1}\sum _{ n=1}^{N -1}u_{ mn}v_{mn} + \frac{1} {2}\sum _{m=1}^{M -1}(u_{ m0}v_{m0} + u_{mN}v_{mN})\right . \\ & & \left .+\frac{1} {2}\sum _{n=1}^{N -1}(u_{ 0n}v_{0n} + u_{Mn}v_{Mn}) + \frac{1} {4}(u_{00}v_{00} + u_{M0}v_{M0} + u_{0N}v_{0N} + u_{MN}v_{MN})\right ] \\ & & {}\end{array}$$
(7.41)

and the norm of a grid function is defined by

$$\displaystyle{\|u\| = \sqrt{(u, u)}.}$$

The definition of the inner product can also be written in another form:

$$\displaystyle\begin{array}{rcl} (u,v)& =& \frac{1} {4}h_{1}h_{2}\sum _{m=0}^{M -1}\sum _{ n=0}^{N -1}\Big(u_{ mn}v_{mn} + u_{m+1,n}v_{m+1,n} \\ & & +u_{m,n+1}v_{m,n+1} + u_{m+1,n+1}v_{m+1,n+1}\Big). {}\end{array}$$
(7.42)
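
The equivalence of the two forms is easy to verify numerically; in the following Python sketch the mesh sizes and the grid functions are chosen arbitrarily.

    import numpy as np

    # A numerical check that the two forms (7.41) and (7.42) of the inner product coincide.
    rng = np.random.default_rng(0)
    M, N, h1, h2 = 6, 5, 0.2, 0.3
    u = rng.standard_normal((M + 1, N + 1))
    v = rng.standard_normal((M + 1, N + 1))

    # form (7.41): weight 1 at interior nodes, 1/2 on the edges, 1/4 at the corners
    w = np.ones((M + 1, N + 1))
    w[0, :] *= 0.5; w[-1, :] *= 0.5; w[:, 0] *= 0.5; w[:, -1] *= 0.5
    ip1 = h1 * h2 * np.sum(w * u * v)

    # form (7.42): a sum over the cells, each cell contributing its four corner values
    ip2 = 0.25 * h1 * h2 * sum((u * v)[m, n] + (u * v)[m + 1, n]
                               + (u * v)[m, n + 1] + (u * v)[m + 1, n + 1]
                               for m in range(M) for n in range(N))

    print(np.isclose(ip1, ip2))   # True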

We also define the grid function U on \(\Omega _{h} \times \Omega _{\Delta \tau }\) as follows:

$$\displaystyle{U_{mn}^{k} = u(x_{ m},y_{n}{,\tau }^{k}),\quad 0 \leq m \leq M,\quad 0 \leq n \leq N,\quad 0 \leq k \leq K.}$$

In what follows, we use the following notations:

$$\displaystyle{U_{mn}^{k+\frac{1} {2} } = \frac{1} {2}(U_{mn}^{k+1} + U_{ mn}^{k}),{\quad \tau }^{k+\frac{1} {2} } = \frac{1} {2}{(\tau }^{k} {+\tau }^{k+1})}$$

and

$$\displaystyle{\begin{array}{ll} (a_{11})_{mn}^{k+\frac{1} {2} } = a_{11}(x_{m},y_{n}{,\tau }^{k+\frac{1} {2} }),\qquad &(a_{12})_{mn}^{k+\frac{1} {2} } = a_{12}(x_{m},y_{n}{,\tau }^{k+\frac{1} {2} }), \\ (a_{22})_{mn}^{k+\frac{1} {2} } = a_{22}(x_{m},y_{n}{,\tau }^{k+\frac{1} {2} }),\quad &(b_{1})_{mn}^{k+\frac{1} {2} } = b_{1}(x_{m},y_{n}{,\tau }^{k+\frac{1} {2} }), \\ (b_{2})_{mn}^{k+\frac{1} {2} } = b_{2}(x_{m},y_{n}{,\tau }^{k+\frac{1} {2} }),\quad &c_{mn}^{k+\frac{1} {2} } = c(x_{m},y_{n}{,\tau }^{k+\frac{1} {2} }), \\ g_{mn}^{k+\frac{1} {2} } = g(x_{m},y_{n}{,\tau }^{k+\frac{1} {2} }),\quad &f_{mn} = f(x_{m},y_{n}). \end{array} }$$

Suppose problem (7.32)–(7.33) has a smooth solution u(x, y, τ). Applying the Taylor expansion, we can obtain

$$\displaystyle\begin{array}{rcl} & & \frac{1} {\Delta \tau }(U_{mn}^{k+1} - U_{ mn}^{k}) = (a_{ 11})_{mn}^{k+\frac{1} {2} }\delta _{x}^{2}U_{mn}^{k+\frac{1} {2} } + 2(a_{12})_{mn}^{k+\frac{1} {2} }\Delta _{x}\Delta _{y}U_{mn}^{k+\frac{1} {2} } \\ & & \qquad + (a_{22})_{mn}^{k+\frac{1} {2} }\delta _{y}^{2}U_{mn}^{k+\frac{1} {2} } + (b_{1})_{mn}^{k+\frac{1} {2} }\Delta _{x}U_{mn}^{k+\frac{1} {2} } + (b_{2})_{mn}^{k+\frac{1} {2} }\Delta _{y}U_{mn}^{k+\frac{1} {2} } \\ & & \qquad + c_{mn}^{k+\frac{1} {2} }U_{mn}^{k+\frac{1} {2} } + g_{mn}^{k+\frac{1} {2} } + R_{mn}^{k+\frac{1} {2} }, \\ & & 0 \leq m \leq M,\quad 0 \leq n \leq N,\quad 0 \leq k \leq K - 1 {}\end{array}$$
(7.43)

and there exists a constant c 0 such that

$$\displaystyle\begin{array}{rcl} \vert R_{mn}^{k+\frac{1} {2} }\vert & \leq & c_{0}(h_{1}^{2} + h_{2}^{2} + {\Delta \tau }^{2}), \\ & & 0 \leq m \leq M,\quad 0 \leq n \leq N,\quad 0 \leq k \leq K - 1.{}\end{array}$$
(7.44)

Omitting the small term \(R_{mn}^{k+\frac{1} {2} }\) in the expression (7.43) and writing down the initial condition on Ω h :

$$\displaystyle\begin{array}{rcl} U_{mn}^{0} = f_{ mn},\quad 0 \leq m \leq M,\quad 0 \leq n \leq N,& &{}\end{array}$$
(7.45)

we have for the problem (7.32)–(7.33) the following difference scheme:

$$\displaystyle\begin{array}{rcl} & & \frac{1} {\Delta \tau }(u_{mn}^{k+1} - u_{ mn}^{k}) = (a_{ 11})_{mn}^{k+\frac{1} {2} }\delta _{x}^{2}u_{mn}^{k+\frac{1} {2} } + 2(a_{12})_{mn}^{k+\frac{1} {2} }\Delta _{x}\Delta _{y}u_{mn}^{k+\frac{1} {2} } \\ & & \qquad + (a_{22})_{mn}^{k+\frac{1} {2} }\delta _{y}^{2}u_{mn}^{k+\frac{1} {2} } + (b_{1})_{mn}^{k+\frac{1} {2} }\Delta _{x}u_{mn}^{k+\frac{1} {2} } + (b_{2})_{mn}^{k+\frac{1} {2} }\Delta _{y}u_{mn}^{k+\frac{1} {2} } + c_{mn}^{k+\frac{1} {2} }u_{mn}^{k+\frac{1} {2} } \\ & & \qquad + g_{mn}^{k+\frac{1} {2} },\quad 0 \leq m \leq M,\quad 0 \leq n \leq N,\quad 0 \leq k \leq K - 1, {}\end{array}$$
(7.46)
$$\displaystyle\begin{array}{rcl} & & u_{mn}^{0} = f_{ mn},\quad 0 \leq m \leq M,\quad 0 \leq n \leq N.{}\end{array}$$
(7.47)
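
A matrix-free Python sketch of this scheme is given below. The coefficients are hypothetical examples, taken time-independent for brevity, chosen so that the degeneracy conditions (7.34)–(7.36) and the semi-positivity condition (7.37) hold on [0, 1] × [0, 1]; f and g are arbitrary, and the implicit system at each step is solved by an unpreconditioned GMRES iteration. It is an illustration only, not an implementation used in this book.

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    # A matrix-free sketch of the scheme (7.46)-(7.47) with hypothetical coefficients that
    # satisfy the degeneracy conditions (7.34)-(7.36) and the semi-positivity (7.37).
    M, N, K, T = 40, 40, 40, 1.0
    h1, h2, dtau = 1.0 / M, 1.0 / N, T / K
    x = np.linspace(0.0, 1.0, M + 1)[:, None]
    y = np.linspace(0.0, 1.0, N + 1)[None, :]

    A11 = (x * (1 - x)) ** 2 + 0 * y        # vanishes, with zero x-derivative, at x = 0, 1
    A22 = (y * (1 - y)) ** 2 + 0 * x
    A12 = 0.5 * x * (1 - x) * y * (1 - y)   # a12^2 <= a11*a22, so (7.37) holds
    B1 = 0.1 * x * (1 - x) + 0 * y          # vanishes at x = 0, 1
    B2 = 0.1 * y * (1 - y) + 0 * x
    C = -0.5 + 0 * x * y
    g = np.ones((M + 1, N + 1))             # an arbitrary source term (time independent here)
    u = np.sin(np.pi * x) * np.sin(np.pi * y)   # the initial value f(x, y)
    shape = (M + 1, N + 1)

    def L(v):
        # the spatial difference operator in (7.46); padding the ghost layer with zeros is
        # harmless because every coefficient multiplying a ghost value vanishes on the boundary
        p = np.pad(v, 1)
        dxx = (p[2:, 1:-1] - 2 * v + p[:-2, 1:-1]) / h1 ** 2
        dyy = (p[1:-1, 2:] - 2 * v + p[1:-1, :-2]) / h2 ** 2
        dx = (p[2:, 1:-1] - p[:-2, 1:-1]) / (2 * h1)
        dy = (p[1:-1, 2:] - p[1:-1, :-2]) / (2 * h2)
        dxy = (p[2:, 2:] - p[2:, :-2] - p[:-2, 2:] + p[:-2, :-2]) / (4 * h1 * h2)
        return A11 * dxx + 2 * A12 * dxy + A22 * dyy + B1 * dx + B2 * dy + C * v

    def matvec(w):
        v = np.asarray(w).reshape(shape)
        return (v - 0.5 * dtau * L(v)).ravel()

    Aop = LinearOperator((u.size, u.size), matvec=matvec, dtype=float)
    for k in range(K):
        rhs = (u + 0.5 * dtau * L(u) + dtau * g).ravel()
        w, info = gmres(Aop, rhs, atol=1e-10)
        u = w.reshape(shape)
    print(np.abs(u).max())   # a crude sanity check that the time stepping stayed bounded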

The following lemma will be used for the analysis of the difference scheme.

Lemma 7.1.

Let \(u \in \mathcal{V}.\) Then we have

$$\displaystyle\begin{array}{rcl} & & \left (a_{11}^{k+\frac{1} {2} }\delta _{x}^{2}u,u\right ) + 2\left (a_{12}^{k+\frac{1} {2} }\Delta _{x}\Delta _{y}u,u\right ) + \left (a_{22}^{k+\frac{1} {2} }\delta _{y}^{2}u,u\right ) \\ & & +\left (b_{1}^{k+\frac{1} {2} }\Delta _{x}u,u\right ) + \left (b_{2}^{k+\frac{1} {2} }\Delta _{y}u,u\right ) + \left ({c}^{k+\frac{1} {2} }u,u\right ) \leq \frac{c} {2}\|{u\|}^{2},{}\end{array}$$
(7.48)

where c is defined by the expression (7.40) .

Section 7.4.2 is devoted to the proof of this lemma.

4.2  ‡ The Proof of the Preliminary Lemma

We will estimate each term in the inequality (7.48). For simplicity, we omit the superscript.

Proposition 7.1

For \(\left (a_{11}\delta _{x}^{2}u,u\right )\) and \(\left (a_{22}\delta _{y}^{2}u,u\right )\) , we have the following inequalities:

$$\displaystyle\begin{array}{rcl} B_{1}& \equiv & \left (a_{11}\delta _{x}^{2}u,u\right ) \\ & \leq & -h_{1}h_{2}\left [\sum _{m=1}^{M -1}\sum _{ n=1}^{N -1}(a_{ 11})_{mn}\frac{{(\delta _{x}u_{m-\frac{1} {2} ,n})}^{2} + {(\delta _{x}u_{m+\frac{1} {2} ,n})}^{2}} {2} \right . \\ & & +\frac{1} {2}\sum _{m=1}^{M -1}(a_{ 11})_{m0}\frac{{(\delta _{x}u_{m-\frac{1} {2} ,0})}^{2} + {(\delta _{x}u_{m+\frac{1} {2} ,0})}^{2}} {2} \\ & & \left .+\frac{1} {2}\sum _{m=1}^{M -1}(a_{ 11})_{mN}\frac{{(\delta _{x}u_{m-\frac{1} {2} ,N})}^{2} + {(\delta _{x}u_{m+\frac{1} {2} ,N})}^{2}} {2} \right ] + \frac{1} {2}c_{1}\|{u\|}^{2} \\ & \leq & -h_{1}h_{2}\sum _{m=1}^{M -1}\sum _{ n=1}^{N -1}\left [(a_{ 11})_{mn}\frac{{(\delta _{x}u_{m-\frac{1} {2} ,n})}^{2} + {(\delta _{x}u_{m+\frac{1} {2} ,n})}^{2}} {2} \right ] + \frac{1} {2}c_{1}\|{u\|}^{2}. \\ & & {}\end{array}$$
(7.49)

and

$$\displaystyle\begin{array}{rcl} B_{3}& \equiv & \left (a_{22}\delta _{y}^{2}u,u\right ) \\ & \leq & -h_{1}h_{2}\left [\sum _{m=1}^{M -1}\sum _{ n=1}^{N -1}(a_{ 22})_{mn}\frac{{(\delta _{y}u_{m,n-\frac{1} {2} })}^{2} + {(\delta _{y}u_{m,n+\frac{1} {2} })}^{2}} {2} \right . \\ & & +\frac{1} {2}\sum _{n=1}^{N -1}(a_{ 22})_{0n}\frac{{(\delta _{y}u_{0,n-\frac{1} {2} })}^{2} + {(\delta _{y}u_{0,n+\frac{1} {2} })}^{2}} {2} \\ & & \left .+\frac{1} {2}\sum _{n=1}^{N -1}(a_{ 22})_{Mn}\frac{{(\delta _{y}u_{M,n-\frac{1} {2} })}^{2} + {(\delta _{y}u_{M,n+\frac{1} {2} })}^{2}} {2} \right ] + \frac{1} {2}c_{3}\|{u\|}^{2} \\ & \leq & -h_{1}h_{2}\sum _{m=1}^{M -1}\sum _{ n=1}^{N -1}\left [(a_{ 22})_{mn}\frac{{(\delta _{y}u_{m,n-\frac{1} {2} })}^{2} + {(\delta _{y}u_{m,n+\frac{1} {2} })}^{2}} {2} \right ] + \frac{1} {2}c_{3}\|{u\|}^{2}. \\ & & {}\end{array}$$
(7.50)

Proof. Because \((a_{11})_{0n} = (a_{11})_{Mn} = 0\) for n = 0, 1, ⋯ , N, some terms in the inner product are zero. Thus, the expression of \(\left (a_{11}\delta _{x}^{2}u,u\right )\) is

$$\displaystyle\begin{array}{rcl} B_{1}& =& \left (a_{11}\delta _{x}^{2}u,u\right ) = h_{ 1}h_{2}\left [\sum _{m=1}^{M -1}\sum _{ n=1}^{N -1}(a_{ 11})_{mn}\;\delta _{x}^{2}u_{ mn}\;u_{mn}\right . \\ & & \left .+\frac{1} {2}\sum _{m=1}^{M -1}(a_{ 11})_{m0}\;\delta _{x}^{2}u_{ m0}\;u_{m0} + \frac{1} {2}\sum _{m=1}^{M -1}(a_{ 11})_{mN}\;\delta _{x}^{2}u_{ mN}\;u_{mN}\right ]. \\ & & {}\end{array}$$
(7.51)

Averaging the following two equalities:

$$\displaystyle\begin{array}{rcl} & & \quad h_{1}\sum _{m=1}^{M -1}(a_{ 11})_{mn}\;\delta _{x}^{2}u_{ mn}\;u_{mn} {}\\ & & =\sum _{ m=1}^{M -1}(a_{ 11})_{mn}(\delta _{x}u_{m+\frac{1} {2} ,n} -\delta _{x}u_{m-\frac{1} {2} ,n})u_{mn} {}\\ & & =\sum _{ m=1}^{M -1}(a_{ 11})_{mn}\;\delta _{x}u_{m+\frac{1} {2} ,n}\;u_{mn} -\sum _{m=0}^{M -2}(a_{ 11})_{m+1,n}\;\delta _{x}u_{m+\frac{1} {2} ,n}\;u_{m+1,n} {}\\ & & =\sum _{ m=0}^{M -1}(a_{ 11})_{mn}\;\delta _{x}u_{m+\frac{1} {2} ,n}\;u_{mn} -\sum _{m=0}^{M -1}(a_{ 11})_{m+1,n}\;\delta _{x}u_{m+\frac{1} {2} ,n}\;u_{m+1,n} {}\\ & & =\sum _{ m=0}^{M -1}(a_{ 11})_{mn}\;\delta _{x}u_{m+\frac{1} {2} ,n}\;(u_{mn} - u_{m+1,n}) {}\\ & & \quad +\sum _{ m=0}^{M -1}[(a_{ 11})_{mn} - (a_{11})_{m+1,n}]\;\delta _{x}u_{m+\frac{1} {2} ,n}\;u_{m+1,n} {}\\ & & = -h_{1}\sum _{m=1}^{M -1}(a_{ 11})_{mn}{(\delta _{x}u_{m+\frac{1} {2} ,n})}^{2} - h_{ 1}\sum _{m=0}^{M -1}(\delta _{ x}a_{11})_{m+\frac{1} {2} ,n}\;\delta _{x}u_{m+\frac{1} {2} ,n}\;u_{m+1,n}{}\\ \end{array}$$

and

$$\displaystyle\begin{array}{rcl} & & \quad h_{1}\sum _{m=1}^{M -1}(a_{ 11})_{mn}\;\delta _{x}^{2}u_{ mn}\;u_{mn} {}\\ & & =\sum _{ m=1}^{M -1}(a_{ 11})_{mn}(\delta _{x}u_{m+\frac{1} {2} ,n} -\delta _{x}u_{m-\frac{1} {2} ,n})u_{mn} {}\\ & & =\sum _{ m=2}^{M }(a_{11})_{m-1,n}\;\delta _{x}u_{m-\frac{1} {2} ,n}\;u_{m-1,n} -\sum _{m=1}^{M -1}(a_{ 11})_{mn}\;\delta _{x}u_{m-\frac{1} {2} ,n}\;u_{mn} {}\\ & & =\sum _{ m=1}^{M }(a_{11})_{m-1,n}\;\delta _{x}u_{m-\frac{1} {2} ,n}\;u_{m-1,n} -\sum _{m=1}^{M }(a_{11})_{mn}\;\delta _{x}u_{m-\frac{1} {2} ,n}\;u_{mn} {}\\ & & =\sum _{ m=1}^{M }(a_{11})_{mn}\;\delta _{x}u_{m-\frac{1} {2} ,n}\;(u_{m-1,n} - u_{mn}) {}\\ & & \quad +\sum _{ m=1}^{M }[(a_{11})_{m-1,n} - (a_{11})_{mn}]\;\delta _{x}u_{m-\frac{1} {2} ,n}\;u_{m-1,n} {}\\ & & = -h_{1}\sum _{m=1}^{M -1}(a_{ 11})_{mn}{(\delta _{x}u_{m-\frac{1} {2} ,n})}^{2} - h_{ 1}\sum _{m=0}^{M -1}(\delta _{ x}a_{11})_{m+\frac{1} {2} ,n}\;\delta _{x}u_{m+\frac{1} {2} ,n}\;u_{mn}, {}\\ \end{array}$$

we have

$$\displaystyle\begin{array}{rcl} & & \quad h_{1}\sum _{m=1}^{M -1}(a_{ 11})_{mn}\;\delta _{x}^{2}u_{ mn}\;u_{mn} {}\\ & & = -h_{1}\sum _{m=1}^{M -1}(a_{ 11})_{mn}\frac{{(\delta _{x}u_{m-\frac{1} {2} ,n})}^{2} + {(\delta _{x}u_{m+\frac{1} {2} ,n})}^{2}} {2} {}\\ & & \quad - h_{1}\sum _{m=0}^{M -1}(\delta _{ x}a_{11})_{m+\frac{1} {2} ,n}\;\delta _{x}u_{m+\frac{1} {2} ,n}\;u_{m+\frac{1} {2} ,n} {}\\ & & = -h_{1}\sum _{m=1}^{M -1}(a_{ 11})_{mn}\frac{{(\delta _{x}u_{m-\frac{1} {2} ,n})}^{2} + {(\delta _{x}u_{m+\frac{1} {2} ,n})}^{2}} {2} {}\\ & & \quad -\frac{1} {2}\sum _{m=0}^{M -1}(\delta _{ x}a_{11})_{m+\frac{1} {2} ,n}\left (u_{m+1,n}^{2} - u_{ m,n}^{2}\right ) {}\\ & & = -h_{1}\sum _{m=1}^{M -1}(a_{ 11})_{mn}\frac{{(\delta _{x}u_{m-\frac{1} {2} ,n})}^{2} + {(\delta _{x}u_{m+\frac{1} {2} ,n})}^{2}} {2} {}\\ & & \quad + \frac{1} {2}\Big[\sum _{m=1}^{M -1}\left ((\delta _{ x}a_{11})_{m+\frac{1} {2} ,n} - (\delta _{x}a_{11})_{m-\frac{1} {2} ,n}\right )u_{mn}^{2} {}\\ & & +(\delta _{x}a_{11})_{\frac{1} {2} ,n}u_{0n}^{2} - (\delta _{ x}a_{11})_{M-\frac{1} {2} ,n}u_{Mn}^{2}\Big] {}\\ & & \leq -h_{1}\sum _{m=1}^{M -1}(a_{ 11})_{mn}\frac{{(\delta _{x}u_{m-\frac{1} {2} ,n})}^{2} + {(\delta _{x}u_{m+\frac{1} {2} ,n})}^{2}} {2} {}\\ & & \quad + \frac{1} {2}c_{1}h_{1}\left (\frac{1} {2}u_{0n}^{2} +\sum _{ m=1}^{M -1}u_{ mn}^{2} + \frac{1} {2}u_{Mn}^{2}\right ). {}\\ \end{array}$$

Here we have used the relations

$$\displaystyle{\left \vert (\delta _{x}a_{11})_{m+\frac{1} {2} ,n} - (\delta _{x}a_{11})_{m-\frac{1} {2} ,n}\right \vert \leq c_{1}h_{1},}$$
$$\displaystyle{\vert (\delta _{x}a_{11})_{\frac{1} {2} ,n}\vert \leq \frac{1} {2}c_{1}h_{1},\quad \vert (\delta _{x}a_{11})_{M-\frac{1} {2} ,n}\vert \leq \frac{1} {2}c_{1}h_{1},}$$

which hold because of

$$\displaystyle{c_{1} =\max _{(x,y,\tau )\in \Omega \times [0,T]}\left \vert \frac{{\partial }^{2}a_{11}(x,y,\tau )} {\partial {x}^{2}} \right \vert \quad \mbox{ and}\quad \frac{\partial a_{11}(x,y,\tau )} {\partial x} \Big\vert _{x=x_{l}\mbox{ or }x_{u}} = 0.}$$

Inserting the above estimate into the equality (7.51), we obtain the inequality (7.49).

The proof of the second inequality in Proposition 7.1 is almost the same as that of the first one, so the concrete proof is omitted here. □

Proposition 7.2

$$\displaystyle\begin{array}{rcl} & & \quad B_{2} \equiv \left (a_{12}\Delta _{x}\Delta _{y}u,u\right ) \\ & & \leq -\frac{1} {4}h_{1}h_{2}\sum _{m=1}^{M -1}\sum _{ n=1}^{N -1}(a_{ 12})_{mn}\left [\delta _{x}u_{m+\frac{1} {2} ,n}\;\delta _{y}u_{m,n-\frac{1} {2} } +\delta _{x}u_{m-\frac{1} {2} ,n}\;\delta _{y}u_{m,n-\frac{1} {2} }\right . \\ & & \quad \left .+\delta _{x}u_{m+\frac{1} {2} ,n}\;\delta _{y}u_{m,n+\frac{1} {2} } +\delta _{x}u_{m-\frac{1} {2} ,n}\;\delta _{y}u_{m,n+\frac{1} {2} }\right ] + \frac{1} {2}c_{2}\|{u\|}^{2}. {}\end{array}$$
(7.52)

Proof. Because a 12 = 0 on all the boundary points, the expression of \(\left (a_{12}\Delta _{x}\Delta _{y}u,u\right )\) can be written as follows:

$$\displaystyle\begin{array}{rcl} & &B_{2} = h_{1}h_{2}\sum _{m=1}^{M -1}\sum _{ n=1}^{N -1}(a_{ 12})_{mn}(\Delta _{x}\Delta _{y}u)_{mn}u_{mn} \\ & & = \frac{1} {4}h_{1}h_{2}\sum _{m=1}^{M -1}\sum _{ n=1}^{N -1}(a_{ 12})_{mn}\left (\delta _{x}\delta _{y}u_{m-\frac{1} {2} ,n-\frac{1} {2} } +\delta _{x}\delta _{y}u_{m+\frac{1} {2} ,n-\frac{1} {2} }\right . \\ & & \qquad \qquad \qquad \qquad \qquad \qquad \left .+\delta _{x}\delta _{y}u_{m-\frac{1} {2} ,n+\frac{1} {2} } +\delta _{x}\delta _{y}u_{m+\frac{1} {2} ,n+\frac{1} {2} }\right )u_{mn} \\ & & = \frac{1} {4}\Big[h_{1}h_{2}\sum _{m=1}^{M -1}\sum _{ n=1}^{N -1}(a_{ 12})_{mn}\;\delta _{x}\delta _{y}u_{m-\frac{1} {2} ,n-\frac{1} {2} }\;u_{mn} \\ & & \quad + h_{1}h_{2}\sum _{m=1}^{M -1}\sum _{ n=1}^{N -1}(a_{ 12})_{mn}\;\delta _{x}\delta _{y}u_{m+\frac{1} {2} ,n-\frac{1} {2} }\;u_{mn} \\ & & \quad + h_{1}h_{2}\sum _{m=1}^{M -1}\sum _{ n=1}^{N -1}(a_{ 12})_{mn}\;\delta _{x}\delta _{y}u_{m-\frac{1} {2} ,n+\frac{1} {2} }\;u_{mn} \\ & & \quad + h_{1}h_{2}\sum _{m=1}^{M -1}\sum _{ n=1}^{N -1}(a_{ 12})_{mn}\;\delta _{x}\delta _{y}u_{m+\frac{1} {2} ,n+\frac{1} {2} }\;u_{mn}\Big] \\ & & \equiv \frac{1} {4}(B_{21} + B_{22} + B_{23} + B_{24}). {}\end{array}$$
(7.53)

For B 21, we have

$$\displaystyle\begin{array}{rcl} B_{21}& =& h_{1}h_{2}\sum _{m=1}^{M -1}\sum _{ n=1}^{N -1}(a_{ 12})_{mn}\;\delta _{x}\delta _{y}u_{m-\frac{1} {2} ,n-\frac{1} {2} }\;u_{mn} {}\\ & =& h_{2}\sum _{n=1}^{N -1}\sum _{ m=1}^{M -1}(a_{ 12})_{mn}(\delta _{y}u_{m,n-\frac{1} {2} } -\delta _{y}u_{m-1,n-\frac{1} {2} })u_{mn} {}\\ & =& h_{2}\sum _{n=1}^{N -1}\left [\sum _{ m=1}^{M -1}(a_{ 12})_{mn}\;\delta _{y}u_{m,n-\frac{1} {2} }\;u_{mn}-\sum _{m=0}^{M -2}(a_{ 12})_{m+1,n}\;\delta _{y}u_{m,n-\frac{1} {2} }\;u_{m+1,n}\right ] {}\\ & =& h_{2}\sum _{n=1}^{N -1}\left [\sum _{ m=0}^{M -1}(a_{ 12})_{mn}\;\delta _{y}u_{m,n-\frac{1} {2} }\;u_{mn}-\sum _{m=0}^{M -1}(a_{ 12})_{m+1,n}\;\delta _{y}u_{m,n-\frac{1} {2} }\;u_{m+1,n}\right ] {}\\ & & {}\\ \end{array}$$
$$\displaystyle\begin{array}{rcl} & =& h_{2}\sum _{n=1}^{N -1}\Big[\sum _{ m=0}^{M -1}(a_{ 12})_{mn}\;\delta _{y}u_{m,n-\frac{1} {2} }\;(u_{mn} - u_{m+1,n}) \\ & & +\sum _{m=0}^{M -1}\left [(a_{ 12})_{mn} - (a_{12})_{m+1,n}\right ]\;\delta _{y}u_{m,n-\frac{1} {2} }\;u_{m+1,n}\Big] \\ & =& -h_{1}h_{2}\sum _{m=1}^{M -1}\sum _{ n=1}^{N -1}(a_{ 12})_{mn}\;\delta _{x}u_{m+\frac{1} {2} ,n}\;\delta _{y}u_{m,n-\frac{1} {2} } \\ & & -h_{1}h_{2}\sum _{m=0}^{M -1}\sum _{ n=1}^{N -1}\left (\delta _{ x}a_{12}\right )_{m+\frac{1} {2} ,n}\;\delta _{y}u_{m,n-\frac{1} {2} }\;u_{m+1,n}\;.{}\end{array}$$
(7.54)

For B 22, we have

$$\displaystyle\begin{array}{rcl} B_{22}& =& h_{1}h_{2}\sum _{m=1}^{M -1}\sum _{ n=1}^{N -1}(a_{ 12})_{mn}\;\delta _{x}\delta _{y}u_{m+\frac{1} {2} ,n-\frac{1} {2} }\;u_{mn} {}\\ & =& h_{2}\sum _{n=1}^{N -1}\sum _{ m=1}^{M -1}(a_{ 12})_{mn}(\delta _{y}u_{m+1,n-\frac{1} {2} } -\delta _{y}u_{m,n-\frac{1} {2} })u_{mn} {}\\ & =& h_{2}\sum _{n=1}^{N -1}\left [\sum _{ m=0}^{M -1}(a_{ 12})_{mn}\;\delta _{y}u_{m+1,n-\frac{1} {2} }\;u_{mn}\right . {}\\ & & \left .-\sum _{m=1}^{M }(a_{12})_{mn}\;\delta _{y}u_{m,n-\frac{1} {2} }\;u_{m,n}\right ] {}\\ & =& h_{2}\sum _{n=1}^{N -1}\left [\sum _{ m=1}^{M }\left [(a_{12})_{m-1,n} - (a_{12})_{m,n}\right ]\;\delta _{y}u_{m,n-\frac{1} {2} }\;u_{m-1,n}\right . {}\\ & & \left .-\sum _{m=1}^{M -1}(a_{ 12})_{mn}\;\delta _{y}u_{m,n-\frac{1} {2} }\;(u_{m,n} - u_{m-1,n})\right ] {}\\ & =& h_{2}\sum _{n=1}^{N -1}\left [-h_{ 1}\sum _{m=1}^{M }\left (\delta _{x}a_{12}\right )_{m-\frac{1} {2} ,n}\;\delta _{y}u_{m,n-\frac{1} {2} }\;u_{m-1,n}\right . {}\\ & & \left .-h_{1}\sum _{m=1}^{M -1}(a_{ 12})_{mn}\;\delta _{x}u_{m-\frac{1} {2} ,n}\;\delta _{y}u_{m,n-\frac{1} {2} }\;\right ] {}\\ & =& -h_{1}h_{2}\sum _{n=1}^{N -1}\left [\sum _{ m=1}^{M -1}(a_{ 12})_{mn}\;\delta _{x}u_{m-\frac{1} {2} ,n}\;\delta _{y}u_{m,n-\frac{1} {2} }\;\right . {}\\ & & \left .+\sum _{m=0}^{M -1}\left (\delta _{ x}a_{12}\right )_{m+\frac{1} {2} ,n}\;\delta _{y}u_{m+1,n-\frac{1} {2} }\;u_{m,n}\right ] {}\\ & & {}\\ \end{array}$$
$$\displaystyle\begin{array}{rcl} & =& -h_{1}h_{2}\sum _{m=1}^{M -1}\sum _{ n=1}^{N -1}(a_{ 12})_{mn}\;\delta _{x}u_{m-\frac{1} {2} ,n}\;\delta _{y}u_{m,n-\frac{1} {2} } \\ & & -h_{1}h_{2}\sum _{m=0}^{M -1}\sum _{ n=1}^{N -1}\left (\delta _{ x}a_{12}\right )_{m+\frac{1} {2} ,n}\;\delta _{y}u_{m+1,n-\frac{1} {2} }\;u_{m,n}\;.{}\end{array}$$
(7.55)

We can see that in deriving the equalities (7.54) and (7.55), the subscripts n and \(n -\frac{1} {2}\) are left unchanged. Thus, from the equalities (7.54) and (7.55), for B 23 and B 24 we have

$$\displaystyle\begin{array}{rcl} B_{23}& =& -h_{1}h_{2}\sum _{m=1}^{M -1}\sum _{ n=1}^{N -1}(a_{ 12})_{mn}\;\delta _{x}u_{m+\frac{1} {2} ,n}\;\delta _{y}u_{m,n+\frac{1} {2} }\; \\ & & -h_{1}h_{2}\sum _{m=0}^{M -1}\sum _{ n=1}^{N -1}\left (\delta _{ x}a_{12}\right )_{m+\frac{1} {2} ,n}\;\delta _{y}u_{m,n+\frac{1} {2} }\;u_{m+1,n}\;;{}\end{array}$$
(7.56)
$$\displaystyle\begin{array}{rcl} B_{24}& =& -h_{1}h_{2}\sum _{m=1}^{M -1}\sum _{ n=1}^{N -1}(a_{ 12})_{mn}\;\delta _{x}u_{m-\frac{1} {2} ,n}\;\delta _{y}u_{m,n+\frac{1} {2} }\; \\ & & -h_{1}h_{2}\sum _{m=0}^{M -1}\sum _{ n=1}^{N -1}\left (\delta _{ x}a_{12}\right )_{m+\frac{1} {2} ,n}\;\delta _{y}u_{m+1,n+\frac{1} {2} }\;u_{mn}\;.{}\end{array}$$
(7.57)

Putting together the second terms in the last expressions of \(B_{21},B_{22},B_{23},\) and B 24, i.e., in the expressions (7.54)–(7.57), yields

$$\displaystyle\begin{array}{rcl} & & -h_{1}h_{2}\sum _{m=0}^{M -1}\sum _{ n=1}^{N -1}\left (\delta _{ x}a_{12}\right )_{m+\frac{1} {2} ,n}(\delta _{y}u_{m,n-\frac{1} {2} } +\delta _{y}u_{m,n+\frac{1} {2} })u_{m+1,n} {}\\ & & -h_{1}h_{2}\sum _{m=0}^{M -1}\sum _{ n=1}^{N -1}\left (\delta _{ x}a_{12}\right )_{m+\frac{1} {2} ,n}(\delta _{y}u_{m+1,n-\frac{1} {2} } +\delta _{y}u_{m+1,n+\frac{1} {2} })u_{mn} {}\\ & =& -h_{1}\sum _{m=0}^{M -1}\sum _{ n=1}^{N -1}\left (\delta _{ x}a_{12}\right )_{m+\frac{1} {2} ,n}[(u_{m,n+1} - u_{m,n-1})u_{m+1,n} {}\\ & & \qquad \qquad \qquad \qquad \qquad \qquad + (u_{m+1,n+1} - u_{m+1,n-1})u_{mn}] {}\\ & =& -h_{1}\sum _{m=0}^{M -1}\sum _{ n=1}^{N -1}\left (\delta _{ x}a_{12}\right )_{m+\frac{1} {2} ,n}\left (u_{m+1,n+1}u_{mn} + u_{m,n+1}u_{m+1,n}\right . {}\\ & & \qquad \qquad \qquad \qquad \qquad \qquad \left .-u_{m+1,n-1}u_{mn} - u_{m,n-1}u_{m+1,n}\right ) {}\\ & =& -h_{1}\sum _{m=0}^{M -1}\left [\sum _{ n=0}^{N -1}\left (\delta _{ x}a_{12}\right )_{m+\frac{1} {2} ,n}\left (u_{m+1,n+1}u_{mn} + u_{m,n+1}u_{m+1,n}\right )\right . {}\\ & & \qquad \qquad \quad \left .-\sum _{n=0}^{N -1}\left (\delta _{ x}a_{12}\right )_{m+\frac{1} {2} ,n+1}\left (u_{m+1,n}u_{m,n+1} + u_{mn}u_{m+1,n+1}\right )\right ] {}\\ & & {}\\ \end{array}$$
$$\displaystyle\begin{array}{rcl} & =& h_{1}h_{2}\sum _{m=0}^{M -1}\sum _{ n=0}^{N -1}\left (\delta _{ y}\delta _{x}a_{12}\right )_{m+\frac{1} {2} ,n+\frac{1} {2} }\left (u_{m+1,n+1}u_{mn} + u_{m,n+1}u_{m+1,n}\right ) \\ & \leq & \frac{1} {2}c_{2}h_{1}h_{2}\sum _{m=0}^{M -1}\sum _{ n=0}^{N -1}\left (u_{ m+1,n+1}^{2} + u_{ mn}^{2} + u_{ m,n+1}^{2} + u_{ m+1,n}^{2}\right ) \\ & =& 2c_{2}\|{u\|}^{2}. {}\end{array}$$
(7.58)

Here we have used \(\left (\delta _{x}a_{12}\right )_{m+\frac{1} {2} ,0} = \left (\delta _{x}a_{12}\right )_{m+\frac{1} {2} ,N} = 0\), which follows from the relation (7.38), and the second form (7.42) of the definition of the inner product.

Thus, inserting the equalities (7.54)–(7.57) into the expression (7.53) and using the inequality (7.58), we get

$$\displaystyle\begin{array}{rcl} B_{2}& =& -\frac{1} {4}h_{1}h_{2}\sum _{m=1}^{M -1}\sum _{ n=1}^{N -1}(a_{ 12})_{mn}\left (\delta _{x}u_{m+\frac{1} {2} ,n}\;\delta _{y}u_{m,n-\frac{1} {2} } +\delta _{x}u_{m-\frac{1} {2} ,n}\;\delta _{y}u_{m,n-\frac{1} {2} }\right . {}\\ & & \left .+\delta _{x}u_{m+\frac{1} {2} ,n}\;\delta _{y}u_{m,n+\frac{1} {2} } +\delta _{x}u_{m-\frac{1} {2} ,n}\;\delta _{y}u_{m,n+\frac{1} {2} }\right ) {}\\ & & -\frac{1} {4}h_{1}h_{2}\sum _{m=0}^{M -1}\sum _{ n=1}^{N -1}\left (\delta _{ x}a_{12}\right )_{m+\frac{1} {2} ,n}(\delta _{y}u_{m,n-\frac{1} {2} } +\delta _{y}u_{m,n+\frac{1} {2} })u_{m+1,n} {}\\ & & -\frac{1} {4}h_{1}h_{2}\sum _{m=0}^{M -1}\sum _{ n=1}^{N -1}\left (\delta _{ x}a_{12}\right )_{m+\frac{1} {2} ,n}(\delta _{y}u_{m+1,n-\frac{1} {2} } +\delta _{y}u_{m+1,n+\frac{1} {2} })u_{mn} {}\\ & \leq & -\frac{1} {4}h_{1}h_{2}\sum _{m=1}^{M -1}\sum _{ n=1}^{N -1}(a_{ 12})_{mn}\left (\delta _{x}u_{m+\frac{1} {2} ,n}\;\delta _{y}u_{m,n-\frac{1} {2} } +\delta _{x}u_{m-\frac{1} {2} ,n}\;\delta _{y}u_{m,n-\frac{1} {2} }\right . {}\\ & & \left .+\delta _{x}u_{m+\frac{1} {2} ,n}\;\delta _{y}u_{m,n+\frac{1} {2} } +\delta _{x}u_{m-\frac{1} {2} ,n}\;\delta _{y}u_{m,n+\frac{1} {2} }\right ) + \frac{1} {2}c_{2}\|{u\|}^{2}.\quad {}\\ \end{array}$$

Proposition 7.3

For \(\left (b_{1}\Delta _{x}u,u\right )\) and \(\left (b_{2}\Delta _{y}u,u\right )\) , we have

$$\displaystyle\begin{array}{rcl} B_{4} \equiv \left (b_{1}\Delta _{x}u,u\right ) \leq \frac{1} {2}c_{4}\|{u\|}^{2}& &{}\end{array}$$
(7.59)

and

$$\displaystyle\begin{array}{rcl} B_{5} \equiv \left (b_{2}\Delta _{y}u,u\right ) \leq \frac{1} {2}c_{5}\|{u\|}^{2}.& &{}\end{array}$$
(7.60)

Proof. Because \((b_{1})_{0,n} = (b_{1})_{M,n} = 0\) for n = 0, 1, ⋯ , N, the concrete expression for \(\left (b_{1}\Delta _{x}u,u\right )\) is

$$\displaystyle\begin{array}{rcl} B_{4}& =& h_{1}h_{2}\left [\sum _{m=1}^{M -1}\sum _{ n=1}^{N -1}(b_{ 1})_{mn}\;\Delta _{x}u_{mn}\;u_{mn} + \frac{1} {2}\sum _{m=1}^{M -1}(b_{ 1})_{m0}\;\Delta _{x}u_{m0}\;u_{m0}\right . {}\\ & & \left .+\frac{1} {2}\sum _{m=1}^{M -1}(b_{ 1})_{mN}\;\Delta _{x}u_{mN}\;u_{mN}\right ]. {}\\ \end{array}$$

For any n, we have

$$\displaystyle\begin{array}{rcl} & & \quad h_{1}\sum _{m=1}^{M -1}(b_{ 1})_{mn}\;\Delta _{x}u_{mn}\;u_{mn} {}\\ & & = \frac{1} {2}\sum _{m=1}^{M -1}(b_{ 1})_{mn}(u_{m+1,n} - u_{m-1,n})u_{mn} {}\\ & & = \frac{1} {2}\left (\sum _{m=1}^{M -1}(b_{ 1})_{mn}u_{mn}u_{m+1,n} -\sum _{m=0}^{M -2}(b_{ 1})_{m+1,n}u_{mn}u_{m+1,n}\right ) {}\\ & & = -\frac{1} {2}h_{1}\sum _{m=0}^{M -1}\left (\delta _{ x}b_{1}\right )_{m+\frac{1} {2} ,n}u_{mn}u_{m+1,n} {}\\ & & \leq \frac{1} {2}c_{4}h_{1}\Big(\frac{1} {2}\;u_{0n}^{2} +\sum _{ m=1}^{M -1}u_{ mn}^{2} + \frac{1} {2}\;u_{Mn}^{2}\Big). {}\\ \end{array}$$

Adding them together yields

$$\displaystyle{B_{4} \leq \frac{1} {2}c_{4}\|{u\|}^{2}.}$$

It is easy to see that by changing x to y and m to n in the derivation above, we can prove the second inequality in Proposition 7.3. Thus, we have proved the conclusion we need. □

Proposition 7.4

$$\displaystyle\begin{array}{rcl} B_{6} \equiv \left (cu,u\right ) \leq c_{6}\|{u\|}^{2}.& &{}\end{array}$$
(7.61)

Proof. Since \(\vert c_{mn}^{k}\vert \leq c_{6},\) it is easy to see the validity of the inequality (7.61). □

The proof of Lemma 7.1. Based on these inequalities and noticing that the matrix

$$\displaystyle{\left (\begin{array}{cc} a_{11}(x,y,\tau )&a_{12}(x,y,\tau ) \\ a_{12}(x,y,\tau )&a_{22}(x,y,\tau ) \end{array} \right )}$$

is semi-positive, we can prove the lemma immediately. Adding the relations (7.49), (7.52), (7.50), (7.59), (7.60), and (7.61), then using the inequality (7.37), we get

$$\displaystyle\begin{array}{rcl} & & B_{1} + 2B_{2} + B_{3} + B_{4} + B_{5} + B_{6} {}\\ & \leq & \frac{1} {2}(c_{1} + 2c_{2} + c_{3} + c_{4} + c_{5} + 2c_{6})\|{u\|}^{2} {}\\ & & -\frac{1} {4}h_{1}h_{2}\sum _{m=1}^{M -1}\sum _{ n=1}^{N -1}\Big\{(a_{ 11})_{mn}\Big[2{(\delta _{x}u_{m-\frac{1} {2} ,n})}^{2} + 2{(\delta _{ x}u_{m+\frac{1} {2} ,n})}^{2}\Big] {}\\ \end{array}$$
$$\displaystyle\begin{array}{rcl} & & +2(a_{12})_{mn}\Big[\delta _{x}u_{m+\frac{1} {2} ,n}\;\delta _{y}u_{m,n-\frac{1} {2} } +\delta _{x}u_{m-\frac{1} {2} ,n}\;\delta _{y}u_{m,n-\frac{1} {2} } {}\\ & & +\delta _{x}u_{m+\frac{1} {2} ,n}\;\delta _{y}u_{m,n+\frac{1} {2} } +\delta _{x}u_{m-\frac{1} {2} ,n}\;\delta _{y}u_{m,n+\frac{1} {2} }\Big] {}\\ & & +(a_{22})_{mn}\Big[2{(\delta _{y}u_{m,n-\frac{1} {2} })}^{2} + 2{(\delta _{ y}u_{m,n+\frac{1} {2} })}^{2}\Big]\Big\} {}\\ \end{array}$$
$$\displaystyle\begin{array}{rcl} & =& \frac{c} {2}\|{u\|}^{2} -\frac{1} {4}h_{1}h_{2}\sum _{m=1}^{M -1}\sum _{ n=1}^{N -1}\bigg\{ {}\\ & & \Big[(a_{11})_{mn}{(\delta _{x}u_{m+\frac{1} {2} ,n})}^{2} + 2(a_{ 12})_{mn}\;\delta _{x}u_{m+\frac{1} {2} ,n}\;\delta _{y}u_{m,n-\frac{1} {2} } {}\\ & & \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad+(a_{22})_{mn}{(\delta _{y}u_{m,n-\frac{1} {2} })}^{2}\Big] {}\\ & & +\Big[(a_{11})_{mn}{(\delta _{x}u_{m-\frac{1} {2} ,n})}^{2} + 2(a_{ 12})_{mn}\;\delta _{x}u_{m-\frac{1} {2} ,n}\;\delta _{y}u_{m,n-\frac{1} {2} } {}\\ & & \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad+(a_{22})_{mn}{(\delta _{y}u_{m,n-\frac{1} {2} })}^{2}\Big] {}\\ & & +\Big[(a_{11})_{mn}{(\delta _{x}u_{m+\frac{1} {2} ,n})}^{2} + 2(a_{ 12})_{mn}\;\delta _{x}u_{m+\frac{1} {2} ,n}\;\delta _{y}u_{m,n+\frac{1} {2} } {}\\ & &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad +(a_{22})_{mn}{(\delta _{y}u_{m,n+\frac{1} {2} })}^{2}\Big] {}\\ & & +\Big[(a_{11})_{mn}{(\delta _{x}u_{m-\frac{1} {2} ,n})}^{2} + 2(a_{ 12})_{mn}\;\delta _{x}u_{m-\frac{1} {2} ,n}\;\delta _{y}u_{m,n+\frac{1} {2} } {}\\ & & \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad+(a_{22})_{mn}{(\delta _{y}u_{m,n+\frac{1} {2} })}^{2}\Big]\bigg\} {}\\ & \leq & \frac{c} {2}\|{u\|}^{2}. {}\\ \end{array}$$

This completes the proof of Lemma 7.1. □ 

4.3  ‡ Solvability and Stability

In this subsection, we will prove the solvability and stability of the two-dimensional finite-difference scheme (7.46)–(7.47).

Theorem 7.1

If Δτ < 1∕c, then the difference scheme  (7.46) (7.47) is uniquely solvable.

Proof. Suppose \(\{u_{mn}^{k}\;\vert \;0 \leq m \leq M,0 \leq n \leq N\}\) has been determined. Then the difference scheme (7.46) is a linear system about \(\{u_{mn}^{k+1}\;\vert \;0 \leq m \leq M,0 \leq n \leq N\}.\) Consider its homogeneous system

$$\displaystyle\begin{array}{rcl} \frac{1} {\Delta \tau }u_{mn}^{k+1}& =& \frac{1} {2}(a_{11})_{mn}^{k+\frac{1} {2} }\delta _{x}^{2}u_{mn}^{k+1} + (a_{12})_{mn}^{k+\frac{1} {2} }\Delta _{x}\Delta _{y}u_{mn}^{k+1} + \frac{1} {2}(a_{22})_{mn}^{k+\frac{1} {2} }\delta _{y}^{2}u_{mn}^{k+1} \\ & & +\frac{1} {2}(b_{1})_{mn}^{k+\frac{1} {2} }\Delta _{x}u_{mn}^{k+1} + \frac{1} {2}(b_{2})_{mn}^{k+\frac{1} {2} }\Delta _{y}u_{mn}^{k+1} + \frac{1} {2}c_{mn}^{k+\frac{1} {2} }u_{mn}^{k+1}, \\ & & \qquad 0 \leq m \leq M,\quad 0 \leq n \leq N. {}\end{array}$$
(7.62)

Taking the inner product of equality (7.62) with \(2{u}^{k+1}\) and using Lemma 7.1, we have

$$\displaystyle\begin{array}{rcl} \frac{2} {\Delta \tau }\|{u{}^{k+1}\|}^{2}& & = \left ({(a_{ 11})}^{k+\frac{1} {2} }\delta _{x}^{2}{u}^{k+1},{u}^{k+1}\right ) + 2\left ({(a_{12})}^{k+\frac{1} {2} }\Delta _{x}\Delta _{y}{u}^{k+1},{u}^{k+1}\right ) {}\\ & & + \left ({(a_{22})}^{k+\frac{1} {2} }\delta _{y}^{2}{u}^{k+1},{u}^{k+1}\right ) + \left ({(b_{1})}^{k+\frac{1} {2} }\Delta _{x}{u}^{k+1},{u}^{k+1}\right ) {}\\ &&+ \left ({(b_{2})}^{k+\frac{1} {2} }\Delta _{y}{u}^{k+1},{u}^{k+1}\right ) + \left ({c}^{k+\frac{1} {2} }{u}^{k+1},{u}^{k+1}\right ) \\ & & \leq \frac{c} {2}\|{u{}^{k+1}\|}^{2}.\end{array}$$
(7.63)

If Δτ < 1 ∕ c, then 2 ∕ Δτ > c ∕ 2, and the inequality (7.63) implies \(\|{u}^{k+1}\| = 0.\) Thus, the homogeneous system (7.62) has only the zero solution, so the difference scheme (7.46) determines \(\{u_{mn}^{k+1}\}\) uniquely. This completes the proof. □

Theorem 7.2

If Δτ ≤ 2∕[3(1 + c)], then the solution to the difference scheme  (7.46) (7.47) satisfies

$$\displaystyle\begin{array}{rcl} \|{u{}^{k+1}\|}^{2} \leq \mathrm{ {e}}^{3(c+1)T/2}\Big(\|{u{}^{0}\|}^{2} + \frac{3} {2}\Delta \tau \sum _{l=0}^{k}\|{g{}^{l+\frac{1} {2} }\|}^{2}\Big),\quad 0 \leq k \leq K - 1.\quad & &{}\end{array}$$
(7.64)

Proof. Taking the inner product of Eq. (7.46) with \({u}^{k+\frac{1} {2} }\) and using Lemma 7.1, we have

$$\displaystyle\begin{array}{rcl} & & \quad \frac{1} {2\Delta \tau }\left (\|{u{}^{k+1}\|}^{2} -\| {u{}^{k}\|}^{2}\right ) {}\\ & & = \left ({(a_{11})}^{k+\frac{1} {2} }\delta _{x}^{2}{u}^{k+\frac{1} {2} },{u}^{k+\frac{1} {2} }\right ) + 2\left ({(a_{12})}^{k+\frac{1} {2} }\Delta _{x}\Delta _{y}{u}^{k+\frac{1} {2} },{u}^{k+\frac{1} {2} }\right ) {}\\ & & \quad + \left ({(a_{22})}^{k+\frac{1} {2} }\delta _{y}^{2}{u}^{k+\frac{1} {2} },{u}^{k+\frac{1} {2} }\right ) + \left ({(b_{1})}^{k+\frac{1} {2} }\Delta _{x}{u}^{k+\frac{1} {2} },{u}^{k+\frac{1} {2} }\right ) {}\\ & & \quad + \left ({(b_{2})}^{k+\frac{1} {2} }\Delta _{y}{u}^{k+\frac{1} {2} },{u}^{k+\frac{1} {2} }\right ) + \left ({c}^{k+\frac{1} {2} }{u}^{k+\frac{1} {2} },{u}^{k+\frac{1} {2} }\right ) + \left ({g}^{k+\frac{1} {2} },{u}^{k+\frac{1} {2} }\right ) {}\\ & & \leq \frac{c} {2}\|{u{}^{k+\frac{1} {2} }\|}^{2} + \frac{1} {2}\|{g{}^{k+\frac{1} {2} }\|}^{2} + \frac{1} {2}\|{u{}^{k+\frac{1} {2} }\|}^{2},\quad 0 \leq k \leq K - 1, {}\\ \end{array}$$

from which we further obtain

$$\displaystyle\begin{array}{rcl} \|{u{}^{k+1}\|}^{2}& \leq & \|{u{}^{k}\|}^{2} + (1 + c)\Delta \tau \|{u{}^{k+\frac{1} {2} }\|}^{2} + \Delta \tau \|{g{}^{k+\frac{1} {2} }\|}^{2} {}\\ & \leq & \|{u{}^{k}\|}^{2} + \frac{1 + c} {2} \Delta \tau \Big(\|{u{}^{k}\|}^{2} +\| {u{}^{k+1}\|}^{2}\Big) + \Delta \tau \|{g{}^{k+\frac{1} {2} }\|}^{2}, {}\\ & & 0 \leq k \leq K - 1. {}\\ \end{array}$$

If \(1 -{ 1 + c \over 2} \Delta \tau > 0\), then the inequality can be rewritten as

$$\displaystyle{\|{u{}^{k+1}\|}^{2} \leq { 1 + \frac{1+c} {2} \Delta \tau \over 1 -\frac{1+c} {2} \Delta \tau } \|{u{}^{k}\|}^{2} +{ \Delta \tau \over 1 -\frac{1+c} {2} \Delta \tau } \|{g{}^{k+\frac{1} {2} }\|}^{2}.}$$

It is clear that for \(\bar{C} > 2\), when \(\Delta \tau\) is small enough, we can have \({ 1 + \frac{1+c} {2} \Delta \tau \over 1 -\frac{1+c} {2} \Delta \tau } \leq 1 +\bar{ C}\frac{1+c} {2} \Delta \tau .\) Let us take \(\bar{C} = 3\); then we can easily find that the corresponding condition for Δτ is Δτ ≤ 2 ∕ [3(c + 1)] and that in this case \(1 -\frac{1+c} {2} \Delta \tau \geq \frac{2} {3}\). Thus, when Δτ ≤ 2 ∕ [3(c + 1)], we have

$$\displaystyle{\|{u{}^{k+1}\|}^{2} \leq \Big (1 + \frac{3(c + 1)} {2} \Delta \tau \Big)\|{u{}^{k}\|}^{2} + \frac{3} {2}\Delta \tau \|{g{}^{k+\frac{1} {2} }\|}^{2},\quad 0 \leq k \leq K - 1.}$$
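
Applying this recursion repeatedly, starting from \(\|{u{}^{0}\|}^{2}\), gives

$$\displaystyle{\|{u{}^{k+1}\|}^{2} \leq {\Big(1 + \frac{3(c + 1)} {2} \Delta \tau \Big)}^{k+1}\|{u{}^{0}\|}^{2} + \frac{3} {2}\Delta \tau \sum _{l=0}^{k}{\Big(1 + \frac{3(c + 1)} {2} \Delta \tau \Big)}^{k-l}\|{g{}^{l+\frac{1} {2} }\|}^{2},}$$

and each factor \({\big(1 + \frac{3(c+1)} {2} \Delta \tau \big)}^{j}\) with j ≤ k + 1 is bounded by \(\mathrm{{e}}^{3(c+1)(k+1)\Delta \tau /2} \leq \mathrm{ {e}}^{3(c+1)T/2}\).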

From this discrete Gronwall inequality, we finally arrive at

$$\displaystyle{\|{u{}^{k+1}\|}^{2} \leq \mathrm{ {e}}^{3(c+1)T/2}\Big[\|{u{}^{0}\|}^{2} + \frac{3} {2}\Delta \tau \sum _{l=0}^{k}\|{g{}^{l+\frac{1} {2} }\|}^{2}\Big],\quad 0 \leq k \leq K - 1.}$$

This completes the proof. □

The method used here to prove the stability is usually called the energy method for stability analysis.

4.4  ‡ Convergence

For the convergence of the finite-difference scheme (7.46)–(7.47), we have

Theorem 7.3

Let \(\{U_{mn}^{k}\}\) be the solution of the problem (7.32) (7.33) and \(\{u_{mn}^{k}\}\) be the solution of Eqs.  (7.46) (7.47) . Denote

$$\displaystyle{e_{mn}^{k} = U_{ mn}^{k} - u_{ mn}^{k},\quad 0 \leq m \leq M,\quad 0 \leq n \leq N,\quad 0 \leq k \leq K.}$$

If Δτ ≤ 2∕[3(c + 1)], then we have

$$\displaystyle{\|{e}^{k+1}\| \leq \mathrm{ {e}}^{3(c+1)T/4}\sqrt{\frac{3(x_{u } - x_{l } )(y_{u } - y_{l } )T} {2}} \,c_{0}\left (h_{1}^{2} + h_{ 2}^{2} + {\Delta \tau }^{2}\right ),}$$
$$\displaystyle{\quad \quad \quad 0 \leq k \leq K - 1.}$$

Proof. Subtracting the equalities (7.46) and (7.47) from the equalities (7.43) and (7.45), respectively, we obtain the error equations

$$\displaystyle\begin{array}{rcl} \frac{1} {\Delta \tau }(e_{mn}^{k+1} - e_{ mn}^{k})& =& (a_{ 11})_{mn}^{k+\frac{1} {2} }\delta _{x}^{2}e_{mn}^{k+\frac{1} {2} } + 2(a_{12})_{mn}^{k+\frac{1} {2} }\Delta _{x}\Delta _{y}e_{mn}^{k+\frac{1} {2} } \\ & & +(a_{22})_{mn}^{k+\frac{1} {2} }\delta _{y}^{2}e_{mn}^{k+\frac{1} {2} } + (b_{1})_{mn}^{k+\frac{1} {2} }\Delta _{x}e_{mn}^{k+\frac{1} {2} } \\ & & +(b_{2})_{mn}^{k+\frac{1} {2} }\Delta _{y}e_{mn}^{k+\frac{1} {2} } + c_{mn}^{k+\frac{1} {2} }e_{mn}^{k+\frac{1} {2} } + R_{mn}^{k+\frac{1} {2} }, \\ & &0 \leq m \leq M,\quad 0 \leq n \leq N,\quad 0 \leq k \leq K - 1, {}\end{array}$$
(7.65)
$$\displaystyle\begin{array}{rcl} e_{mn}^{0}& =& 0,\quad 0 \leq m \leq M,\quad 0 \leq n \leq N.{}\end{array}$$
(7.66)

Taking the inner product of the system (7.65) with \({e}^{k+\frac{1} {2} }\) and using Lemma 7.1, we have

$$\displaystyle\begin{array}{rcl} & & \quad \frac{1} {2\Delta \tau }\left (\|{e{}^{k+1}\|}^{2} -\| {e{}^{k}\|}^{2}\right ) {}\\ & & = \left ({(a_{11})}^{k+\frac{1} {2} }\delta _{x}^{2}{e}^{k+\frac{1} {2} },{e}^{k+\frac{1} {2} }\right ) + 2\left ({(a_{12})}^{k+\frac{1} {2} }\Delta _{x}\Delta _{y}{e}^{k+\frac{1} {2} },{e}^{k+\frac{1} {2} }\right ) {}\\ & & \quad + \left ({(a_{22})}^{k+\frac{1} {2} }\delta _{y}^{2}{e}^{k+\frac{1} {2} },{e}^{k+\frac{1} {2} }\right ) + \left ({(b_{1})}^{k+\frac{1} {2} }\Delta _{x}{e}^{k+\frac{1} {2} },{e}^{k+\frac{1} {2} }\right ) {}\\ & & \quad + \left ({(b_{2})}^{k+\frac{1} {2} }\Delta _{y}{e}^{k+\frac{1} {2} },{e}^{k+\frac{1} {2} }\right ) + \left ({c}^{k+\frac{1} {2} }{e}^{k+\frac{1} {2} },{e}^{k+\frac{1} {2} }\right ) + \left ({R}^{k+\frac{1} {2} },{e}^{k+\frac{1} {2} }\right ) {}\\ & & \leq \frac{c} {2}\|{e{}^{k+\frac{1} {2} }\|}^{2} + \frac{1} {2}\|{R{}^{k+\frac{1} {2} }\|}^{2} + \frac{1} {2}\|{e{}^{k+\frac{1} {2} }\|}^{2},\quad 0 \leq k \leq K - 1, {}\\ \end{array}$$

from which we further get

$$\displaystyle\begin{array}{rcl} \|{e{}^{k+1}\|}^{2}& \leq & \|{e{}^{k}\|}^{2} + (1 + c)\Delta \tau \|{e{}^{k+\frac{1} {2} }\|}^{2} + \Delta \tau \|{R{}^{k+\frac{1} {2} }\|}^{2} {}\\ & \leq & \|{e{}^{k}\|}^{2} + \frac{1 + c} {2} \Delta \tau \Big(\|{e{}^{k}\|}^{2} +\| {e{}^{k+1}\|}^{2}\Big) + \Delta \tau \|{R{}^{k+\frac{1} {2} }\|}^{2}, {}\\ & & 0 \leq k \leq K - 1. {}\\ \end{array}$$

Using the condition (7.44), when Δτ ≤ 2 ∕ [3(c + 1)] we can rewrite this inequality as

$$\displaystyle\begin{array}{rcl} \|{e{}^{k+1}\|}^{2}& \leq & \Big(1 + \frac{3(c + 1)} {2} \Delta \tau \Big)\|{e{}^{k}\|}^{2} + \frac{3} {2}\Delta \tau \|{R{}^{k+\frac{1} {2} }\|}^{2} {}\\ & \leq & \Big(1 + \frac{3(c + 1)} {2} \Delta \tau \Big)\|{e{}^{k}\|}^{2} {}\\ & & +\frac{3} {2}(x_{u} - x_{l})(y_{u} - y_{l})c_{0}^{2}\Delta \tau {\Big(h_{1}^{2} + h_{2}^{2} + {\Delta \tau }^{2}\Big)}^{2}, {}\\ & & 0 \leq k \leq K - 1. {}\\ \end{array}$$

The Gronwall inequality gives

$$\displaystyle{\|{e{}^{k+1}\|}^{2} \leq \mathrm{ {e}}^{3(c+1)T/2}\frac{3(x_{u} - x_{l})(y_{u} - y_{l})T} {2} c_{0}^{2}{\Big(h_{1}^{2} + h_{2}^{2} + {\Delta \tau }^{2}\Big)}^{2},\;0 \leq k \leq K - 1,}$$

or

$$\displaystyle{\|{e}^{k+1}\| \leq \mathrm{ {e}}^{3(c+1)T/4}\sqrt{3\frac{(x_{u } - x_{l } )(y_{u } - y_{l } )T} {2}} \,c_{0}\left (h_{1}^{2} + h_{ 2}^{2} + {\Delta \tau }^{2}\right ),}$$
$$\displaystyle{0 \leq k \leq K - 1.}$$

This completes the proof. □

For the solution of the difference scheme (7.46)–(7.47), we can also use the extrapolation technique to improve the accuracy of the numerical solution when the solution is smooth. The idea is the same as that described in Sect. 7.3. Based on the results given in this subsection, some theoretical conclusions on the extrapolation technique can be obtained; for details, see the paper [78] by Sun and Zhu.
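
As a rough illustration of this idea (and not the analysis of [78]), the following minimal Python sketch combines a solution computed with steps \((h_{1},h_{2},\Delta \tau )\) and one computed with all three steps halved; assuming the error has a smooth leading term of the form \(c_{1}h_{1}^{2} + c_{2}h_{2}^{2} + c_{3}{\Delta \tau }^{2}\), the combination \((4u_{\mathrm{fine}} - u_{\mathrm{coarse}})/3\) cancels this term. The array layout and the function name are illustrative assumptions only.

```python
import numpy as np

def richardson_combine(u_coarse, u_fine):
    """Cancel the leading O(h1**2 + h2**2 + dtau**2) error term by extrapolation.

    u_coarse : 2-D array of grid values computed with steps (h1, h2, dtau)
    u_fine   : 2-D array computed with steps (h1/2, h2/2, dtau/2), so that the
               coarse nodes coincide with every other fine node in each direction
    """
    u_fine_on_coarse = u_fine[::2, ::2]
    return (4.0 * u_fine_on_coarse - u_coarse) / 3.0
```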

5 Problems

Table 7.1 Problems and Sections
  1.

    *Let \(f_{m}^{n}\) denote f(mΔx, nΔτ). Find the truncation error of the explicit difference scheme

    $$\displaystyle\begin{array}{rcl}{ u_{m}^{n+1} - u_{m}^{n} \over \Delta \tau } & =& a_{m}^{n}{ u_{m+1}^{n} - 2u_{ m}^{n} + u_{ m-1}^{n} \over \Delta {x}^{2}} {}\\ & & +b_{m}^{n}{ u_{m+1}^{n} - u_{ m-1}^{n} \over 2\Delta x} + c_{m}^{n}u_{ m}^{n} {}\\ \end{array}$$

    to the parabolic partial differential equation

    $$\displaystyle{{ \partial u \over \partial \tau } = a(x,\tau ){ {\partial }^{2}u \over \partial {x}^{2}} + b(x,\tau ){ \partial u \over \partial x} + c(x,\tau )u.}$$
  2.

    Show that the truncation error of the Crank–Nicolson scheme for the heat equation at the point \((x_{m}{,\tau }^{n+1/2})\) is in the following form:

    $$\displaystyle{{\Delta \tau }^{2}\left [{ 1 \over 24} { {\partial }^{3}u \over {\partial \tau }^{3}} (x_{m}{,\eta }^{(1)}) -{ a \over 8} { {\partial }^{4}u \over \partial {x}^{2}{\partial \tau }^{2}} (x_{m}{,\eta }^{(2)})\right ] -{ \Delta {x}^{2}a \over 12} { {\partial }^{4}u \over \partial {x}^{4}} (\xi {,\eta }^{(3)}),}$$

    where \(\xi \in (x_{m-1},x_{m+1})\), \({\eta }^{(k)} \in {(\tau }^{n}{,\tau }^{n+1})\), k = 1, 2, 3, and a is the conductivity coefficient in the heat equation.

  3.

    *Let \(f_{m}^{n}\) denote f(mΔx, nΔτ). Find the truncation error of the implicit difference scheme

    $$\displaystyle\begin{array}{rcl}{ u_{m}^{n+1} - u_{m}^{n} \over \Delta \tau } & =&{ a_{m}^{n+1/2} \over 2} \left ({ u_{m+1}^{n+1} - 2u_{m}^{n+1} + u_{m-1}^{n+1} \over \Delta {x}^{2}} +{ u_{m+1}^{n} - 2u_{m}^{n} + u_{m-1}^{n} \over \Delta {x}^{2}} \right ) {}\\ & & +{ b_{m}^{n+1/2} \over 2} \left ({ u_{m+1}^{n+1} - u_{m-1}^{n+1} \over 2\Delta x} +{ u_{m+1}^{n} - u_{m-1}^{n} \over 2\Delta x} \right ) {}\\ & & +{ c_{m}^{n+1/2} \over 2} (u_{m}^{n+1} + u_{ m}^{n}) {}\\ \end{array}$$

    to the parabolic partial differential equation

    $$\displaystyle{{ \partial u \over \partial \tau } = a(x,\tau ){ {\partial }^{2}u \over \partial {x}^{2}} + b(x,\tau ){ \partial u \over \partial x} + c(x,\tau )u.}$$
  4.

    The heat equation

    $$\displaystyle{{ \partial u \over \partial \tau } ={ {\partial }^{2}u \over \partial {x}^{2}} }$$

    can also be discretized by

    $$\displaystyle{{ u_{m}^{n+1} - u_{m}^{n} \over \Delta \tau } =\theta \left (\!{ u_{m+1}^{n+1} - 2u_{m}^{n+1} + u_{m-1}^{n+1} \over \Delta {x}^{2}} \!\right )+(1-\theta )\left (\!{ u_{m+1}^{n} - 2u_{m}^{n} + u_{m-1}^{n} \over \Delta {x}^{2}} \!\right )}$$

    or

    $$\displaystyle{u_{m}^{n+1}-\theta \alpha (u_{ m+1}^{n+1}-2u_{ m}^{n+1}+u_{ m-1}^{n+1}) = u_{ m}^{n}+(1-\theta )\alpha (u_{ m+1}^{n}-2u_{ m}^{n}+u_{ m-1}^{n}),}$$

    where 0 ≤ θ ≤ 1 and \(\alpha = \Delta \tau /\Delta {x}^{2}\). This scheme is called the θ–scheme. It is clear that when θ = 0 the scheme reduces to the explicit scheme, and when θ = 1/2 it becomes the Crank–Nicolson scheme. (A short numerical sketch of the θ–scheme is given after this problem list.) Show that the order of the truncation error of the θ–scheme is

    $$\displaystyle{O\left (\left (1 - 2\theta \right )\Delta \tau + {\Delta \tau }^{2} + \Delta {x}^{2}\right ).}$$

    (Hint: Discretize the partial differential equation at \(x = x_{m}\) and \(\tau ={ \tau }^{n+\theta }\).)

  5.

    Consider the parabolic partial differential equation

    $$\displaystyle{{ \partial u \over \partial \tau } = a(x,\tau ){ {\partial }^{2}u \over \partial {x}^{2}} + b(x,\tau ){ \partial u \over \partial x} + c(x,\tau )u,}$$

    which is defined for x ∈ [0, 1] and τ ≥ 0. Here a(x, τ) ≥ 0 holds and we suppose that \({ \partial a \over \partial x}\) is bounded. Assuming that u(x, τ) is given, we want to determine u(x, τ + Δτ) with Δτ > 0 for x ∈ [0, 1].

    (a)

      Under what conditions on a(x, τ) and b(x, τ) is a boundary condition needed at x = 0 and at x = 1, and under what conditions is no boundary condition needed?

    (b)

      Suppose that an explicit scheme will be used. How do we determine u(0, τ + Δτ) and u(1, τ + Δτ) when no boundary condition is to be given?

  6.

    *Consider the three-point explicit finite-difference scheme:

    $$\displaystyle{u_{m}^{n+1} = a_{ m}u_{m-1}^{n} + b_{ m}u_{m}^{n} + c_{ m}u_{m+1}^{n},\quad m = 1,2,\cdots \,,M - 1,}$$

    where \(a_{m} \geq 0\), \(b_{m} = 1 - a_{m} - c_{m} \geq 0\), \(c_{m} \geq 0\), and \(a_{0} = c_{M} = 0\). Show

    $$\displaystyle{\max _{1\leq m\leq M-1}\vert u_{m}^{n+1}\vert \leq \max _{ 1\leq m\leq M-1}\vert u_{m}^{n}\vert .}$$

    This means that the numerical procedure is stable under the maximum norm.

  7.

    Consider the equation

    $$\displaystyle{\lambda \mathbf{A}\mathbf{x} = \mathbf{B}\mathbf{x}\quad \mbox{ or }\quad {\mathbf{A}}^{-1}\mathbf{B}\mathbf{x} =\lambda \mathbf{x},}$$

    where A and B are (M − 1) ×(M − 1) matrices and their concrete expressions are

    $$\displaystyle{\mathbf{A} = \left [\begin{array}{ccccc} a_{0} & a_{1} & 0 & \cdots & 0\\ a_{ 1} & a_{0} & a_{1} & \ddots & \vdots \\ 0 &a_{1} & a_{0} & \ddots & 0\\ \vdots & \ddots & \ddots & \ddots &a_{ 1} \\ 0 & \cdots & 0 &a_{1} & a_{0}\end{array} \right ]}$$

    and

    $$\displaystyle{\mathbf{B} = \left [\begin{array}{ccccc} b_{0} & b_{1} & 0 & \cdots & 0 \\ b_{1} & b_{0} & b_{1} & \ddots & \vdots \\ 0 &b_{1} & b_{0} & \ddots & 0 \\ \vdots & \ddots & \ddots & \ddots &b_{1} \\ 0 & \cdots & 0 &b_{1} & b_{0}\end{array} \right ].}$$

    Find M − 1 linearly independent eigenvectors of \({\mathbf{A}}^{-1}\mathbf{B}\) and their associated eigenvalues.

  8.

    Consider the equation

    $$\displaystyle{\lambda \mathbf{A}_{2}\mathbf{x} = \mathbf{B}_{2}\mathbf{x}}$$

    or

    $$\displaystyle{\mathbf{A}_{2}^{-1}\mathbf{B}_{ 2}\mathbf{x} =\lambda \mathbf{x},}$$

    where \(\mathbf{A}_{2}\) and \(\mathbf{B}_{2}\) are M × M matrices and their concrete expressions are

    $$\displaystyle{\mathbf{A}_{2} = \left [\begin{array}{ccccc} a_{0} & a_{1} & 0 & \cdots &a_{-1} \\ a_{-1} & a_{0} & a_{1} & \ddots & \vdots \\ 0 &a_{-1} & a_{0} & \ddots & 0\\ \vdots & \ddots & \ddots & \ddots & a_{ 1} \\ a_{1} & \cdots & 0 &a_{-1} & a_{0}\end{array} \right ]}$$

    and

    $$\displaystyle{\mathbf{B}_{2} = \left [\begin{array}{ccccc} b_{0} & b_{1} & 0 & \cdots &b_{-1} \\ b_{-1} & b_{0} & b_{1} & \ddots & \vdots \\ 0 &b_{-1} & b_{0} & \ddots & 0 \\ \vdots & \ddots & \ddots & \ddots & b_{1} \\ b_{1} & \cdots & 0 &b_{-1} & b_{0}\end{array} \right ].}$$

    Find M linearly independent eigenvectors of \({\mathbf{A}_{2}}^{-1}\mathbf{B}_{2}\) and their associated eigenvalues.

  9.
    (a)

      Consider an M ×M matrix

      $$\displaystyle{\mathbf{A} = \left (\begin{array}{ccccccc} a& b &0 &\cdots &\cdots &0 & b\\ b & a & b & 0 &\cdots &\cdots & 0 \\ 0& \ddots & \ddots & \ddots & \ddots & & \vdots\\ \vdots & \ddots & \ddots & \ddots & \ddots & \ddots &\vdots\\ \vdots & & \ddots & \ddots & \ddots & \ddots & 0 \\ 0&\cdots &\cdots &0 & b &a& b\\ b & 0 &\cdots &\cdots & 0 & b &a\end{array} \right ).}$$

      Suppose \(a = q + 2/{h}^{2}\) and \(b = -1/{h}^{2}\). (A quick numerical check of the eigenvalue formula below is sketched after the problem list.) Show that its eigenvalues are \(\lambda _{j} = q +{{ 4 \over {h}^{2}} \sin }^{2}{ \theta _{j} \over 2} ,\;j = 0,1,\cdots \,,M - 1\), where \(\theta _{j} = j \frac{2\pi } {M}\), and the corresponding eigenvectors are

      $$\displaystyle{\mathbf{v}_{j} = \left (\begin{array}{c} 1\\ \cos \theta _{j } \\ \cos 2\theta _{j}\\ \vdots \\ \cos \left (M - 1\right )\theta _{j}\end{array} \right ),\quad j = 0,1,\cdots \,,\mbox{ int}\left (\frac{M} {2} \right ),}$$

      and

      $$\displaystyle{\mathbf{v}_{j} = \left (\begin{array}{c} 0\\ \sin \theta _{j } \\ \sin 2\theta _{j}\\ \vdots \\ \sin \left (M - 1\right )\theta _{j}\end{array} \right ),\quad j = \mbox{ int}\left (\frac{M} {2} \right )+1,\cdots \,,M-1,}$$

      respectively, where \(\mbox{ int}\left ({ M \over 2} \right )\) is the integer part of \({ M \over 2} .\)

    (b)

      Find the eigenvalues and eigenvectors of \({\mathbf{A}}^{-1}\).

    (c)

      Suppose \(a ={ q \over 2} +{ 2 \over {h}^{2}}\) and \(b ={ q \over 4} -{ 1 \over {h}^{2}}\); find the eigenvalues and eigenvectors of \(\mathbf{A}\) and \({\mathbf{A}}^{-1}\).

  10.

    *Consider the explicit scheme

    $$\displaystyle\begin{array}{rcl}{ u_{m}^{n+1} - u_{m}^{n} \over \Delta \tau } & =& a{ u_{m+1}^{n} - 2u_{m}^{n} + u_{m-1}^{n} \over \Delta {x}^{2}} ,\quad m = 1,2,\cdots \,,M - 1 {}\\ \end{array}$$

    with \(u_{0}^{n+1} = f_{l}{(\tau }^{n+1})\) and \(u_{M}^{n+1} = f_{u}{(\tau }^{n+1})\). Determine when it is stable with respect to initial values in L2 norm and when it is unstable. (Suppose a > 0.)

  11.

    *Consider the implicit scheme

    $$\displaystyle\begin{array}{rcl} & &{ u_{m}^{n+1} - u_{m}^{n} \over \Delta \tau } ={ a \over 2} \left ({ u_{m+1}^{n+1} - 2u_{m}^{n+1} + u_{m-1}^{n+1} \over \Delta {x}^{2}} +{ u_{m+1}^{n} - 2u_{m}^{n} + u_{m-1}^{n} \over \Delta {x}^{2}} \right ), {}\\ & & m = 1,2,\cdots \,,M - 1 {}\\ \end{array}$$

    with \(u_{0}^{n+1} = f_{l}{(\tau }^{n+1})\) and \(u_{M}^{n+1} = f_{u}{(\tau }^{n+1})\). Show that it is always stable with respect to initial values in L2 norm. (Suppose a > 0.)

  12.

    By using the von Neumann method, show that for periodic problems, the θ–scheme for the heat equation

    $$\displaystyle\begin{array}{rcl} & & u_{m}^{n+1} -\theta \alpha \left (u_{ m+1}^{n+1} - 2u_{ m}^{n+1} + u_{ m-1}^{n+1}\right ) {}\\ & =& u_{m}^{n} + \left (1-\theta \right )\alpha \left (u_{ m+1}^{n} - 2u_{ m}^{n} + u_{ m-1}^{n}\right ) {}\\ \end{array}$$

    is stable for all α > 0 if \({ 1 \over 2} \leq \theta \leq 1\) and that it is stable for \(0 <\alpha \leq { 1 \over 2(1 - 2\theta )}\) if \(0 <\theta <{ 1 \over 2}\) .

  13.

    Consider the following parabolic partial differential equation:

    $$\displaystyle{\frac{\partial u} {\partial \tau } = a_{11}\frac{{\partial }^{2}u} {\partial {x}^{2}} + 2a_{12} \frac{{\partial }^{2}u} {\partial x\partial y} + a_{22}\frac{{\partial }^{2}u} {\partial {y}^{2}} + b_{1}\frac{\partial u} {\partial x} + b_{2}\frac{\partial u} {\partial y},}$$

    where \(a_{11}\left (x,y,\tau \right ) \geq 0,a_{22}\left (x,y,\tau \right ) \geq 0,a_{12}\left (x,y,\tau \right ) =\rho _{12}\left (x,y,\tau \right )\sqrt{a_{11 } a_{22}}\) with \(\rho _{12} \in \left [-1,1\right ]\), and \(b_{1},b_{2}\) are any functions of x, y, τ. This equation can be approximated by

    (i)
      $$\displaystyle\begin{array}{rcl} & & \frac{u_{m,n}^{k+1} - u_{m,n}^{k}} {\Delta \tau } {}\\ & \!=& \frac{a_{11,m,n}^{k+\frac{1} {2} }} {2} \!\left (\!\frac{u_{m+1,n}^{k+1} - 2u_{m,n}^{k+1} + u_{m-1,n}^{k+1}} {\Delta {x}^{2}} + \frac{u_{m+1,n}^{k} - 2u_{m,n}^{k} + u_{m-1,n}^{k}} {\Delta {x}^{2}} \!\right ){}\\ \end{array}$$
      $$\displaystyle\begin{array}{rcl} & & +a_{12,m,n}^{k+\frac{1} {2} }\left (\frac{u_{m+1,n+1}^{k+1} - u_{m+1,n-1}^{k+1} - u_{m-1,n+1}^{k+1} + u_{m-1,n-1}^{k+1}} {4\Delta x\Delta y} \right . {}\\ & & \left .+\frac{u_{m+1,n+1}^{k} - u_{m+1,n-1}^{k} - u_{m-1,n+1}^{k} + u_{m-1,n-1}^{k}} {4\Delta x\Delta y} \right ) {}\\ & & +\frac{a_{22,m,n}^{k+\frac{1} {2} }} {2} \!\left (\!\frac{u_{m,n+1}^{k+1}\! -\! 2u_{m,n}^{k+1} + u_{m,n-1}^{k+1}} {\Delta {y}^{2}} \! +\! \frac{u_{m,n+1}^{k}\! -\! 2u_{m,n}^{k} + u_{m,n-1}^{k}} {\Delta {y}^{2}} \!\right ) {}\\ & & +\frac{b_{1,m,n}^{k+\frac{1} {2} }} {2} \left (\frac{u_{m+1,n}^{k+1} - u_{m-1,n}^{k+1}} {2\Delta x} + \frac{u_{m+1,n}^{k} - u_{m-1,n}^{k}} {2\Delta x} \right ) {}\\ & & +\frac{b_{2,m,n}^{k+\frac{1} {2} }} {2} \left (\frac{u_{m,n+1}^{k+1} - u_{m,n-1}^{k+1}} {2\Delta y} + \frac{u_{m,n+1}^{k} - u_{m,n-1}^{k}} {2\Delta y} \right )\quad \mbox{ or } {}\\ \end{array}$$
    (ii)
      $$\displaystyle\begin{array}{rcl} & & \frac{u_{m,n}^{k+1} - u_{m,n}^{k}} {\Delta \tau } {}\\ & \!=& \!\frac{a_{11,m,n}^{k+\frac{1} {2} }} {2} \!\left (\!\frac{u_{m+1,n}^{k+1} - 2u_{m,n}^{k+1} + u_{m-1,n}^{k+1}} {\Delta {x}^{2}} + \frac{u_{m+1,n}^{k} - 2u_{m,n}^{k} + u_{m-1,n}^{k}} {\Delta {x}^{2}} \!\right ) {}\\ & & +a_{12,m,n}^{k+\frac{1} {2} }\left (\frac{u_{m+1,n+1}^{k+1} - u_{m+1,n-1}^{k+1} - u_{m-1,n+1}^{k+1} + u_{m-1,n-1}^{k+1}} {4\Delta x\Delta y} \right . {}\\ & & \left .+\frac{u_{m+1,n+1}^{k} - u_{m+1,n-1}^{k} - u_{m-1,n+1}^{k} + u_{m-1,n-1}^{k}} {4\Delta x\Delta y} \right ) {}\\ & & \!+\frac{a_{22,m,n}^{k+\frac{1} {2} }} {2} \!\left (\!\frac{u_{m,n+1}^{k+1}\! -\! 2u_{m,n}^{k+1} + u_{m,n-1}^{k+1}} {\Delta {y}^{2}} \! +\! \frac{u_{m,n+1}^{k}\! -\! 2u_{m,n}^{k} + u_{m,n-1}^{k}} {\Delta {y}^{2}} \!\right ) {}\\ & & +\frac{b_{1,m,n}^{k+\frac{1} {2} }} {2} \left (\frac{-u_{m+2,n}^{k+1} + 4u_{m+1,n}^{k+1} - 3u_{m,n}^{k+1}} {2\Delta x} \right . {}\\ & & \left .+\frac{-u_{m+2,n}^{k} + 4u_{m+1,n}^{k} - 3u_{m,n}^{k}} {2\Delta x} \right ) {}\\ & & \!+\frac{b_{2,m,n}^{k+\frac{1} {2} }} {2} \!\left (\!\frac{3u_{m,n}^{k+1}\! -\! 4u_{m,n-1}^{k+1} + u_{m,n-2}^{k+1}} {2\Delta y} \! +\! \frac{3u_{m,n}^{k}\! -\! 4u_{m,n-1}^{k} + u_{m,n-2}^{k}} {2\Delta y} \!\right ){}\\ \end{array}$$

      if \(b_{1}\left (x,y,\tau \right ) \geq 0\) and \(b_{2}\left (x,y,\tau \right ) \leq 0\). By the von Neumann method, show that both schemes are stable.

    (Hint:

    (a)

      First show that the amplification factor λ can be written as \(\lambda ={ 1 + a + ib \over 1 - a - ib}\).

    (b)

      Then show that | λ | 2 ≤ 1 is equivalent to \(\vert 1 - a - ib{\vert }^{2} -\vert 1 + a + ib{\vert }^{2} = -4a \geq 0\).

    (c)

      Finally show − 4a ≥ 0 by using the following inequalities: (i) \({A}^{2} + {B}^{2} + 2\rho AB ={ \left (A +\rho B\right )}^{2} + {B}^{2}\left (1 {-\rho }^{2}\right ) \geq 0\) if \(\left \vert \rho \right \vert \leq 1\); (ii) \(\cos 2\theta - 4\cos \theta + 3 = 2{\left (\cos \theta -1\right )}^{2} \geq 0.\))

  14.

    *Show that if

    $$\displaystyle{\max _{0\leq m\leq M}{ x_{m}^{2}{(1 - x_{m})}^{2}\bar{\sigma }_{m}^{2} \over 2} { \Delta \tau \over \Delta {x}^{2}} \leq { 1 \over 2} ,}$$

    then for the scheme with variable coefficients

    $$\displaystyle\begin{array}{rcl}{ u_{m}^{n+1} - u_{m}^{n} \over \Delta \tau } & =&{ 1 \over 2} {[x_{m}(1 - x_{m})\bar{\sigma }_{m}]}^{2}{ u_{m+1}^{n} - 2u_{ m}^{n} + u_{ m-1}^{n} \over \Delta {x}^{2}} {}\\ & & +\;(r - D_{0})x_{m}(1 - x_{m}){ u_{m+1}^{n} - u_{m-1}^{n} \over 2\Delta x} {}\\ & & -\left [r\left (1 - x_{m}\right ) + D_{0}x_{m}\right ]u_{m}^{n}, {}\\ \end{array}$$

    the condition \(\vert \lambda _{\theta }(x_{m}{,\tau }^{n})\vert \leq 1 + O(\Delta \tau )\) is satisfied for any \(x_{m} = m/M \in [0,1]\). (When you prove this result, you should derive the stability condition for explicit schemes yourself.)

  15.

    For the scheme with variable coefficients

    $$\displaystyle\begin{array}{rcl} \!\!& & \!\!{ u_{m}^{n+1} - u_{m}^{n} \over \Delta \tau } {}\\ \!\!& =& \!\!{ 1 \over 4} {[x_{m}(1 - x_{m})\bar{\sigma }_{m}]}^{2}\left (\!{ u_{m+1}^{n+1} - 2u_{ m}^{n+1} + u_{ m-1}^{n+1} \over \Delta {x}^{2}} +{ u_{m+1}^{n} - 2u_{m}^{n} + u_{m-1}^{n} \over \Delta {x}^{2}} \!\right ) {}\\ \!\!& & \!\!+\;{ 1 \over 2} (r - D_{0})x_{m}(1 - x_{m})\left (\!{ u_{m+1}^{n+1} - u_{m-1}^{n+1} \over 2\Delta x} +{ u_{m+1}^{n} - u_{m-1}^{n} \over 2\Delta x} \!\right ) {}\\ \!\!& & -\;{ 1 \over 2} \left [r\left (1 - x_{m}\right ) + D_{0}x_{m}\right ](u_{m}^{n+1} + u_{ m}^{n}), {}\\ \end{array}$$

    show that the condition \(\vert \lambda _{\theta }(x_{m}{,\tau }^{n})\vert \leq 1 + O(\Delta \tau )\) is satisfied for any \(x_{m} \in [0,1]\).

  16.
    (a)

      Consider the explicit difference scheme

      $$\displaystyle{{ u_{m}^{n+1} - u_{m}^{n} \over \Delta \tau } = a_{m}^{n}{ u_{m+1}^{n} - 2u_{ m}^{n} + u_{ m-1}^{n} \over \Delta {x}^{2}} +b_{m}^{n}{ u_{m+1}^{n} - u_{ m-1}^{n} \over 2\Delta x} +c_{m}^{n}u_{ m}^{n}}$$

      to the parabolic partial differential equation

      $$\displaystyle{{ \partial u \over \partial \tau } = a(x,\tau ){ {\partial }^{2}u \over \partial {x}^{2}} + b(x,\tau ){ \partial u \over \partial x} + c(x,\tau )u.}$$

      Assume that its stability with respect to the initial value and the nonhomogeneous term has been proved under certain conditions. Show that under these conditions its solution satisfies \(u\left (x,\tau ;\Delta x,\Delta \tau \right ) = u(x,\tau ) + a\left (x,\tau ;{ \Delta {x}^{2} \over \Delta \tau } \right )\Delta \tau + O({\Delta \tau }^{2})\), where \(\left \vert O({\Delta \tau }^{2})\right \vert \leq c{\Delta \tau }^{2}\), c being bounded as Δτ → 0 with \({ \Delta {x}^{2} \over \Delta \tau }\) held constant.

    (b)

      Suppose we have two such approximate solutions \(u\left (x,\tau ;\Delta x,\Delta \tau \right )\) and \(u\left (x,\tau ;\Delta x/2,\Delta \tau /4\right )\). Find a linear combination

      $$\displaystyle{(1 - d) \times u\left (x,\tau ;\Delta x,\Delta \tau \right ) + d \times u\left (x,\tau ;\Delta x/2,\Delta \tau /4\right )}$$

      such that it is an approximate solution with an error of \(O({\Delta \tau }^{2})\).

  17.
    (a)

      Assume that an approximate solution \(u\left (x,\tau ;\Delta x,\Delta \tau \right )\) has the following expression:

      $$\displaystyle\begin{array}{rcl} & & u\left (x,\tau ;\Delta x,\Delta \tau \right ) {}\\ & =& u\left (x,\tau \right ) + a\left (x,\tau ; \frac{\Delta x} {\Delta \tau } \right ){\Delta \tau }^{2} + b\left (x,\tau ; \frac{\Delta x} {\Delta \tau } \right ){\Delta \tau }^{3} + O\left ({\Delta \tau }^{4}\right ), {}\\ \end{array}$$

      where \(u\left (x,\tau \right )\) is the exact solution. Suppose that we have two approximate solutions: \(u\left (x,\tau ;{ 1 \over 12} ,{ T \over 16} \right )\) and \(u\left (x,\tau ;{ 1 \over 9} ,{ T \over 12} \right )\). Find a linear combination

      $$\displaystyle{(1 - d) \times u\left (x,\tau ;{ 1 \over 12} ,{ T \over 16} \right ) + d \times u\left (x,\tau ;{ 1 \over 9} ,{ T \over 12} \right )}$$

      such that it is an approximate solution with an error of \(O({\Delta \tau }^{3})\).

    (b)

      Suppose that there is another approximate solution \(u\left (x,\tau ;{ 1 \over 15} ,{ T \over 20} \right )\). Find a linear combination

      $$\displaystyle{d_{0} \times u\left (x,\tau ;{ 1 \over 15} ,{ T \over 20} \right ) + d_{1} \times u\left (x,\tau ;{ 1 \over 12} ,{ T \over 16} \right ) + d_{2} \times u\left (x,\tau ;{ 1 \over 9} ,{ T \over 12} \right )}$$

      such that it is an approximate solution with an error of \(O({\Delta \tau }^{4})\), where \(d_{0} = 1 - d_{1} - d_{2}\).

  18.

    *Explain why, how, and when the extrapolation technique improves the accuracy of numerical solutions.

  19.

    Let \(\mathcal{V} =\{ u\,\vert \,u = (u_{0},u_{1},\cdots \,,u_{M-1},u_{M})\}\) be the grid function space on \(\Omega _{h} =\{ x_{m}\;\vert \;x_{m} = x_{l} + mh,0 \leq m \leq M,h = (x_{u} - x_{l})/M\}.\) For any \(u \in \mathcal{V}\) and \(v \in \mathcal{V}\), introduce the inner product

    $$\displaystyle\begin{array}{rcl} & & (u,v) = h\left (\frac{1} {2}u_{0}v_{0} +\sum _{ m=1}^{M-1}u_{ m}v_{m} + \frac{1} {2}u_{M}v_{M}\right ) {}\\ \end{array}$$

    and norm

    $$\displaystyle{\|u\| = \sqrt{(u, u)}.}$$

    In addition, denote

    $$\displaystyle\begin{array}{rcl} & & \Delta _{x}u_{m} = \frac{1} {2h}(u_{m+1} - u_{m-1}),\quad \delta _{x}^{2}u_{ m} = \frac{1} {{h}^{2}}(u_{m+1} - 2u_{m} + u_{m-1}). {}\\ \end{array}$$
    (a)

      Suppose

      $$\displaystyle{a(x) \in {C}^{(2)}[x_{ l},x_{u}],\quad a(x) \geq 0,\quad a(x_{l}) = a(x_{u}) = {a}^{{\prime}}(x_{ l}) = {a}^{{\prime}}(x_{ u}) = 0}$$

      and

      $$\displaystyle{\max _{x_{l}\leq x\leq x_{u}}\vert {a}^{{\prime\prime}}(x)\vert = c_{ 1}.}$$

      Prove

      $$\displaystyle{\left (a\delta _{x}^{2}u,u\right ) \leq \frac{1} {2}c_{1}\|{u\|}^{2}.}$$
    (b)

      Suppose

      $$\displaystyle{b(x) \in {C}^{(1)}[x_{ l},x_{u}],\qquad b(x_{l}) = b(x_{u}) = 0,\qquad \max _{x_{l}\leq x\leq x_{u}}\vert {b}^{{\prime}}(x)\vert = c_{ 2}.}$$

      Prove

      $$\displaystyle{\left (b\Delta _{x}u,u\right ) \leq \frac{1} {2}c_{2}\|{u\|}^{2}.}$$
  20.

    Suppose that \((a_{12})_{0n} = (a_{12})_{Mn} = (a_{12})_{m0} = (a_{12})_{mN} = 0\). Show

    $$\displaystyle\begin{array}{rcl} & & h_{1}h_{2}\sum _{m=1}^{M -1}\sum _{ n=1}^{N -1}(a_{ 12})_{mn}\;\delta _{x}\delta _{y}u_{m-\frac{1} {2} ,n+\frac{1} {2} }\;u_{mn} {}\\ & =& -h_{1}h_{2}\sum _{m=1}^{M -1}\sum _{ n=1}^{N -1}(a_{ 12})_{mn}\;\delta _{x}u_{m+\frac{1} {2} ,n}\;\delta _{y}u_{m,n+\frac{1} {2} }\; {}\\ & & -h_{1}h_{2}\sum _{m=0}^{M -1}\sum _{ n=1}^{N -1}\left (\delta _{ x}a_{12}\right )_{m+\frac{1} {2} ,n}\;\delta _{y}u_{m,n+\frac{1} {2} }\;u_{m+1,n} {}\\ \end{array}$$

    and

    $$\displaystyle\begin{array}{rcl} & & h_{1}h_{2}\sum _{m=1}^{M -1}\sum _{ n=1}^{N -1}(a_{ 12})_{mn}\;\delta _{x}\delta _{y}u_{m+\frac{1} {2} ,n+\frac{1} {2} }\;u_{mn} {}\\ & =& -h_{1}h_{2}\sum _{m=1}^{M -1}\sum _{ n=1}^{N -1}(a_{ 12})_{mn}\;\delta _{x}u_{m-\frac{1} {2} ,n}\;\delta _{y}u_{m,n+\frac{1} {2} }\; {}\\ & & -h_{1}h_{2}\sum _{m=0}^{M -1}\sum _{ n=1}^{N -1}\left (\delta _{ x}a_{12}\right )_{m+\frac{1} {2} ,n}\;\delta _{y}u_{m+1,n+\frac{1} {2} }\;u_{mn} {}\\ \end{array}$$

    by a direct calculation.

  21.

    Suppose \(\{u_{m}^{k}\}\) is the solution of the difference scheme

    $$\displaystyle\begin{array}{rcl} \frac{1} {\Delta \tau }(u_{m}^{k+1} - u_{ m}^{k})& =& a(x_{ m})\delta _{x}^{2}u_{ m}^{k+\frac{1} {2} } + b(x_{m})\Delta _{x}u_{m}^{k+\frac{1} {2} } + c(x_{m})u_{m}^{k+\frac{1} {2} } {}\\ & & +\,g(x_{m}{,\tau }^{k+\frac{1} {2} }),\quad 0 \leq m \leq M,\quad 0 \leq k \leq K - 1, {}\\ u_{m}^{0}& =& f(x_{ m}),\quad 0 \leq m \leq M, {}\\ \end{array}$$

    where \(u_{m}^{k+\frac{1} {2} } = \frac{1} {2}\left (u_{m}^{k} + u_{m}^{k+1}\right )\) and

    $$\displaystyle\begin{array}{rcl} & & a(x) \in {C}^{(2)}[x_{ l},x_{u}],\qquad b(x) \in {C}^{(1)}[x_{ l},x_{u}], {}\\ & & a(x) \geq 0,\quad a(x_{l}) = a(x_{u}) = {a}^{{\prime}}(x_{ l}) = {a}^{{\prime}}(x_{ u}) = b(x_{l}) = b(x_{u}) = 0, {}\\ & & \max _{x_{l}\leq x\leq x_{u}}\vert {a}^{{\prime\prime}}(x)\vert = c_{ 1},\quad \max _{x_{l}\leq x\leq x_{u}}\vert {b}^{{\prime}}(x)\vert = c_{ 2},\quad \max _{x_{l}\leq x\leq x_{u}}\vert c(x)\vert = c_{3}, {}\\ & & c = c_{1} + c_{2} + 2c_{3},\qquad \Delta \tau \leq 2/[3(c + 1)]. {}\\ \end{array}$$

    Prove

    $$\displaystyle{\|{u{}^{k+1}\|}^{2} \leq \mathrm{ {e}}^{3(c+1)T/2}\left (\|{f\|}^{2} + \frac{3} {2}\Delta \tau \sum _{l=0}^{k}\|{g{}^{l+\frac{1} {2} }\|}^{2}\right ),\quad 0 \leq k \leq K - 1.}$$
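
The following self-contained Python sketch is an added illustration (not part of the problems): it implements the θ–scheme of Problem 4 for the heat equation on [0, 1] with homogeneous Dirichlet boundary values, so the stability behaviour asked about in Problems 10 and 12 can be observed numerically by varying θ and α. The function name and the parameter values are illustrative assumptions.

```python
import numpy as np

def theta_scheme(f, M, K, T, theta):
    """March the theta-scheme for u_t = u_xx on [0, 1] with u(0, tau) = u(1, tau) = 0."""
    dx, dtau = 1.0 / M, T / K
    alpha = dtau / dx**2
    x = np.linspace(0.0, 1.0, M + 1)
    u = f(x)
    # second-difference operator acting on the interior values u_1, ..., u_{M-1}
    D2 = (np.diag(-2.0 * np.ones(M - 1))
          + np.diag(np.ones(M - 2), 1)
          + np.diag(np.ones(M - 2), -1))
    A = np.eye(M - 1) - theta * alpha * D2          # implicit part of the scheme
    B = np.eye(M - 1) + (1.0 - theta) * alpha * D2  # explicit part of the scheme
    for _ in range(K):
        u[1:-1] = np.linalg.solve(A, B @ u[1:-1])   # boundary values stay zero
    return x, u

# Crank-Nicolson (theta = 1/2) remains stable here although alpha = 6.25 > 1/2,
# whereas theta = 0 with the same steps blows up.
x, u = theta_scheme(lambda x: np.sin(np.pi * x), M=50, K=200, T=0.5, theta=0.5)
print(np.abs(u - np.sin(np.pi * x) * np.exp(-np.pi**2 * 0.5)).max())
```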
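
Similarly, the eigenvalue formula in Problem 9(a) can be checked numerically before proving it. The short sketch below (an added illustration, with arbitrary values of M, q, and h) builds the circulant matrix with \(a = q + 2/{h}^{2}\) and \(b = -1/{h}^{2}\) and verifies \(\mathbf{A}\mathbf{v}_{j} =\lambda _{j}\mathbf{v}_{j}\) for the stated eigenvectors.

```python
import numpy as np

M, q, h = 8, 1.0, 0.1
a, b = q + 2.0 / h**2, -1.0 / h**2
A = a * np.eye(M) + b * (np.eye(M, k=1) + np.eye(M, k=-1))
A[0, -1] = A[-1, 0] = b                              # circulant corner entries

for j in range(M):
    theta_j = 2.0 * np.pi * j / M
    lam = q + 4.0 / h**2 * np.sin(theta_j / 2.0) ** 2
    v = (np.cos(theta_j * np.arange(M)) if j <= M // 2
         else np.sin(theta_j * np.arange(M)))
    assert np.allclose(A @ v, lam * v)               # A v_j = lambda_j v_j
```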