1 Introduction

Perturbation theory comprises a large collection of mathematical techniques devoted to obtaining approximate solutions to problems whose exact solutions have no closed form. Such problems contain small positive parameters that cause the solution to change abruptly in some regions of the problem domain and gradually in others. A region interior to the domain where the solution changes rapidly is called an inner region. It is well known that singularly perturbed boundary value problems present boundary and/or interior layers. Therefore, the development of efficient numerical procedures for solving singularly perturbed differential equations is a computational challenge.

These kinds of problems arise in various applied areas related to chemistry: they occur in the kinetics of catalyzed reactions, in enzyme kinetics, and in the Belousov–Zhabotinskii reaction in chemical and biochemical reaction theory [1,2,3,4]. In fluid mechanics, they serve to model the viscous boundary layer of a flat plate, viscous flow past a sphere, a piston problem, and variable-depth Korteweg–de Vries equations for water waves [5,6,7]. Further, such problems play an important role in semiconductor and superconductor theory, light propagation through a slowly varying medium, Raman scattering, quantum jumps in the ion trap, low-pressure gas flow through a long tube, drilling by laser, the meniscus on a circular tube, the Van der Pol (Rayleigh) oscillator, a diode oscillator with a current pump, the Klein–Gordon equation, the slow decay of a satellite orbit, the Einstein equation for Mercury, planetary rings, and thermal runaway, among other areas [8].

Vigo-Aguiar and Natesan [9] proposed a numerical scheme for solving singularly perturbed two-point boundary-value problems. They adapted multistep algorithms, combining a classical finite difference scheme with an exponentially fitted difference scheme, applied to a converted system of initial value problems. Natesan et al. [10] implemented a parallel domain decomposition method to solve a class of singularly perturbed two-point boundary value problems. Moreover, Natesan et al. [11] proposed an appropriate piecewise-uniform mesh and applied the classical finite-difference scheme to solve a turning point problem. All these works deal with singularly perturbed problems involving a single perturbation parameter.

In the literature, there are several works on ordinary singularly perturbed problems involving two perturbation parameters. Riordan and Pickett [12] discretized a singularly perturbed problem with two parameters using classical upwind differences. They showed that the scaled discrete derivatives were parameter-uniformly convergent to the scaled first derivatives of the solution, measuring the sharpness of the numerical estimates in a suitably weighted \({\mathbf {C}}^1\) norm. Prabha et al. [13] combined a five-point second-order scheme at the interior layer with central, midpoint and upwind standard difference schemes on separate regions, producing nearly second-order convergence for a two-parameter singularly perturbed convection–diffusion equation with a discontinuous source term. Chandru et al. [14] utilized a hybrid monotone difference scheme together with an averaging technique at the point of discontinuity to obtain a parameter-uniform error bound for the numerical approximations. In [15], an adaptive Shishkin-type mesh is considered for solving a parabolic problem with discontinuities in the convection and source terms. Moreover, other works scrutinized the two-parameter singularly perturbed ordinary differential equation with smooth data [16,17,18] and non-smooth data [19, 20].

Riordan et al. [21] developed a parameter-uniform numerical method to solve a class of singularly perturbed parabolic equations with two small parameters and provided parameter-explicit theoretical bounds on the derivatives of the solutions. Das et al. [22] introduced a new mesh-adaptive upwind scheme for 1-D convection-diffusion-reaction problems and established parameter-uniform convergence even as the parameters tend to zero. Motivated by the above works, we construct a numerical technique for a two-parameter parabolic coupled system of two singularly perturbed differential equations with discontinuous source terms.

The framework of the article is as follows: Sect. 2 states the continuous problem under study. Sect. 3 derives a minimum principle and presents a stability theorem and a priori bounds for the solution and its derivatives, considering the decomposition of the solution into its regular and singular components. In Sect. 4, a numerical scheme for solving the problem is presented and the theoretical analysis of the order of convergence is addressed. Finally, a numerical example is presented to confirm the theoretical results.

The maximum norm, defined as

$$\begin{aligned} \Vert \varvec{\upsilon } \Vert _{{\bar{G}}} = \max \limits _{(x,t) \in {\bar{G}}} \left\{ \vert \upsilon _1(x,t) \vert ,\vert \upsilon _2(x,t) \vert \right\} , \end{aligned}$$
(1)

where \(\varvec{\upsilon }=(\upsilon _1,\upsilon _2)\) is any two-component function defined on a domain \({\bar{G}}\subset {\mathbb {R}}^2\), will be used in the theoretical analysis. The corresponding discrete maximum norm is denoted as

$$\begin{aligned} \Vert \mathbf {U} \Vert _{{\bar{G}}^{N,M}} = \max \limits _{(x_i,t_j) \in {\bar{G}}^{N,M}} \left\{ \vert U_1(x_i,t_j) \vert , \vert U_2(x_i,t_j) \vert \right\} , \end{aligned}$$

where \({\bar{G}}^{N,M}\) denotes the discretized version of \({\bar{G}}\) and \(U_k(x_i,t_j), k=1,2\), stand for discrete approximations of the components of \(\varvec{\upsilon }\). If the set on which either norm is applied is clear from the context, we will simply write \(\Vert \cdot \Vert \).

As usual, the notation \(\mathbf {u}\le \mathbf {v}\) means that \(u_i\le v_i, i=1,2\). Throughout the article, C will denote a generic positive constant independent of the parameters \(\epsilon , \mu \) and of the discrete dimensions N, M.
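As an illustration, the discrete maximum norm can be evaluated directly from its definition. The following Python sketch (our own, not part of the original analysis; the array values are hypothetical) computes it for a two-component grid function:

```python
import numpy as np

def discrete_max_norm(U1, U2):
    # Discrete maximum norm: the largest of |U_1(x_i, t_j)| and |U_2(x_i, t_j)|
    # over all mesh points, transcribing the definition above.
    return max(np.abs(U1).max(), np.abs(U2).max())

# Two hypothetical components sampled on a 2x2 space-time mesh
U1 = np.array([[0.5, -1.25], [0.0, 0.75]])
U2 = np.array([[0.25, 1.0], [-2.0, 0.5]])
print(discrete_max_norm(U1, U2))  # -> 2.0
```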

2 Continuous problem

Assume that \(\mathbf {u}(x,t) = (u_1(x,t), u_2(x,t)) \in {\mathbf {C}}^{2,4}(G)\). Our primary intention is to construct a numerical method which generates \(\epsilon ,\mu \)-uniformly convergent approximate solutions on the domain \(G = \left( \varOmega ^{-}\cup \varOmega ^{+}\right) \times (0,T]\), where \(G^- = \varOmega ^{-}\times (0,T]\), \(G^+ = \varOmega ^{+} \times (0,T]\), \(\varOmega = (0,1)\), \(\varOmega ^{-}=(0,d)\), \(\varOmega ^{+}=(d,1)\) with \(d\in (0,1)\), \(G^*=\varOmega \times (0,T]\) and \({\bar{G}}=[0,1]\times [0,T]\). The problem is given by

$$\begin{aligned} L\mathbf {u}(x,t)\equiv & {} \epsilon \mathbf {u}_{xx}(x,t)+ \mu A \mathbf {u}_x(x,t)- B \mathbf {u}(x,t)-{D}\,\mathbf {u_{t}}(x,t) = \mathbf {f}(x,t) \end{aligned}$$
(2)
$$\begin{aligned} \mathbf {u}(x,0)= & {} \mathbf {g}_{1}(x), \quad \forall x \in \varOmega ^{-}\cup \varOmega ^{+},\nonumber \\ \mathbf {u}(0,t)= & {} \mathbf {g}_{2}(t), \quad \mathbf {u}(1,t) = \mathbf {g}_{3}(t) \quad \forall t \in [0,T]. \end{aligned}$$
(3)

where

$$\begin{aligned}&A=\begin{pmatrix} a_{1}(x,t) &{}\quad 0\\ 0 &{}\quad a_{2}(x,t) \end{pmatrix} ,\quad B= \begin{pmatrix} b_{11}(x,t) &{}\quad b_{12}(x,t)\\ b_{21}(x,t) &{}\quad b_{22}(x,t) \end{pmatrix} ,\quad \\&D= \begin{pmatrix} d_{1}(x,t) &{}\quad 0\\ 0 &{}\quad d_{2}(x,t) \end{pmatrix}. \end{aligned}$$

Here, \(\epsilon \) and \(\mu \) are two small parameters such that \(0< \epsilon \ll 1\), \(0 < \mu \le 1\); the source term \(\mathbf {f}(x,t)=\left( f_1(x,t), f_2(x,t) \right) ^T\) and the entries of the matrices A, B and D are assumed to be sufficiently smooth on the domains G and \({\bar{G}}\), respectively. The source term is assumed to have a single jump discontinuity at the point \(d\in \varOmega \). We adopt the usual notation for the jump of a function at a given point, namely \([\mathbf {u}](d,t) = \mathbf {u}(d^+,t) - \mathbf {u}(d^-,t)\), \([\mathbf {u}_x](d,t) = \mathbf {u}_x(d^+,t) - \mathbf {u}_x(d^-,t)\).

The differential operator L may be expressed as

$$\begin{aligned} L \equiv \epsilon \,\begin{pmatrix} \dfrac{\partial ^{2}}{\partial x^{2}} &{}\quad 0\\ 0 &{}\quad \dfrac{\partial ^{2}}{\partial x^{2}} \end{pmatrix} + \mu \,A \begin{pmatrix} \dfrac{\partial }{\partial x} &{}\quad 0\\ 0 &{}\quad \dfrac{\partial }{\partial x}\\ \end{pmatrix}\, - B - D \begin{pmatrix} \dfrac{\partial }{\partial t} &{}\quad 0\\ 0 &{}\quad \dfrac{\partial }{\partial t}\\ \end{pmatrix}, \end{aligned}$$

while the boundary conditions can be written in matrix form as

$$\begin{aligned} \mathbf {u}(x,0)= & {} \begin{pmatrix} u_{1}(x,0)\\ u_{2}(x,0) \end{pmatrix} = \begin{pmatrix} g_{11}(x)\\ g_{12}(x) \end{pmatrix}, \\ \mathbf {u}(0,t)= & {} \begin{pmatrix} u_{1}(0,t)\\ u_{2}(0,t) \end{pmatrix} = \begin{pmatrix} g_{21}(t)\\ g_{22}(t) \end{pmatrix} ,\quad \mathbf {u}(1,t) = \begin{pmatrix} u_{1}(1,t)\\ u_{2}(1,t) \end{pmatrix} = \begin{pmatrix} g_{31}(t)\\ g_{32}(t) \end{pmatrix}\,. \end{aligned}$$

On the other hand, we make the natural assumptions of positivity and diagonal dominance of the entries:

$$\begin{aligned}&{{\left\{ \begin{array}{ll} a_1(x,t)> \alpha _1> 0, \\ a_2(x,t)> \alpha _2> 0, \end{array}\right. }}&{{\left\{ \begin{array}{ll} d_1(x,t)> \gamma _1> 0, \\ d_2(x,t)> \gamma _2 > 0, \end{array}\right. }} \end{aligned}$$
(4)
$$\begin{aligned}&{{\left\{ \begin{array}{ll} b_{11}(x,t) \ge b_{12}(x,t) \ge 0, \\ b_{11}(x,t) + b_{12}(x,t) \ge \beta _1(x,t)> 0, \end{array}\right. }}&{{\left\{ \begin{array}{ll} b_{22}(x,t)\ge b_{21}(x,t) \ge 0, \\ b_{21}(x,t) + b_{22}(x,t) \ge \beta _2(x,t) > 0. \end{array}\right. }} \end{aligned}$$
(5)

If \(\mu = 1\), problem (2) behaves like the well-known convection-diffusion problem, and when \(\mu = 0\), it behaves like the reaction-diffusion problem (see [12, 19, 21, 23]).

We denote

$$\begin{aligned} \alpha= & {} \min \{\alpha _1, \alpha _2\}, \quad \beta = \min _{(x,t)\in {\bar{G}}} \{\beta _1(x,t), \beta _2(x,t)\}, \\ \rho _1= & {} \min _{(x,t)\in {\bar{G}} } \left\{ \dfrac{b_{11} (x,t) + b_{12} (x,t)}{a_1(x,t)} \right\} ,\quad \rho _2 = \min _{(x,t)\in {\bar{G}}} \left\{ \dfrac{b_{21} (x,t) + b_{22} (x,t)}{a_2(x,t)}\right\} , \end{aligned}$$

and \(\rho = \min \{\rho _1, \rho _2\}\).
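Since \(\alpha \) and \(\rho \) drive both the case analysis and the mesh construction below, it may help to see them evaluated numerically. The sketch below is ours (not the authors' code); it assumes the coefficient functions have been sampled on a grid, so the minima over \({\bar{G}}\) become minima over arrays:

```python
import numpy as np

def rho_constant(a1, a2, b11, b12, b21, b22):
    # rho_1 and rho_2 are minima over the closed domain of the row sums of B
    # divided by the corresponding diagonal entry of A; rho = min(rho_1, rho_2).
    rho1 = np.min((b11 + b12) / a1)
    rho2 = np.min((b21 + b22) / a2)
    return min(rho1, rho2)

# Constant coefficients sampled on a small grid (illustrative values only)
shape = (3, 3)
a1, a2 = np.full(shape, 2.0), np.full(shape, 1.0)
b11, b12 = np.full(shape, 1.0), np.full(shape, 0.5)
b21, b22 = np.full(shape, 0.2), np.full(shape, 0.6)
print(rho_constant(a1, a2, b11, b12, b21, b22))  # -> 0.75
```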

In the present analysis, we will consider two mutually exclusive cases, as in [21, 23]:

  • If \(\sqrt{\alpha }\mu \le \sqrt{\rho \epsilon }\), then layers of width \({\mathcal {O}}(\sqrt{\epsilon })\) appear in the neighborhoods of \(x = 0, x = 1\) and to both sides of \(x =d\).

  • If \(\sqrt{\alpha }\mu > \sqrt{\rho \epsilon }\), then layers of width \({\mathcal {O}}(\frac{\epsilon }{\mu })\) appear in a neighborhood of \(x = 0\) and to the right of \( x = d\), and layers of width \({\mathcal {O}}(\mu )\) appear in a neighborhood of \(x = 1\) and to the left of \(x = d\).
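The dichotomy above can be summarized in a short predicate. The helper below is a sketch (the function name is ours) that returns which layer structure to expect for given parameter values:

```python
import math

def layer_case(eps, mu, alpha, rho):
    # Case 1: sqrt(alpha)*mu <= sqrt(rho*eps) -> layers of width O(sqrt(eps))
    #         near x = 0, x = 1 and on both sides of x = d.
    # Case 2: otherwise -> layers of width O(eps/mu) near x = 0 and to the
    #         right of d, and of width O(mu) near x = 1 and to the left of d.
    if math.sqrt(alpha) * mu <= math.sqrt(rho * eps):
        return 1
    return 2

print(layer_case(1e-6, 1e-1, 1.0, 1.0))  # -> 2
print(layer_case(1e-2, 1e-3, 1.0, 1.0))  # -> 1
```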

The following symbolic notations are introduced to specify the boundaries: \(\varGamma _{0} = \{(x,0)\vert x \in \varOmega ^{-}\cup \varOmega ^{+} \}\), \(\varGamma _{1} = \{(0,t)\vert t \in [0,T]\}\), \(\varGamma _{2} = \{(1,t)\vert t \in [0,T]\}\), \(\varGamma = \varGamma _{0}\cup \varGamma _{1} \cup \varGamma _{2}\).

3 Bounds on the solution and its derivatives

This section establishes some a priori bounds for the solution and its derivatives. These bounds will be used in the error analysis. Further, bounds for the regular and singular components of the continuous solution will be derived separately.

Lemma 1

The problem (2)–(3) has a solution

$$\begin{aligned} \mathbf {u}(x,t) \in {\mathbf {C}}^{1,4}(G) \cap {\mathbf {C}}^{2,4}(G^- \cup G^+). \end{aligned}$$

Proof

Let us consider the same procedure formulated in [14] and construct a function \(\mathbf {u}\) satisfying the statement. We take \(\mathbf {u}^*\) and \(\mathbf {u}^{**}\) whose components satisfy respectively the following equations

$$\begin{aligned}&{\left\{ \begin{array}{ll} \epsilon u_{1xx}^* + \mu a_1 u_{1x}^* - b_{11} u_1^* - b_{12} u_2^* -d_1 u_{1t}^* = f_{11}, \qquad \forall (x,t) \in G^-,\\ \epsilon u_{1xx}^{**} + \mu a_1 u_{1x}^{**} - b_{11} u_1^{**} - b_{12} u_2^{**} -d_1 u_{1t}^{**} = f_{21}, \quad \forall (x,t) \in G^-, \end{array}\right. } \end{aligned}$$
(6)
$$\begin{aligned}&{\left\{ \begin{array}{ll} \epsilon u_{2xx}^* + \mu a_2 u_{2x}^* - b_{21} u_1^* - b_{22} u_2^* -d_2 u_{2t}^* = f_{12}, \qquad \forall (x,t) \in G^+,\\ \epsilon u_{2xx}^{**} + \mu a_2 u_{2x}^{**} - b_{21} u_1^{**} - b_{22} u_2^{**} -d_2 u_{2t}^{**} = f_{22}, \quad \forall (x,t) \in G^+. \end{array}\right. } \end{aligned}$$
(7)

Let us consider the vector function \(\mathbf {{u}}\), whose components are given by

$$\begin{aligned} {u}_1(x,t)&= {\left\{ \begin{array}{ll} u_1^{*}(x,t) + (u_1(0,t) - u_1^{*}(0,t)) \zeta _1^{*} (x,t) + \varrho _1^{*} \zeta _2^{*}(x,t),&{} \forall (x,t) \in G^{-},\\ u_2^{*}(x,t) + \varrho _2^{*} \zeta _1^{*}(x,t) + (u_1(1,t) - u_2^{*}(1,t)) \zeta _2^{*} (x,t), &{}\forall (x,t)\in G^{+}, \end{array}\right. } \\ {u}_2(x,t)&= {\left\{ \begin{array}{ll} u_1^{**}(x,t) + (u_2(0,t) - u_1^{**}(0,t)) \zeta _1^{**} (x,t) + \varrho _1^{**} \zeta _2^{**}(x,t), &{} \forall (x,t) \in G^{-},\\ u_2^{**}(x,t) + \varrho _2^{**} \zeta _1^{**}(x,t) + (u_2(1,t) - u_2^{**}(1,t)) \zeta _2^{**} (x,t), &{} \forall (x,t)\in G^{+}, \end{array}\right. } \end{aligned}$$

where \(\varvec{\zeta }^{*} = \)\(\begin{pmatrix} \zeta _1^{*}\\ \zeta _2^{*} \end{pmatrix}\) and \(\varvec{\zeta }^{**} = \)\(\begin{pmatrix} \zeta _1^{**}\\ \zeta _2^{**} \end{pmatrix}\) are the solutions of the following two-parameter singularly perturbed boundary value problems

$$\begin{aligned}&{\left\{ \begin{array}{ll} \epsilon \varvec{\zeta }^{*}_{xx}(x,t)+ \mu A \varvec{\zeta }^{*}_{x}(x,t)- B \varvec{\zeta }^{*}(x,t)-{D}\,\varvec{\zeta }_{t}^{*}(x,t) = 0, \quad \forall (x,t) \in G,\\ \varvec{\zeta }^{*}(0,t) = 1, \quad \varvec{\zeta }^{*}(1,t) = 0, \quad \varvec{\zeta }^{*}(x,0) = 0, \end{array}\right. } \end{aligned}$$
(8)
$$\begin{aligned}&{\left\{ \begin{array}{ll} \epsilon \varvec{\zeta }^{**}_{xx}(x,t)+ \mu A \varvec{\zeta }^{**}_{x}(x,t)- B \varvec{\zeta }^{**}(x,t)-{D}\,\varvec{\zeta }_{t}^{**}(x,t) = 0, \quad \forall (x,t) \in G\\ \varvec{\zeta }^{**}(0,t) = 1, \quad \varvec{\zeta }^{**}(1,t) = 0, \quad \varvec{\zeta }^{**}(x,0) = 0. \end{array}\right. } \end{aligned}$$
(9)

The constants \(\varrho _1^*\), \(\varrho _2^*\), \(\varrho _1^{**}\) and \(\varrho _2^{**}\) are chosen in such a way that \(\mathbf {{u}} \in {\mathbf {C}}^{1,4}(G)\). Moreover, \(0< \zeta ^*_i(x,t) < 1\) and \(0< \zeta ^{**}_i(x,t) < 1\) for \(i = 1,2\) on G; hence, \(\zeta _1^*\), \(\zeta _2^*\), \(\zeta _1^{**}\) and \(\zeta _2^{**}\) cannot attain a maximum or a minimum at interior points of the domain, and therefore their first derivatives with respect to the space variable never vanish.

In order to ensure that \(\mathbf {u}(x,t) \in {\mathbf {C}}^{1,4}(G)\), it is imposed that

$$\begin{aligned} \mathbf {u}(d^-,t) = \mathbf {u}(d^+,t)\quad \hbox {and}\quad \mathbf {u}_x(d^-,t) = \mathbf {u}_x(d^+,t), \end{aligned}$$

and the existence of the constants \(\varrho _1^*\), \(\varrho _2^*\), \(\varrho _1^{**}\) and \(\varrho _2^{**}\) follows from

$$\begin{aligned} \begin{vmatrix} \zeta ^*_2(d,t)&-\zeta ^*_1(d,t) \\ \zeta ^{*}_{{2}_{x}}(d,t)&- \zeta ^{*}_{{1}_{x}}(d,t) \end{vmatrix} \ne 0 \quad \text {and} \quad \begin{vmatrix} \zeta ^{**}_2(d,t)&-\zeta ^{**}_1(d,t) \\ \zeta ^{**}_{{2}_{x}}(d,t)&- \zeta ^{**}_{{1}_{x}}(d,t) \end{vmatrix} \ne 0. \end{aligned}$$

This follows from \(\zeta ^{*}_{{2}_{x}}(d,t) \, \zeta ^*_1(d,t) - \zeta ^{*}_{{1}_{x}}(d,t) \, \zeta ^*_2(d,t) > 0 \) and \(\zeta ^{**}_{{2}_{x}}(d,t) \, \zeta ^{**}_1(d,t) - \zeta ^{**}_{{1}_{x}}(d,t) \, \zeta ^{**}_2(d,t) > 0 \), and the proof is complete. \(\square \)

The differential operator L also satisfies the following continuous minimum principle on \({\bar{G}}\).

Lemma 2

(Minimum principle) Let us suppose that a function \(\mathbf {u} \in {\mathbf {C}}^0({\bar{G}}) \cap {\mathbf {C}}^2(G)\) satisfies \(\mathbf {u}(x,t)\ge \mathbf {0} \) on \(\varGamma \) and \(L\mathbf {u}(x,t) \le \mathbf {0}, \, \forall (x,t) \in G\) and \([\frac{\partial \mathbf {u}}{\partial x}](d,t) \le \mathbf {0}\). Also let \(b_{12}(x,t) \le 0\) and \(b_{21}(x,t) \le 0\) on \({\bar{G}}\). Then, if there exists a function \(\mathbf {p} = (p_1,p_2) \in {\mathbf {C}}^0({\bar{G}}) \cap {\mathbf {C}}^2 (G)\) such that \(\mathbf {p}(x,t) > \mathbf {0}\) on \(\varGamma \), \(L\mathbf {p}(x,t) \le \mathbf {0} \quad \forall (x,t) \in G\) and \([\frac{\partial \mathbf {p}}{\partial x}] (d,t) \le \mathbf {0},\) then \(\mathbf {u}(x,t) \ge \mathbf {0}\), \(\forall (x,t) \in {\bar{G}}\).

Proof

Define

$$\begin{aligned} \psi _1 = \max \limits _{(x,t) \in {\bar{\varOmega }} \times (0,T]} \left( -\frac{u_1}{p_1}\right) (x,t), \quad \psi _2 = \max \limits _{(x,t) \in {\bar{\varOmega }} \times (0,T]} \left( -\frac{u_2}{p_2}\right) (x,t) \end{aligned}$$

and \(\psi = \max \bigg \{\psi _1, \psi _2 \bigg \} \).

Let \((x_*,t_*) \in G^*\) be a point at which \(\mathbf {u}\) attains its minimum value in \({\bar{\varOmega }} \times (0,T]\), taking the assumptions on the boundary values into account. Clearly, either \((x_*,t_*) \in G\) or \((x_*,t_*) = (d,t)\). Assume that the lemma is not true; the proof is completed by showing that this leads to a contradiction. If \(\mathbf {u}\) takes negative values, then \(\psi > 0\) and there exists a point \((x_*,t_*) \in G^*\) such that either \(\psi _1 = \psi \), \(\psi _2 = \psi \), or \(\psi _1 = \psi _2 = \psi \), and \((\mathbf {u} + \psi \mathbf {p}) (x,t) \ge 0\), \(\forall (x,t) \in {\bar{\varOmega }}\times (0,T]\).

Case (i): Consider that \((x_*,t_*) \in G\) and \(\psi _1 = (- \frac{u_1}{p_1}) (x_*,t_*) = \psi \), so that \((u_1 + \psi p_1) (x_*,t_*) = 0.\) This shows that \((u_1 + \psi p_1)\) attains its minimum value at \((x,t) = (x_*,t_*)\). Hence, it is

$$\begin{aligned} L(\mathbf {u}+ \psi \mathbf {p}) (x_*,t_*)= & {} \epsilon \frac{\partial ^2}{\partial x^2} (u_1 + \psi p_1) (x_*,t_*) + \mu a_1(x_*,t_*) \frac{\partial }{\partial x} (u_1 + \psi p_1) (x_*,t_*)\\&-\, b_{11}(x_*,t_*) (u_1 + \psi p_1) (x_*,t_*) - b_{12} (x_*,t_*) (u_2 + \psi p_2) (x_*,t_*) \\&-\, d_1(x_*,t_*) \frac{\partial }{\partial t} (u_1+\psi p_1) (x_*,t_*) \ge 0, \end{aligned}$$

which is a contradiction. Similarly, a contradiction would be reached if we consider \((x_*,t_*) \in G^*\) and \(\psi _2 = (- \frac{u_2}{p_2}) (x_*,t_*) = \psi \).

Case (ii): Consider that \((x_*,t_*) = (d,t_*)\) and \(\psi _1 = (-\frac{u_1}{p_1}) (x_*,t_*) = \psi \). Here again, \((u_1 + \psi p_1) (x_*,t_*) = 0,\) and \( (u_1+\psi p_1)\) attains its minimum value at \((x,t) = (x_*,t_*)\). Hence, we have

$$\begin{aligned} 0 < \bigg [\frac{\partial }{\partial x}(u_1 + \psi p_1)\bigg ](x_*,t_*) = \bigg [\frac{\partial u_1}{\partial x}\bigg ] (d,t_*) + \psi \bigg [\frac{\partial }{\partial x} p_1\bigg ] (d,t_*) \le 0, \end{aligned}$$

which is a contradiction. Similarly, a contradiction is reached if we choose \(\psi _2 =(-\frac{u_2}{p_2})(x_*,t_*) = \psi \) and \((x_*,t_*) = (d,t_*)\).

Hence, \(\mathbf {u} (x,t) \ge \mathbf {0}\), \(\forall (x,t) \in {\bar{\varOmega }} \times (0,T]\). \(\square \)

Using arguments similar to those presented in [21], together with Lemmas 1 and 2, the following results about the boundedness of the solution and its derivatives can be established.

Theorem 1

(Stability result) Let \(\mathbf {u}(x,t)\) be the solution of (2)–(3). Then

$$\begin{aligned} \parallel {\mathbf {u}}\parallel _{{\bar{G}}} \le \max \left\{ \parallel \mathbf {u}\parallel _{\varGamma },\parallel \frac{t}{\beta }\mathbf {f}\parallel _{G^*}\right\} . \end{aligned}$$

Lemma 3

The derivatives of the solution \(\mathbf {u}(x,t)\) of (2)–(3) satisfy the following bounds for all non-negative integers \(k, m\) such that \(1 \le k + 2m \le 3\):

  • If \( \mu ^2 \le C\epsilon \), then

    $$\begin{aligned} \left\| \dfrac{\partial ^{k+m}\mathbf {u}}{\partial x^k \partial t^m}\right\|\le & {} \dfrac{C}{\sqrt{\epsilon }^k}\max \left\{ \left\| \mathbf {u}\right\| , \sum _{i+2j = 0}^2(\sqrt{\epsilon })^i \left\| \dfrac{\partial ^{i+j}\mathbf {f}}{\partial x^i\partial t^j}\right\| , \sum _{i=0}^4 \left\| \dfrac{d^i\mathbf {g}_1}{dx^i}\right\| _{\varGamma _0} \right. \\&\left. + \bigg [\left\| \dfrac{d^i\mathbf {g}_2}{dt^i} \right\| + \left\| \dfrac{d^i\mathbf {g}_3}{dt^i} \right\| \bigg ]_{\varGamma _1 \cup \varGamma _2}\right\} , \end{aligned}$$
  • If \( \mu ^2 \ge C\epsilon \), then

    $$\begin{aligned} \left\| \dfrac{\partial ^{k+m}\mathbf {u}}{\partial x^k \partial t^m}\right\|\le & {} C \left( \dfrac{\mu }{\epsilon }\right) ^k\left( \dfrac{\mu ^2}{\epsilon }\right) ^m\\&\max \left\{ \left\| \mathbf {u}\right\| , \sum _{i+2j = 0}^2\left( \dfrac{\epsilon }{\mu }\right) ^i\left( \dfrac{\epsilon }{\mu ^2}\right) ^{j+1} \left\| \dfrac{\partial ^{i+j}\mathbf {f}}{\partial x^i\partial t^j}\right\| , \sum _{i=0}^4 \left\| \dfrac{d^i\mathbf {g}_1}{dx^i}\right\| _{\varGamma _0} \right. \\&\quad \left. + \bigg [\left\| \dfrac{d^i\mathbf {g}_2}{dt^i} \right\| +\left\| \dfrac{d^i\mathbf {g}_3}{dt^i} \right\| \bigg ]_{\varGamma _1 \cup \varGamma _2} \right\} , \end{aligned}$$

where C depends only on the coefficients A, B, D and their derivatives.

Corollary 1

The second order time derivative of the solution of (2)–(3) satisfies the bound

$$\begin{aligned} \Vert \mathbf {u}_{tt}(x,t) \Vert \le {\left\{ \begin{array}{ll} C, &{}\mathrm{if} \quad \mu \le C \sqrt{\epsilon },\\ C \frac{\mu ^4}{\epsilon ^2}, &{}\mathrm{if} \quad \mu > C \sqrt{\epsilon }. \end{array}\right. } \end{aligned}$$

Proof

It can be readily obtained by using the results in [21] and in Lemma 3. \(\square \)

3.1 Decomposition of the solution

To obtain sharper bounds in the error analysis, the solution \(\mathbf {u}(x,t)\) will be decomposed into a regular component \(\mathbf {r}(x,t)\), and two singular components, \(\mathbf {s}_l(x,t)\) and \(\mathbf {s}_r(x,t)\), as follows

$$\begin{aligned} \mathbf {u}(x,t) =\mathbf {r}(x,t)+\mathbf {s}_l(x,t)+\mathbf {s}_r(x,t). \end{aligned}$$

We will use the following notation to distinguish whether \(\mathbf {r}(x,t)\) is considered to the left or to the right of \(x=d\):

$$\begin{aligned}&(\mathbf {r})^-(x,t)=\mathbf {r}(x,t), \quad \forall (x,t) \in G^-,\\&(\mathbf {r})^+(x,t)=\mathbf {r}(x,t), \quad \forall (x,t) \in G^+. \end{aligned}$$

Similarly, we will use the following notation to distinguish whether the singular components are considered to the left or to the right of \(x=d\), respectively:

$$\begin{aligned}&(\mathbf {s}_l)^-(x,t)=\mathbf {s}_l(x,t), \quad \forall (x,t) \in G^-,\\&(\mathbf {s}_l)^+(x,t)=\mathbf {s}_l(x,t), \quad \forall (x,t) \in G^+, \end{aligned}$$

and

$$\begin{aligned}&(\mathbf {s}_r)^-(x,t)=\mathbf {s}_r(x,t), \quad \forall (x,t) \in G^-,\\&(\mathbf {s}_r)^+(x,t)=\mathbf {s}_r(x,t), \quad \forall (x,t) \in G^+. \end{aligned}$$

The analysis must be split into two cases, depending on the ratio of \(\mu \) to \(\sqrt{\epsilon }\), namely \(\sqrt{\alpha }\mu \le \sqrt{\rho \epsilon }\) or \(\sqrt{\alpha }\mu > \sqrt{\rho \epsilon }\), which determines the presence of layers of different widths. In what follows, we analyze these two possibilities.

  1.

    Case \(\sqrt{\alpha }\mu \le \sqrt{\rho \epsilon }\): In this case the regular component \(\mathbf {r}(x,t)\) satisfies

    $$\begin{aligned}&L\mathbf {r}(x,t) = \mathbf {f}(x,t)\quad \quad \forall (x,t) \in G,\nonumber \\&\mathbf {r}(x,t) = \mathbf {u}(x,t)\quad \text { on } \, \varGamma _0, \quad \mathbf {r}(x,t) = \mathbf {0} \quad \text { on } \varGamma _1 \cup \varGamma _2, \nonumber \\&[\mathbf {r}](d,t) = \mathbf {0},\quad \quad [\mathbf {r}_x](d,t) = \mathbf {0}. \end{aligned}$$
    (10)

    which, according to [25, 26], can be written in the form \(\mathbf {r}=\mathbf {r}_{0} + \sqrt{\epsilon }\, \mathbf {r}_{1} + \epsilon \,\mathbf {r}_{2} \), where the \( \mathbf {r}_{i}\) satisfy, respectively, the following problems

    $$\begin{aligned} {\left\{ \begin{array}{ll} -B\mathbf {r}_{0}(x,t) - D\frac{\partial {\mathbf {r}_{0}}}{\partial {t}}(x,t) = \mathbf {f}(x,t),\quad \mathbf {r}_{0}(x,t) = \mathbf {u}(x,t) \quad \text {on} \quad \varGamma _{0},\\ -B \mathbf {r}_{1}(x,t) -D \frac{\partial {\mathbf {r}_{1}}}{\partial {t}}(x,t) = \frac{\partial ^{2}{\mathbf {r}_{0}}}{\partial {x}^{2}}(x,t), \quad \mathbf {r}_{1}(x,t) =\mathbf {0} \quad \text {on} \quad \varGamma _{0},\\ L\mathbf {r}_{2}(x,t) = \frac{\partial ^{2}\mathbf {r}_{1}}{\partial x^{2}}, \quad \mathbf {r}_{2}(x,t) = \mathbf {0}, \quad \forall (x,t)\in {\bar{G}}\backslash G^*.\\ \end{array}\right. } \end{aligned}$$
    (11)

Similarly, the singular components \(\mathbf {s}_{l}(x,t), \mathbf {s}_{r}(x,t) \) satisfy the following problems:

$$\begin{aligned} {\left\{ \begin{array}{ll} L\mathbf {s}_{l}(x,t) = \mathbf {0} \quad \text {on} \quad G^*, \quad \mathbf {s}_{l}(x,t) = \mathbf {0} \quad \text {on} \quad \varGamma _{0}\cup \varGamma _{2}, \\ \mathbf {s}_{l}(x,t) = \mathbf {u}(x,t) - \mathbf {r}(x,t) \quad \text {on} \quad \varGamma _{1}\\ L\mathbf {s}_{r}(x,t) = \mathbf {0} \quad \text {on} \quad G^*, \quad \mathbf {s}_{r}(x,t) = \mathbf {0} \quad \text {on} \quad \varGamma _{0}\cup \varGamma _{1}, \\ \mathbf {s}_{r}(x,t) = \mathbf {u}(x,t) - \mathbf {r}(x,t) \quad \text {on} \quad \varGamma _{2}.\\ \end{array}\right. } \end{aligned}$$
(12)

Considering the above, we have that

$$\begin{aligned}{}[\mathbf {s}_r](d,t) = -[\mathbf {r}](d,t) - [\mathbf {s}_l](d,t) \end{aligned}$$

and

$$\begin{aligned} \left[ \frac{\partial }{\partial x}\mathbf {s}_r\right] (d,t) = -\left[ \frac{\partial }{\partial x}\mathbf {r}\right] (d,t) - \left[ \frac{\partial }{\partial x}\mathbf {s}_l\right] (d,t), \end{aligned}$$

while the solution of (2)–(3) may be expressed as

$$\begin{aligned} \mathbf {u}= {\left\{ \begin{array}{ll} (\mathbf {r})^{-} + ({\mathbf {s}_l})^{-} + ({\mathbf {s}_r})^- &{} \text {for} \quad x < d, \quad t\in (0,T], \\ ({\mathbf {r}})^{-} + ({\mathbf {s}_l})^{-} + ({\mathbf {s}_r})^- = ({\mathbf {r}})^{+} + ({\mathbf {s}_l})^{+} + ({\mathbf {s}_r})^+ &{} \text {for} \quad x=d, \quad t\in (0,T],\\ ({\mathbf {r}})^{+} + ({\mathbf {s}_l})^{+} + ({\mathbf {s}_r})^+ &{} \text {for} \quad x > d, \quad t \in (0,T]. \end{array}\right. } \end{aligned}$$

From (11)–(12), upper bounds on the derivatives of the regular and singular components can be readily obtained, based on ideas similar to those in [21, 25]. These results are stated in the following lemmas.

Lemma 4

The derivatives of the regular component \(\mathbf {r}(x,t)\) satisfy the following bounds

$$\begin{aligned} \left\| \frac{\partial ^{k+m}\mathbf {r}}{\partial x^{k} \partial t^{m}}\right\| _{{\bar{G}}} \le C (1+ \epsilon ^{1-k/2}), \quad 0 \le k+2m \le 4, \end{aligned}$$
(13)

where C is a constant independent of \(\epsilon \), \(\mu \).

Lemma 5

The derivatives of the singular components \(\mathbf {s}_l(x,t)\) and \(\mathbf {s}_r(x,t)\) satisfy the following bounds with \(0 \le k+2m \le 4\),

figure a

where C is a constant independent of \(\epsilon \), \(\mu \).

  2.

    Case \(\sqrt{\alpha }\mu > \sqrt{\rho \epsilon }\): Now, the regular component \(\mathbf {r}(x,t)\) satisfies

    $$\begin{aligned}&L\mathbf {r}(x,t) = \mathbf {f}(x,t)\quad \forall (x,t) \in G,\\&\mathbf {r}(x,t) = \mathbf {u}(x,t)\quad \text {on } \varGamma _0 \cup \varGamma _2, \quad \mathbf {r}(x,t) = \mathbf {0} \quad \text {on } \varGamma _1, \\&[\mathbf {r}](d,t) = \mathbf {0},\quad [\mathbf {r}_x](d,t) = \mathbf {0}, \end{aligned}$$

    which, according to [25, 26], can be written in the form \(\mathbf {r}=\mathbf {r}_{0} + \epsilon \mathbf {r}_{1} + \epsilon ^2\mathbf {r}_{2}\), where the \( \mathbf {r}_{i}\) satisfy the following problems

    $$\begin{aligned} {\left\{ \begin{array}{ll} \mu A\frac{\partial {\mathbf {r}_{0}}}{\partial {x}}(x,t) -B\mathbf {r}_{0}(x,t) - D\frac{\partial {\mathbf {r}_{0}}}{\partial {t}}(x,t) = \mathbf {f}(x,t),\quad \mathbf {r}_{0}(x,t) = \mathbf {u}(x,t) \quad \text {on} \quad \varGamma _{0}\cup \varGamma _2,\\ \mu A\frac{\partial {\mathbf {r}_{1}}}{\partial {x}}(x,t)-B \mathbf {r}_{1}(x,t) -D \frac{\partial {\mathbf {r}_{1}}}{\partial {t}}(x,t) = \frac{\partial ^{2}{\mathbf {r}_{0}}}{\partial {x}^{2}}(x,t), \quad \mathbf {r}_{1}(x,t) =\mathbf {0} \quad \text {on} \quad \varGamma _{0}\cup \varGamma _2,\\ L\mathbf {r}_{2}(x,t) = \frac{\partial ^{2}\mathbf {r}_{1}}{\partial x^{2}}, \quad \mathbf {r}_{2}(x,t) = \mathbf {0}, \quad \forall (x,t)\in {\bar{G}}\backslash G^*. \end{array}\right. } \end{aligned}$$
    (18)

Similarly, the singular components \(\mathbf {s}_{l}(x,t), \mathbf {s}_{r}(x,t) \) satisfy the following problems

$$\begin{aligned} {\left\{ \begin{array}{ll} L\mathbf {s}_{l}(x,t) = \mathbf {0} \quad \text {on} \quad G, \quad \mathbf {s}_{l}(x,t) = \mathbf {0} \quad \text {on} \quad \varGamma _{0}\cup \varGamma _{2}, \\ \mathbf {s}_{l}(x,t) = \mathbf {u}(x,t) - \mathbf {r}(x,t) \quad \text {on} \quad \varGamma _{1},\\ L\mathbf {s}_{r}(x,t) = \mathbf {0} \quad \text {on} \quad G^*, \quad \mathbf {s}_{r}(x,t) = \mathbf {0} \quad \text {on} \quad \varGamma _{0}\cup \varGamma _{1}\cup \varGamma _2. \end{array}\right. } \end{aligned}$$
(19)

Considering the above, we have that

$$\begin{aligned}{}[\mathbf {s}_r](d,t) = -[\mathbf {r}](d,t) - [\mathbf {s}_l](d,t) \end{aligned}$$

and

$$\begin{aligned} \left[ \frac{\partial }{\partial x}\mathbf {s}_r\right] (d,t) = -\left[ \frac{\partial }{\partial x}\mathbf {r}\right] (d,t) - \left[ \frac{\partial }{\partial x}\mathbf {s}_l\right] (d,t). \end{aligned}$$

From (18)–(19), the upper bounds on the derivatives of the regular and singular components stated in the following lemmas can be proven using a procedure similar to that in [21, 25].

Lemma 6

The derivatives of the regular component \(\mathbf {r}(x,t)\) satisfy the following bounds

$$\begin{aligned} \left\| \frac{\partial ^{k+m}\mathbf {r}}{\partial x^{k} \partial t^{m}}\right\| _{G} \le C \left( 1+\left( \dfrac{\epsilon }{\mu }\right) ^{2-k}\right) , \end{aligned}$$
(20)

where C is a constant independent of \(\epsilon , \mu \) and \(0 \le k + 2m \le 3 \).

Lemma 7

The derivatives of the singular components \(\mathbf {s}_l(x,t)\) and \(\mathbf {s}_r(x,t)\) satisfy the following bounds with \(0 \le k + 2m \le 4\)

figure b

where C is a constant independent of \(\epsilon , \mu \).

4 Numerical scheme

In this section, the numerical approximation of (2)–(3) on a specially designed discrete mesh is addressed.

The interior points of the mesh in space are denoted by

$$\begin{aligned} \varOmega ^{N}= & {} (\varOmega ^-)^{N}\cup (\varOmega ^+)^{N}\cup \{d\}, \end{aligned}$$
(25)
$$\begin{aligned} (\varOmega ^-)^{N}= & {} \left\{ x_i : 1 \le i \le \frac{N}{2} - 1\right\} , \quad (\varOmega ^+)^{N} = \left\{ x_i : \frac{N}{2} + 1 \le i \le N-1\right\} ,\nonumber \\ \end{aligned}$$
(26)
$$\begin{aligned} {\bar{\varOmega }}^N= & {} \varOmega ^N\cup \{0,1\}, \end{aligned}$$
(27)

while the uniform mesh in time is denoted by

$$\begin{aligned} {\bar{\omega }}^{M} = \{k\tau , 0 \le k \le M, \tau = T/M\}. \end{aligned}$$
(28)

On \({\bar{\varOmega }}^N\), a piecewise-uniform mesh of N mesh intervals is constructed by splitting the space domain as

$$\begin{aligned}{}[0,1]= & {} [0,\sigma _1]\cup [\sigma _1,d-\sigma _2]\cup [d-\sigma _2,d]\cup [d,d+\sigma _1]\\&\cup [d+\sigma _1,1-\sigma _2]\cup [1-\sigma _2,1]. \end{aligned}$$

On each of the four sub-intervals \([0,\sigma _1]\), \([d-\sigma _2,d]\), \([d,d+\sigma _1]\) and \([1-\sigma _2,1]\), a uniform mesh with N/8 mesh intervals is used, whereas the two sub-intervals \([\sigma _1,d-\sigma _2]\) and \([d+\sigma _1,1-\sigma _2]\) each carry a uniform mesh with N/4 mesh intervals.

Fig. 1 Mesh description of the domain \({\bar{G}}^{N,M}\)

In this way we obtain the discrete domain \({\bar{G}}^{N,M} = {\bar{\varOmega }}^{N}\times {\bar{\omega }}^{M}\), as shown in Fig. 1. The transition points are taken to be

$$\begin{aligned} \sigma _1= & {} \left\{ \begin{array}{ll} \min \left\{ \dfrac{d}{4}, 2 \sqrt{\dfrac{\epsilon }{\rho \alpha }} \ln N \right\} , &{}\quad \text {if} \quad \alpha \mu ^2< \rho \epsilon \\ \min \left\{ \dfrac{d}{4}, \dfrac{2 \epsilon }{\mu \alpha } \ln N \right\} , &{}\quad \text {if} \quad \alpha \mu ^2 \ge \rho \epsilon \\ \end{array}\right. \\ \sigma _2= & {} \left\{ \begin{array}{ll} \min \left\{ \dfrac{d}{4}, 2 \sqrt{\dfrac{\epsilon }{\rho \alpha }} \ln N \right\} , &{} \quad \text {if} \quad \alpha \mu ^2 < \rho \epsilon \\ \min \left\{ \dfrac{d}{4}, \dfrac{2 \mu }{\rho } \ln N \right\} , &{}\quad \text {if} \quad \alpha \mu ^2 \ge \rho \epsilon \,.\\ \end{array}\right. \end{aligned}$$

In view of this, the mesh points of the spatial variable are given by

$$\begin{aligned} x_{i}= \left\{ \begin{array}{ll} ih_1 &{} \quad \text {for} \quad 0 \le i \le N/8\\ \sigma _1 + (i - N/8)H_1 &{} \quad \text {for} \quad N/8\le i \le 3N/8\\ (d-\sigma _2)+(i-3N/8)h_2 &{} \quad \text {for} \quad 3N/8 \le i \le N/2\\ d + (i - N/2)h_1 &{} \quad \text {for} \quad N/2 \le i \le 5N/8\\ d+\sigma _1 + (i-5N/8)H_2 &{} \quad \text {for} \quad 5N/8 \le i \le 7N/8\\ (1-\sigma _2)+(i-7N/8)h_2 &{} \quad \text {for} \quad 7N/8 \le i \le N \end{array} \right. \end{aligned}$$

where the step sizes are given by

$$\begin{aligned} h_1= \dfrac{8\sigma _1}{N},\quad H_1= \dfrac{4(d-\sigma _1-\sigma _2)}{N}, \quad h_2= \dfrac{8\sigma _2}{N},\quad H_2= \dfrac{4(1-\sigma _2-d-\sigma _1)}{N}.\nonumber \\ \end{aligned}$$
(29)
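For concreteness, the transition points and mesh points defined above can be generated with a short script. This is a minimal sketch, assuming \(\alpha \) and \(\rho \) are the constants of the continuous problem; the function name is illustrative, not from the paper.

```python
import numpy as np

def shishkin_mesh(N, eps, mu, d, alpha, rho):
    """Piecewise-uniform mesh on [0,1] with N intervals (N divisible by 8).

    The transition widths sigma1, sigma2 follow the two-parameter choice
    in the text, switching on the sign of alpha*mu^2 - rho*eps.
    """
    assert N % 8 == 0
    if alpha * mu**2 < rho * eps:
        sigma1 = min(d / 4, 2 * np.sqrt(eps / (rho * alpha)) * np.log(N))
        sigma2 = min(d / 4, 2 * np.sqrt(eps / (rho * alpha)) * np.log(N))
    else:
        sigma1 = min(d / 4, 2 * eps / (mu * alpha) * np.log(N))
        sigma2 = min(d / 4, 2 * mu / rho * np.log(N))
    # six uniform pieces: N/8, N/4, N/8 intervals on [0,d], same on [d,1]
    pieces = [
        np.linspace(0.0, sigma1, N // 8 + 1),
        np.linspace(sigma1, d - sigma2, N // 4 + 1),
        np.linspace(d - sigma2, d, N // 8 + 1),
        np.linspace(d, d + sigma1, N // 8 + 1),
        np.linspace(d + sigma1, 1.0 - sigma2, N // 4 + 1),
        np.linspace(1.0 - sigma2, 1.0, N // 8 + 1),
    ]
    # drop duplicated endpoints when concatenating the pieces
    x = np.concatenate([p if i == 0 else p[1:] for i, p in enumerate(pieces)])
    return x, sigma1, sigma2
```

The resulting array has \(N+1\) points with \(x_{N/2}=d\), so the interface always falls on a mesh point.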

On this piecewise uniform mesh, we use the following finite difference scheme

$$\begin{aligned} L^{N,M}\mathbf {U}_{i,j} \equiv (\epsilon \delta _{x}^2 + \mu A_{ij} D^+_x - B_{ij} - D_{ij} D_{t}^{-})\mathbf {U}_{i,j}= \mathbf {f}(x_{i},t_{j}), \quad \forall (x_{i},t_{j})\in G^{N,M},\nonumber \\ \end{aligned}$$
(30)

where \(\mathbf {f}(x_i,t_j)=\left( f_1(x_i,t_j), f_2(x_i,t_j) \right) ^T\), and \(\mathbf {U}_{i,j}\) denotes the approximations of the true values \(\mathbf {u}(x_{i},t_{j})\).

In addition, at the interface point \(i = N/2\) the scheme is replaced by the condition \(D^+_x\mathbf {U}(x_i,t_j) = D^-_x\mathbf {U}(x_i,t_j)\), with

$$\begin{aligned} {\left\{ \begin{array}{ll} \mathbf {U}_{0,j}=\mathbf {g}_2({t_j}), \quad \mathbf {U}_{N,j}=\mathbf {g}_3({t_j}) ,&{} j=0,1,\ldots ,M ;\\ \mathbf {U}_{i,0}=\mathbf {g}_1({x_i}), &{} i=0,1,\ldots ,N, \end{array}\right. } \end{aligned}$$
(31)

where

$$\begin{aligned}&A_{ij} = \begin{pmatrix} a_1(x_{i},t_{j}) &{} 0\\ 0 &{} a_2(x_{i},t_{j})\\ \end{pmatrix} ,\quad B_{ij}= \begin{pmatrix} b_{11}(x_i,t_j) &{} b_{12}(x_i,t_j)\\ b_{21}(x_i,t_j) &{} b_{22}(x_i,t_j)\\ \end{pmatrix}, \\&D_{ij} = \begin{pmatrix} d_1(x_{i},t_{j}) &{} 0\\ 0 &{} d_2(x_{i},t_{j})\\ \end{pmatrix}. \end{aligned}$$

We use the notations

$$\begin{aligned} G^{N,M}= & {} (\varOmega ^{N-} \cup \varOmega ^{N+}) \times \omega ^M \\= & {} \left\{ (x_i,t_j): 1 \le i \le N/2-1 \ \text {or} \ N/2+1 \le i \le N-1, \ 1 \le j \le M \right\} ,\\ {\bar{G}}^{N,M}= & {} {\bar{\varOmega }}^N \times {\bar{\omega }}^M = \big \{(x_i,t_j) : 0 \le i \le N, \ 0 \le j \le M \big \},\\ \varOmega ^{N-}= & {} \big \{x_i : 1 \le i \le N/2-1\big \},\\ \varOmega ^{N+}= & {} \big \{x_i : N/2+1 \le i \le N-1\big \},\\ \omega ^{M}= & {} \big \{t_j : 1 \le j \le M\big \}, \end{aligned}$$

while the discrete differential operators \( D_{x}^{+}, D_{x}^{-}, D_{t}^{-}\), and \(\delta _{x}^{2}\) are defined as follows

$$\begin{aligned}&D_{t}^{-}\mathbf {U}_{i,j}= \dfrac{\mathbf {U}_{i,j}-\mathbf {U}_{i,j-1}}{\tau }, \quad D_{x}^{+}\mathbf {U}_{i,j}= \dfrac{\mathbf {U}_{i+1,j}-\mathbf {U}_{i,j}}{x_{i+1} - x_i}, \\&D_{x}^{-}\mathbf {U}_{i,j}= \dfrac{\mathbf {U}_{i,j}-\mathbf {U}_{i-1,j}}{x_{i} - x_{i-1}} , \quad \delta _{x}^{2}\mathbf {U}_{i,j}= \dfrac{\left( D_{x}^{+}\mathbf {U}_{i,j}-D_{x}^{-}\mathbf {U}_{i,j}\right) }{(x_{i+1} - x_{i-1})/2}. \end{aligned}$$

The following notations will be used for the boundaries:

$$\begin{aligned}&\varGamma _{0}^{N,M} = \big \{(x_i,0)\vert x_i \in (\varOmega ^{-}\cup \varOmega ^{+})^N \big \},\quad \varGamma _{1}^{N,M} =\big \{(0,t_j)\vert t_j \in {\bar{\omega }}^M\big \},\\&\varGamma _{2}^{N,M} = \big \{(1,t_j)\vert t_j \in {\bar{\omega }}^M \big \}, \quad \varGamma ^{N,M} = \varGamma _{0}^{N,M}\cup \varGamma _{1}^{N,M} \cup \varGamma _{2}^{N,M}. \end{aligned}$$

Lemma 8

(Discrete minimum principle) Let \(L^{N,M}\) be the discrete operator given in (30), and assume that \(b_{12}(x_i,t_j) \le 0\) and \(b_{21}(x_i,t_j) \le 0\). Suppose that \(\mathbf {u}(x_{i},t_{j}) \ge \mathbf {0}\) on \(\varGamma ^{N,M} \cap {\bar{G}}^{N,M}\), \(L^{N,M}\mathbf {u}(x_{i},t_{j})\le \mathbf {0}\) on \(G \cap {\bar{G}}^{N,M}\) and \(D_{x}^{+}\mathbf {u}(x_{N/2},t_{j})- D_{x}^{-}\mathbf {u}(x_{N/2},t_{j})\le 0\) for \(t_{j}\in {\bar{\omega }}^{M}\). Then, if there exists a mesh function \(\mathbf {p}(x_i,t_j)\) such that \(\mathbf {p}(x_{i},t_{j}) \ge \mathbf {0}\) on \(\varGamma ^{N,M} \cap {\bar{G}}^{N,M}\), \(L^{N,M}\mathbf {p}(x_{i},t_{j})\le \mathbf {0}\) in \(G \cap {\bar{G}}^{N,M}\) and \(D_{x}^{+}\mathbf {p}(x_{N/2},t_{j})- D_{x}^{-}\mathbf {p}(x_{N/2},t_{j})\le 0\), it follows that \(\mathbf {u}(x_{i},t_{j}) \ge \mathbf {0}\) for all \((x_{i},t_{j})\in {\bar{G}}^{N,M}\).

Proof

Define

$$\begin{aligned} \psi = \max \left\{ \psi _1 = \max \limits _{(x_i,t_j) \in {\bar{G}}^{N,M}} \left( -\frac{u_1}{p_1}\right) (x_i,t_j) , \quad \psi _2 = \max \limits _{(x_i,t_j) \in {\bar{G}}^{N,M}} \left( -\frac{u_2}{p_2}\right) (x_i,t_j) \right\} . \end{aligned}$$

Let \((x_i^*,t_j^*)\) be a point at which \(\mathbf {u}\) attains its minimum value in \({\bar{G}}^{N,M}\). From the assumptions on the boundary values it is clear that \((x_i^*,t_j^*) \in G^{N,M}\) or \((x_i^*,t_j^*) = (d,t_j)\). Assume that the lemma is not true; the proof is completed by showing that this leads to a contradiction. If \(\mathbf {u}(x_i,t_j) < 0\) at some point, then \(\psi > 0\), and there exists a point \((x_i^*,t_j^*) \in G^{N,M}\) such that either \(\psi _1 = \psi \), or \(\psi _2 = \psi \), or \(\psi _1 = \psi _2 = \psi \), and \((\mathbf {u} + \psi \mathbf {p}) (x_i,t_j) \ge 0\) for all \((x_i,t_j) \in {\bar{G}}^{N,M}\).

Case (i): Consider \((x_i^*,t_j^*) \in G^{N,M}\) with \(\psi _1 = (- \frac{u_1}{p_1}) (x_i^*,t_j^*) = \psi \), so that \((u_1 + \psi p_1) (x_i^*,t_j^*) = 0\). This shows that \((u_1 + \psi p_1)\) attains its minimum value at \((x_i,t_j) = (x_i^*,t_j^*)\). Hence, we have

$$\begin{aligned} L(\mathbf {u}+ \psi \mathbf {p}) (x_i^*,t_j^*)= & {} \epsilon \delta _{x}^2 (u_1 + \psi p_1) (x_i^*,t_j^*) + \mu a_1(x_i^*,t_j^*) D_x^+(u_1 + \psi p_1) (x_i^*,t_j^*)\\&-\, b_{11}(x_i^*,t_j^*) (u_1 + \psi p_1) (x_i^*,t_j^*) - b_{12} (x_i^*,t_j^*) (u_2 + \psi p_2) (x_i^*,t_j^*) \\&-\, d_1(x_i^*,t_j^*) D_t^- (u_1+\psi p_1) (x_i^*,t_j^*) \ge 0, \end{aligned}$$

which is a contradiction. Similarly, a contradiction would be reached if we consider \((x_i^*,t_j^*) \in G^{N,M}\) and \(\psi _2 = (- \frac{u_2}{p_2}) (x_i^*,t_j^*) = \psi \).

Case (ii): Consider \((x_i^*,t_j^*) = (d,t_j^*)\) with \(\psi _1 = (-\frac{u_1}{p_1}) (x_i^*,t_j^*) = \psi \). Here again \((u_1 + \psi p_1) (x_i^*,t_j^*) = 0\), and \( (u_1+\psi p_1)\) attains its minimum value at \((x_i,t_j) = (x_i^*,t_j^*)\). Hence, we have

$$\begin{aligned} 0 < \left[ \frac{\partial }{\partial x}(u_1 + \psi p_1)\right] \left( x_i^*,t_j^*\right) = \left[ \frac{\partial u_1}{\partial x}\right] \left( d,t_j^*\right) + \psi \left[ \frac{\partial }{\partial x} p_1\right] \left( d,t_j^*\right) \le 0, \end{aligned}$$

which is a contradiction. Similarly, a contradiction is reached if we choose \(\psi _2 =\left( -\frac{u_2}{p_2}\right) \left( x_i^*,t_j^*\right) = \psi \) and \((x_i^*,t_j^*) = (d,t_j^*)\).

Hence, \(\mathbf {u} (x_i,t_j) \ge \mathbf {0}\) for all \((x_i,t_j) \in {\bar{G}}^{N,M}\). \(\square \)

The following stability result is obtained as an immediate consequence of the discrete minimum principle.

Theorem 2

(Discrete stability result) Let \(\mathbf {U}(x_{i},t_{j})\) be the solution of (30)–(31), then it holds that

$$\begin{aligned} \parallel \mathbf {U}(x_{i},t_{j})\parallel _{{\bar{G}}^{N,M}} \le \max \left\{ \parallel \mathbf {U}(x_{i},t_{j})\parallel _{\varGamma ^{N,M}}, \parallel \frac{t}{\beta }\mathbf {f}(x_{i},t_{j}) {\parallel _{G^{N,M}}}\right\} . \end{aligned}$$

4.1 Truncation error analysis

The solution \(\mathbf {U}(x_i,t_j)\) of the discrete problem is decomposed in an analogous manner to the above decomposition of the solution \(\mathbf {u}(x,t)\) of (2)–(3). Thus, we may write

$$\begin{aligned} \mathbf {U}(x_i,t_j) = \mathbf {V}(x_i,t_j) + \mathbf {W}_L(x_i,t_j) + \mathbf {W}_R(x_i,t_j), \quad \forall (x_i,t_j) \in {\bar{G}}^{N,M}, \end{aligned}$$

where \(\mathbf {V}(x_i,t_j)\) is the solution of the inhomogeneous problem

$$\begin{aligned} L^{N,M}\mathbf {V}(x_i,t_j) = \mathbf {f}(x_i,t_j) , \quad \mathbf {V}(x_i,t_j) = \mathbf {r}(x_i,t_j) \, \hbox {in} \, \varGamma ^{N,M}, \end{aligned}$$

with

$$\begin{aligned}{}[\mathbf {V}](x_{N/2},t_j) = [\mathbf {r}](x_{N/2},t_j), \end{aligned}$$

while \(\mathbf {W}_L(x_i,t_j)\) and \(\mathbf {W}_R(x_i,t_j)\) are respectively solutions of the homogeneous problems

$$\begin{aligned}&L^{N,M}\mathbf {W}_L(x_i,t_j) = \mathbf {0} \quad \text {in} \quad G^{N,M}; \quad \mathbf {W}_{L}(x_i,t_j) = \mathbf {s}_{l}(x_i,t_j) \quad \text {in} \quad \varGamma ^{N,M}, \quad \\&L^{N,M}\mathbf {W}_R(x_i,t_j) = \mathbf {0} \quad \text {in} \quad G^{N,M};\quad \mathbf {W}_{R}(x_i,t_j) = \mathbf {s}_{r}(x_i,t_j) \quad \text {in} \quad \varGamma ^{N,M}, \end{aligned}$$

being

$$\begin{aligned}&[\mathbf {W}_R](d,t_j) = [\mathbf {s}_r](d,t_j), \quad [\mathbf {W}_L](d,t_j) = [\mathbf {s}_l](d,t_j), \\&\left[ \frac{\partial }{\partial x}\mathbf {W}_R\right] (d,t_j) = \left[ \frac{\partial }{\partial x}\mathbf {s}_r\right] (d,t_j), \quad \left[ \frac{\partial }{\partial x}\mathbf {W}_L\right] (d,t_j) = \left[ \frac{\partial }{\partial x}\mathbf {s}_l\right] (d,t_j). \end{aligned}$$

From the above, the discrete solution may be written as

$$\begin{aligned} \mathbf {U}(x_i,t_j)= {\left\{ \begin{array}{ll} \left( \mathbf {V}^{-} + {\mathbf {W}_L}^{-} + {\mathbf {W}_R}^-\right) (x_i,t_j)&{} \text {for} \quad x_i < d, \, t_j\in (0,T], \\ \left( {\mathbf {V}}^{-} + {\mathbf {W}_L}^{-} + {\mathbf {W}_R}^-\right) (x_i,t_j) &{}\\ \quad = \left( {\mathbf {V}}^{+} + {\mathbf {W}_L}^{+} + {\mathbf {W}_R}^+\right) (x_i,t_j)&{} \text {for} \quad x_i=d, \quad t_j\in (0,T], \\ \left( {\mathbf {V}}^{+} + {\mathbf {W}_L}^{+} + {\mathbf {W}_R}^+\right) (x_i,t_j)&{} \text {for} \quad x_i > d,\quad t_j\in (0,T]. \\ \end{array}\right. } \end{aligned}$$

Lemma 9

The regular component, \(\mathbf {V}\), of the discrete solution satisfies the following estimate

\(\Vert \mathbf {V} - \mathbf {r} \Vert _{{\bar{G}}^{N,M} \backslash (d,t_j)} \le CN^{-1} + CM^{-1},\)

where \(\mathbf {r}\) is the regular component of the continuous solution \(\mathbf {u}(x,t)\).

Proof

Applying arguments similar to those used to obtain the bounds in (13) and (20), we have, for \((x_i,t_j) \in {{\bar{G}}^{N,M} \backslash (d,t_j)}\),

$$\begin{aligned} \Vert L^{N,M} (\mathbf {V}^- - \mathbf {r}^-) (x_i,t_j)\Vert\le & {} C N^{-1} (\epsilon \Vert \mathbf {r}^-_{xxx} \Vert + \mu \Vert \mathbf {r}^-_{xx} \Vert ) + C M^{-1} \Vert \mathbf {r}^-_{tt} \Vert \\\le & {} C N^{-1} + C M^{-1}\\ \Vert L^{N,M} (\mathbf {V}^+ - \mathbf {r}^+) (x_i,t_j)\Vert\le & {} C N^{-1} (\epsilon \Vert \mathbf {r}^+_{xxx} \Vert + \mu \Vert \mathbf {r}^+_{xx} \Vert ) + C M^{-1} \Vert \mathbf {r}^+_{tt} \Vert \\\le & {} C N^{-1} + C M^{-1}. \end{aligned}$$

Now, applying the barrier function technique and Lemma 8, it can be verified that the truncation error of the regular component satisfies

$$\begin{aligned} \Vert \mathbf {V} - \mathbf {r} \Vert _{{\bar{G}}^{N,M} \backslash (d,t_j)} \le CN^{-1} + CM^{-1}. \end{aligned}$$

\(\square \)

Lemma 10

The truncation errors of the singular components satisfy

$$\begin{aligned} \Vert \mathbf {W}_L^- - \mathbf {s}_l^- \Vert _{{\bar{G}}^{N,M} \backslash (d,t_j)}\le & {} {\left\{ \begin{array}{ll} CN^{-1} \ln N + C M^{-1}&{} \mathrm{if} \quad \mu \le C \sqrt{\epsilon },\\ CN^{-1} \ln N + C M^{-1}&{} \mathrm{if} \quad \mu > C \sqrt{\epsilon }, \end{array}\right. } \end{aligned}$$
(32)
$$\begin{aligned} \Vert \mathbf {W}_L^+ - \mathbf {s}_l^+ \Vert _{{\bar{G}}^{N,M} \backslash (d,t_j)}\le & {} {\left\{ \begin{array}{ll} CN^{-1} \ln N + C M^{-1} &{}\mathrm{if} \quad \mu \le C \sqrt{\epsilon },\\ CN^{-1} (\ln N)^2 + C M^{-1}&{} \mathrm{if} \quad \mu > C \sqrt{\epsilon }, \end{array}\right. } \end{aligned}$$
(33)
$$\begin{aligned} \Vert \mathbf {W}_R^- - \mathbf {s}_r^- \Vert _{{\bar{G}}^{N,M} \backslash (d,t_j)}\le & {} {\left\{ \begin{array}{ll} CN^{-1} \ln N + C M^{-1} &{}\mathrm{if} \quad \mu \le C \sqrt{\epsilon },\\ CN^{-1} (\ln N)^2 + C M^{-1} &{}\mathrm{if} \quad \mu > C \sqrt{\epsilon }, \end{array}\right. } \end{aligned}$$
(34)
$$\begin{aligned} \Vert \mathbf {W}_R^+ - \mathbf {s}_r^+ \Vert _{{\bar{G}}^{N,M} \backslash (d,t_j)}\le & {} {\left\{ \begin{array}{ll} CN^{-1} \ln N + C M^{-1}&{} \mathrm{if} \quad \mu \le C \sqrt{\epsilon },\\ CN^{-1} \ln N + C M^{-1}&{} \mathrm{if} \quad \mu > C \sqrt{\epsilon }. \end{array}\right. } \end{aligned}$$
(35)

Proof

Applying arguments similar to those used to obtain the bounds in (14) and (23), we have, for \((x_i,t_j) \in {{\bar{G}}^{N,M} \backslash (d,t_j)}\),

$$\begin{aligned} \left\| L^{N,M} \left( \mathbf {W}^-_L - \mathbf {s}^-_l\right) (x_i,t_j)\right\| \le C N^{-1} \left( \epsilon \Vert \mathbf {s}^-_{lxxx} \Vert + \mu \Vert \mathbf {s}^-_{lxx} \Vert \right) + C M^{-1} \Vert \mathbf {s}^-_{ltt} \Vert . \end{aligned}$$

Similarly, having in mind (15) and (24), we have

$$\begin{aligned} \left\| L^{N,M} \left( \mathbf {W}^+_L - \mathbf {s}^+_l\right) (x_i,t_j)\right\| \le C N^{-1} \left( \epsilon \Vert \mathbf {s}^+_{lxxx} \Vert + \mu \Vert \mathbf {s}^+_{lxx} \Vert \right) + C M^{-1} \left\| \mathbf {s}^+_{ltt} \right\| . \end{aligned}$$

Finally, to obtain the required results in (32) and (33), one can apply the same techniques and ideas followed in [21]. The results in (34) and (35) can be obtained similarly. \(\square \)

Theorem 3

Let \(\mathbf {u}(x,t)\) be the exact solution of problem (2)–(3) and \(\mathbf {U}(x_{i},t_{j})\) the discrete solution of (30)–(31). Then, for N and M sufficiently large,

$$\begin{aligned} \Vert \mathbf {U}(x_{i},t_{j})-\mathbf {u}(x_{i},t_{j})\Vert \le {\left\{ \begin{array}{ll} C N^{-1} \ln N + C M^{-1}&{} \mathrm{if} \quad \mu \le C \sqrt{\epsilon },\\ C N^{-1} (\ln N)^2 + C M^{-1}&{} \mathrm{if} \quad \mu > C \sqrt{\epsilon }, \end{array}\right. } \end{aligned}$$

where \((x_i,t_j) \in {\bar{G}}^{N,M}\), and C is a constant independent of \(\epsilon , \mu \), N and M.

Proof

Using the ideas in [21, 23] and the results in Lemmas 2, 9 and 10, we can obtain, except at the point \(x_{N/2}=d\), the following

$$\begin{aligned} \Vert \mathbf {U}(x_{i},t_{j})-\mathbf {u}(x_{i},t_{j})\Vert \le {\left\{ \begin{array}{ll} C N^{-1} \ln N + C M^{-1}&{} \mathrm{if} \quad \mu \le C \sqrt{\epsilon },\\ C N^{-1} \ln ^2 N + C M^{-1} &{}\mathrm{if}\quad \mu > C \sqrt{\epsilon }. \end{array}\right. } \end{aligned}$$

To obtain the bounds at the point \( x_{N/2}=d\), we consider the following discrete barrier functions in the two mutually exclusive cases:

Case-(i): if \(\mu \le C \sqrt{\epsilon }\),

we consider \(\varvec{\psi }(x_i,t_j) = C N^{-1} \ln N + C \frac{h}{\sqrt{\epsilon }} {\varvec{\phi }}(x_i,t_j) \pm \mathbf {e}(x_i,t_j)\).

Here, \({\varvec{\phi }}(x_i,t_j)\) is the solution of the problem

$$\begin{aligned}&\epsilon \delta ^2_x{\varvec{\phi }}(x_i,t_j)+ \mu \alpha (x_i,t_j) D_x^-\varvec{\phi }(x_i,t_j) - \beta \varvec{\phi }(x_i,t_j) = 0\quad \forall (x_i,t_j) \in G^{N,M},\\&\varvec{\phi }(0,t_j)=0, \quad \varvec{\phi }(d,t_j) =1, \quad \varvec{\phi }(1,t_j)=0. \end{aligned}$$

Note: Here, \(\alpha \) and \(\beta \) are the same values that were defined for the continuous problem.

Case-(ii): if \(\mu > C \sqrt{\epsilon }\),

we consider

$$\begin{aligned} \varvec{\psi }(x_i,t_j) = C N^{-1} \ln ^2 N + {\left\{ \begin{array}{ll} \dfrac{C N^{-1} \sigma _2(x_i - d - \sigma _2) }{\mu ^2},&{} \forall (x_i,t_j)\in (d-\sigma _2, d)\times {\bar{\omega }}^M,\\ \dfrac{C N^{-1} \sigma _1 \mu ^2 (d+\sigma _1 -x_i)}{\epsilon ^2},&{} \forall (x_i,t_j)\in (d, d+\sigma _1)\times {\bar{\omega }}^M. \end{array}\right. } \end{aligned}$$

Based on the procedure and techniques adopted in [23, 24], applying Lemma 2 to the above barrier functions we get the following results:

For \(\mu \le C \sqrt{\epsilon }\),

$$\begin{aligned} \Vert \mathbf {U}(d,t_{j})-\mathbf {u}(d,t_{j})\Vert \le C N^{-1} \ln N + C M^{-1}\quad \forall t_{j} \in {\bar{\omega }}^M. \end{aligned}$$

For \(\mu > C\sqrt{\epsilon }\),

$$\begin{aligned} \Vert \mathbf {U}(d,t_{j})-\mathbf {u}(d,t_{j})\Vert\le & {} {\left\{ \begin{array}{ll} \dfrac{C N^{-1} \sigma _2^2}{\mu ^2},&{} \forall t_j \in {\bar{\omega }}^M \\ \dfrac{C N^{-1} \sigma _1^2 \mu ^2}{\epsilon ^2},&{} \forall t_j \in {\bar{\omega }}^M \end{array}\right. }\\\le & {} C N^{-1} \ln ^2 N + C M^{-1}. \end{aligned}$$

\(\square \)

Table 1 Maximum errors \(E^{N,M}_1,E^{N,M}_2\) and orders of convergence \(Q^{N,M}_1, Q^{N,M}_2\) corresponding to \(u_{1}\) and \(u_2\) for fixed \(\mu = 2^{-8}, \epsilon \in S_{\epsilon }\), and different values of NM with \(d=0.1,\,0.5,\,0.9\)
Table 2 Maximum errors \(E^{N,M}_1, E^{N,M}_2\) and orders of convergence \(Q^{N,M}_1, Q^{N,M}_2 \) corresponding to \(u_{1}\) and \(u_2\) for fixed \(\epsilon = 2^{-8}, \mu \in S_{\mu }\), and different values of NM with \(d=0.1,\,0.5,\,0.9\)

5 Numerical results

To show the efficiency of the proposed scheme and the accuracy of the results of the error analysis, we exhibit a numerical example with the discontinuity point d placed at several locations. For this test problem, the errors and the corresponding rates of convergence are illustrated in the accompanying tables.

Example 3.1

Consider the system of partial differential equations given by

$$\begin{aligned}&\epsilon \dfrac{\partial ^{2}u_{1}}{\partial x^{2}} + \mu (2+x)^2 \dfrac{\partial u_{1}}{\partial x} - u_{1} - 0.5u_{2} - \dfrac{\partial u_{1}}{\partial t} = f_{1}(x,t) , \quad \forall (x,t) \in (\varOmega ^{-}\cup \varOmega ^{+})\times (0,T],\\&\epsilon \dfrac{\partial ^{2}u_{2}}{\partial x^{2}} + \mu (2x+3) \dfrac{\partial u_{2}}{\partial x} - u_{1} - 2u_{2} - \dfrac{\partial u_{2}}{\partial t} = f_{2}(x,t) , \quad \forall (x,t) \in (\varOmega ^{-}\cup \varOmega ^{+})\times (0,T], \end{aligned}$$

with the initial and boundary conditions \(\mathbf {u}(x,0)=\mathbf {0},\; \mathbf {u}(0,t)=\mathbf {u}(1,t)= \mathbf {0},\) where

$$\begin{aligned} f_{1}(x,t)= & {} \left\{ \begin{array}{ll} (2x+1)t &{}\quad \mathrm{if} \quad 0 \le x< d\\ - t &{}\quad \mathrm{if} \quad d< x \le 1\\ \end{array}\right. ;\\ f_{2}(x,t)= & {} \left\{ \begin{array}{ll} -(3x+4)t &{}\quad \mathrm{if} \quad 0 \le x< d\\ 3t+2 &{}\quad \mathrm{if} \quad d < x \le 1.\\ \end{array}\right. \end{aligned}$$
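For reference, the data of this example can be transcribed directly into code. This is an illustrative sketch; the function names are not from the paper, and the coefficient matrices follow the sign convention of scheme (30).

```python
import numpy as np

def f1(x, t, d):
    """First component of the source term, discontinuous at x = d."""
    return (2.0 * x + 1.0) * t if x < d else -t

def f2(x, t, d):
    """Second component of the source term, discontinuous at x = d."""
    return -(3.0 * x + 4.0) * t if x < d else 3.0 * t + 2.0

def coeffs(x, t):
    """Coefficient matrices A, B, D of the example at a point (x, t)."""
    A = np.diag([(2.0 + x) ** 2, 2.0 * x + 3.0])   # convection coefficients
    B = np.array([[1.0, 0.5],                      # coupling: -u1 - 0.5*u2
                  [1.0, 2.0]])                     # coupling: -u1 - 2*u2
    D = np.eye(2)                                  # time-derivative weights
    return A, B, D
```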
Fig. 2 Surface plot of the approximate solution for \(\epsilon = 2^{-8}\), \(\mu = 2^{-4}, N=2M=128\) and \(d=0.5\)

Fig. 3 Surface plot of the approximate solution for \(\epsilon = 2^{-4}\), \(\mu = 2^{-8}, N=2M=128\) and \(d=0.5\)

As the exact solution of the example is not known, the accuracy of the numerical approximations is estimated by comparison with a solution computed on a twice-refined mesh, a device known as the double mesh principle. For fixed values of N, M and specified values of \(\epsilon ,\mu \), the maximum error \(E^{N,M}_{\epsilon , \mu }\) over all the grid points is determined by

$$\begin{aligned} E^{N,M}_{\epsilon , \mu } \equiv \displaystyle \max _{(x_i,t_j)\in {\bar{G}}^{N,M}} \left\{ \left\| U^{N,M} (x_i,t_j) - {\bar{U}}^{2N,2M} (x_i,t_j)\right\| \right\} \,, \end{aligned}$$

where \({\bar{U}}^{2N,2M} (x_i,t_j)\) is the linear interpolant of the mesh function \({U}^{2N,2M} (x_i,t_j)\) provided by the “interp” function in Matlab. In addition, the errors and orders of convergence are computed by fixing \(\mu \) and varying \(\epsilon \) over a large set; we have taken \(S_{\epsilon }= \{2^{-1},2^{-2},\ldots ,2^{-30}\}\). The maximum of these values is denoted by

$$\begin{aligned} E^{N,M}_\mu = \displaystyle \max _{ \epsilon \in S_{\epsilon }}E^{N,M}_{\epsilon ,\mu }. \end{aligned}$$

Using these values, one can estimate the order of convergence \(Q^{N,M}_\mu \), through the formula [22]

$$\begin{aligned} Q^{N,M}_\mu = \log _2\left( \dfrac{E^{N,M}_\mu }{E^{2N,2M}_\mu }\right) . \end{aligned}$$
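The double-mesh errors and orders above can be computed in a few lines. This is a minimal sketch with illustrative helper names, using piecewise-linear interpolation of the fine-mesh solution (here one-dimensional for brevity).

```python
import numpy as np

def double_mesh_error(x, U, x2, U2):
    """E^{N,M} = max_i |U^{N,M}(x_i) - Ubar^{2N,2M}(x_i)|, where Ubar is
    the piecewise-linear interpolant of the fine-mesh solution U2."""
    return np.max(np.abs(U - np.interp(x, x2, U2)))

def double_mesh_order(errors):
    """Q^{N,M} = log2(E^{N,M} / E^{2N,2M}) for successive (N, M) doublings."""
    E = np.asarray(errors, dtype=float)
    return np.log2(E[:-1] / E[1:])
```

For an almost first-order scheme the computed orders approach 1 as N and M grow, in agreement with the tables.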

Using the above formula, we display the values of \(E^{N,M}_\mu \) and \(Q^{N,M}_\mu \) for \(\epsilon = 2^{-i}\) ranging from \(i=1\) up to \(i=30\). We have chosen the number of mesh points in the spatial and temporal directions as \(N=2M=2^j, j=5,\ldots ,9\), and have considered three different cases according to the position of the point \(d \in (0,1)\). Specifically, we have taken \(d=0.1,0.5,0.9\), to analyze the performance of the proposed method near the endpoints and in the middle of the interval (0, 1). We have considered the time interval [0, 0.5] in all cases. The results obtained for a fixed value of \(\mu = 2^{-8}\) are shown in Table 1, where we have included the errors and approximate orders of convergence for each component.

Similarly, the maximum point-wise errors and orders of convergence for fixed \(\epsilon \) can be estimated as in Table 2, where we have taken \(\epsilon = 2^{-8}\) and \(S_{\mu }= \{2^{-1},2^{-2},\ldots ,2^{-30}\}\). This experiment validates that the method is almost first-order convergent, uniformly with respect to the perturbation parameters.

Figures 2 and 3 show the approximate solution for \(d=0.5\). One can observe the boundary layers at the ends of the spatial interval, as well as the interior layers around the discontinuity point d.