
1 Introduction

Delay differential equations arise frequently in the mathematical modelling of physical and biological phenomena and in control theory. A subclass of these equations consists of singularly perturbed ordinary differential equations with a delay. Such equations occur in the modelling of many practical phenomena, for example, the human pupil-light reflex [6], models of HIV infection [1], bistable devices in digital electronics [2], variational problems in control theory [3], first exit time problems in the modelling of activation of neuronal variability [5], evolutionary biology [8], mathematical ecology [4] and a variety of models for physiological processes [7].

Investigation of boundary value problems for singularly perturbed linear second-order differential-difference equations was initiated by Lange and Miura [5].

The singularly perturbed boundary value problem for a system of delay differential equations under consideration is

$$\displaystyle{ \mathbf{L}\mathbf{u}(x)\; =\; -E\;\mathbf{u}^{\,{\prime\prime}}(x) + A(x)\;\mathbf{u}(x) + B(x)\;\mathbf{u}(x - 1) =\mathbf{ f}(x)\;\mathrm{on}\;(0,2) }$$
(1)
$$\displaystyle{ \text{with}\;\;\mathbf{u} =\boldsymbol{\phi }\;\mathrm{ on}\;[-1,0]\;\mathrm{and}\;\mathbf{u}(2) =\mathbf{ l}, }$$
(2)

where \(\boldsymbol{\phi }(x) = (\phi _{1}(x),\,\phi _{2}(x))^{T}\) is sufficiently smooth on [−1, 0]. For all \(x \in [0,2]\), \(\mathbf{u}(x) = (u_{1}(x),\,u_{2}(x))^{T}\) and \(\mathbf{f}(x) = (f_{1}(x),\,f_{2}(x))^{T}\). Here \(E,\;A(x)\;\mathrm{and}\;B(x)\) are 2 × 2 matrices, with \(E =\mathrm{ diag}(\boldsymbol{\varepsilon }\,),\;\boldsymbol{\varepsilon }= (\varepsilon _{1},\varepsilon _{2}),\;0 <\varepsilon _{1} <\varepsilon _{2}\ll 1,\) and \(B(x) =\mathrm{ diag}(\mathbf{b}(x)),\,\mathbf{b}(x) = (b_{1}(x),b_{2}(x)).\) For all \(x \in [0,2]\), it is also assumed that the entries \(a_{ij}(x)\) of \(A(x)\) and the components \(b_{i}(x)\) of \(B(x)\) satisfy

$$\displaystyle{ b_{i}(x),\;a_{ij}(x) \leq 0\;\;\mathrm{for}\;\;1 \leq i\neq j \leq 2,\quad a_{ii}(x) >\sum _{j\neq i}\vert a_{ij}(x) + b_{i}(x)\vert }$$
(3)
$$\displaystyle{ \text{and}\qquad 0 <\alpha <\min _{\substack{x\in [0,2] \\ i=1,2}}\Big(\sum _{j=1}^{2}a_{ij}(x) + b_{i}(x)\Big),\;\mathrm{for\;some}\;\alpha. }$$
(4)

Further, the functions \(f_{i}(x),\;a_{ij}(x)\) and \(b_{i}(x),\;1 \leq i,j \leq 2,\) are assumed to be in \(C^{2}([0,2])\). The above assumptions ensure that \(\mathbf{u} \in \mathcal{C} = C^{\,0}([0,2]) \cap C^{\,1}((0,2)) \cap C^{\,2}((0,1) \cup (1,2)).\)
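These assumptions are easy to verify numerically for concrete data. The following sketch (Python; the coefficient values are hypothetical, chosen only to satisfy (3) and (4) at a single point) checks both conditions:

```python
import numpy as np

def check_conditions(A, b, alpha):
    """Check (3) and (4) at one point x for a 2x2 system.

    A is the 2x2 matrix A(x); b holds the diagonal entries of B(x).
    The data below are hypothetical, used only for illustration.
    """
    n = 2
    # (3): off-diagonal a_ij and all b_i are non-positive, and each
    # diagonal entry dominates the corresponding off-diagonal sum.
    signs_ok = (all(A[i, j] <= 0 for i in range(n) for j in range(n) if i != j)
                and all(b[i] <= 0 for i in range(n)))
    dom_ok = all(A[i, i] > sum(abs(A[i, j] + b[i]) for j in range(n) if j != i)
                 for i in range(n))
    # (4): the row sums, including the delay coefficient, exceed alpha > 0.
    rowsum_ok = all(sum(A[i, :]) + b[i] > alpha for i in range(n)) and alpha > 0
    return signs_ok and dom_ok and rowsum_ok

A = np.array([[4.0, -1.0], [-1.0, 5.0]])   # hypothetical A(x)
b = np.array([-1.0, -2.0])                 # hypothetical diag of B(x)
print(check_conditions(A, b, alpha=0.5))   # True
```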

The problem ( 1)–( 2) can be rewritten as

$$\displaystyle{ \mathbf{L}_{1}\mathbf{u}(x) = -E\;\mathbf{u}^{\,{\prime\prime}}(x) + A(x)\;\mathbf{u}(x) =\mathbf{ f}(x) - B(x)\;\boldsymbol{\phi }(x - 1) =\mathbf{ g}(x)\;\mathrm{on}\;(0,1), }$$
(5)
$$\displaystyle{ \mathbf{L}_{2}\mathbf{u}(x) = -E\;\mathbf{u}^{\,{\prime\prime}}(x) + A(x)\;\mathbf{u}(x) + B(x)\;\mathbf{u}(x - 1) =\mathbf{ f}(x)\;\;\mathrm{on}\;\;(1,2), }$$
(6)
$$\displaystyle{ \mathbf{u}(0) =\boldsymbol{\phi } (0),\;\;\mathbf{u}(2) =\mathbf{ l},\mathbf{u}(1-) =\mathbf{ u}(1+)\;\mathrm{and}\;\mathbf{u}^{\,{\prime}}(1-) =\mathbf{ u}^{\,{\prime}}(1+). }$$
(7)

The reduced problem corresponding to ( 5), ( 6) and ( 7) is defined by

$$\displaystyle{ A(x)\;\mathbf{u}_{0}(x) =\mathbf{ g}(x)\;\mathrm{on}\;(0,1), }$$
(8)
$$\displaystyle{ A(x)\;\mathbf{u}_{0}(x) + B(x)\;\mathbf{u}_{0}(x - 1) =\mathbf{ f}(x)\;\mathrm{on}\;(1,2). }$$
(9)

For any vector-valued function \(\mathbf{y}\) on [0, 2] the following norms are introduced: \(\parallel \mathbf{ y}(x) \parallel =\max _{i}\vert y_{i}(x)\vert \) and \(\parallel \mathbf{ y} \parallel =\sup \{\parallel \mathbf{ y}(x) \parallel: x \in [0,2]\}.\) For any mesh function \(\mathbf{V }\) on \(\overline{\varOmega }^{N} =\{ x_{j}\}_{j=0}^{N}\) the following discrete maximum norms are introduced: \(\parallel \mathbf{ V }(x_{j}) \parallel =\max _{i}\vert V _{i}(x_{j})\vert \) and \(\parallel \mathbf{ V } \parallel =\max \{\parallel \mathbf{ V }(x_{j}) \parallel: x_{j} \in \overline{\varOmega }^{N}\}.\)
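For implementation purposes these discrete norms are straightforward; a small Python sketch, storing a mesh function as a 2 × (N + 1) array (a layout assumed here, not prescribed by the text):

```python
import numpy as np

# Discrete maximum norms for a mesh function V with components V_i(x_j),
# stored row-wise: V[i, j] = V_i(x_j).
def node_norm(V, j):
    """|| V(x_j) || = max_i |V_i(x_j)|"""
    return np.max(np.abs(V[:, j]))

def mesh_norm(V):
    """|| V || = max_j || V(x_j) ||"""
    return np.max(np.abs(V))

V = np.array([[1.0, -3.0, 2.0],
              [0.5,  1.0, -4.0]])      # toy mesh function on three nodes
print(node_norm(V, 1), mesh_norm(V))   # 3.0 4.0
```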

For any function \(\boldsymbol{\psi }\) the jump at x is \([\boldsymbol{\psi }](x) =\boldsymbol{\psi } (x+) -\boldsymbol{\psi } (x-).\)

Throughout the paper C denotes a generic positive constant, which is independent of x and of all singular perturbation and discretization parameters. Furthermore, inequalities between vectors are understood in the componentwise sense.

2 Analytical Results

This section presents some analytical results related to the problem (5), (6) and (7), which include a maximum principle, a stability result and estimates of the derivatives.

Lemma 1

Let conditions ( 3 ) and ( 4 ) hold. Let \(\boldsymbol{\psi }= (\psi _{1},\,\psi _{2})^{T}\) be any function in \(\mathcal{C}\) such that \(\,\boldsymbol{\psi }(0) \geq {\boldsymbol 0},\;\boldsymbol{\psi }(2) \geq {\boldsymbol 0},\,\ \mathbf{L}_{1}\boldsymbol{\psi } \geq {\boldsymbol 0}\) on \(\;(0,1),\,\ \mathbf{L}_{2}\boldsymbol{\psi } \geq {\boldsymbol 0}\) on  (1,2), \([\boldsymbol{\psi }](1) = {\boldsymbol 0}\) and \([\boldsymbol{\psi }^{\,{\prime}}](1) \leq {\boldsymbol 0}.\) Then \(\boldsymbol{\psi }\geq {\boldsymbol 0}\;\) on [0,2].

Proof

Let \(i^{{\ast}}\), \(x^{{\ast}}\) be such that \(\psi _{i^{{\ast}}}(x^{{\ast}}) =\min _{i=1,2,\,x\in [0,2]}\psi _{i}(x)\). If \(\psi _{i^{{\ast}}}(x^{{\ast}}) \geq 0,\) there is nothing to prove. Suppose therefore that \(\psi _{i^{{\ast}}}(x^{{\ast}}) < 0.\) Then \(x^{{\ast}}\notin \{0,2\}\) and \(\psi _{i^{{\ast}}}^{\,{\prime\prime}}(x^{{\ast}}) \geq 0.\) If \(x^{{\ast}} \in (0,1)\) then \((\mathbf{L}_{1}\boldsymbol{\psi }\,)_{i^{{\ast}}}(x^{{\ast}}) < 0,\) which is a contradiction, and if \(x^{{\ast}} \in (1,2)\) then \((\mathbf{L}_{2}\boldsymbol{\psi }\,)_{i^{{\ast}}}(x^{{\ast}}) < 0,\) which is also a contradiction.

Because of the boundary values, the only other possibility is that \(x^{{\ast}} = 1\). In this case, the argument depends on whether or not \(\psi _{i^{{\ast}}}\) is differentiable at x = 1. If \(\psi _{i^{{\ast}}}^{\,{\prime}}(1)\) does not exist then \([\psi _{i^{{\ast}}}^{\,{\prime}}](1)\neq 0\) and, since \(\psi _{i^{{\ast}}}^{\,{\prime}}(1-) \leq 0\) and \(\psi _{ i^{{\ast}}}^{\,{\prime}}(1+) \geq 0,\) it is clear that \([\psi _{i^{{\ast}}}^{\,{\prime}}](1) > 0,\) which is a contradiction. On the other hand, let \(\psi _{i^{{\ast}}}\) be differentiable at x = 1. As \(\sum _{j=1}^{2}a_{ i^{{\ast}}j}(1)\psi _{j}(1) < 0\) and all the entries of A(x) and the \(\psi _{j}(x)\) are in C([0, 2]), there exists an interval [1 − h, 1) on which \(\sum _{j=1}^{2}a_{ i^{{\ast}}j}(x)\psi _{j}(x) < 0.\) If \(\psi _{i^{{\ast}}}^{\,{\prime\prime}}(\hat{x}) \geq 0\) at any point \(\hat{x} \in [1 - h,1),\) then \((\mathbf{L}_{1}\boldsymbol{\psi }\,)_{i^{{\ast}}}(\hat{x}) < 0,\) which is a contradiction. Thus we can assume that \(\psi _{i^{{\ast}}}^{\,{\prime\prime}}(x) < 0\) on [1 − h, 1), which implies that \(\psi _{i^{{\ast}}}^{\,{\prime}}(x)\) is strictly decreasing on [1 − h, 1). Since \(\psi _{i^{{\ast}}}^{\,{\prime}}(1) = 0\) and \(\psi _{i^{{\ast}}}^{\,{\prime}}\in C((0,2)),\) it follows that \(\psi _{i^{{\ast}}}^{\,{\prime}}(x) > 0\) on [1 − h, 1). Consequently the continuous function \(\psi _{i^{{\ast}}}(x)\) cannot attain a minimum at x = 1, which contradicts the assumption that \(x^{{\ast}} = 1\).  □ 

As a consequence of the maximum principle, the stability result for the problem ( 1)–( 2) is established in the following

Lemma 2

Let conditions ( 3 ) and ( 4 ) hold. Let \(\boldsymbol{\psi }\) be any function in \(\mathcal{C},\) such that \([\boldsymbol{\psi }](1) = {\boldsymbol 0}\)   and   \([\boldsymbol{\psi }^{\,{\prime}}](1) = {\boldsymbol 0},\) then for each i = 1,2 and x ∈ [0,2],

$$\displaystyle{\vert \psi _{i}(x)\vert \leq \max \left \{\parallel \boldsymbol{\psi } (0) \parallel,\parallel \boldsymbol{\psi } (2) \parallel, \tfrac{1} {\alpha } \parallel \mathbf{ L}_{1}\boldsymbol{\psi } \parallel, \tfrac{1} {\alpha } \parallel \mathbf{ L}_{2}\boldsymbol{\psi } \parallel \right \}.}$$

Proof

Let \(M =\max \{\parallel \boldsymbol{\psi } (0) \parallel,\parallel \boldsymbol{\psi } (2) \parallel, \frac{1} {\alpha } \parallel \mathbf{ L}_{1}\boldsymbol{\psi } \parallel, \frac{1} {\alpha } \parallel \mathbf{ L}_{2}\boldsymbol{\psi } \parallel \}.\) Define two functions \(\boldsymbol{\theta }^{\pm }(x) = M\mathbf{e} \pm \boldsymbol{\psi } (x)\;\mathrm{where}\;\mathbf{e} = (1,1)^{T}.\) Using the properties of A(x) and B(x) it is not hard to verify that \(\boldsymbol{\theta }^{\pm }(0) \geq {\boldsymbol 0},\,\boldsymbol{\theta }^{\pm }(2) \geq {\boldsymbol 0},\mathbf{L}_{1}\boldsymbol{\theta }^{\pm }(x) \geq {\boldsymbol 0}\) on (0, 1) and \(\mathbf{L}_{2}\boldsymbol{\theta }^{\pm }(x) \geq {\boldsymbol 0}\) on (1, 2). Moreover \([\boldsymbol{\theta }^{\pm }](1) = \pm [\boldsymbol{\psi }](1) = {\boldsymbol 0}\) and \([\boldsymbol{\theta }^{\pm }{}^{{\prime}}](1) = \pm [\boldsymbol{\psi }^{{\prime}}](1) = {\boldsymbol 0}.\) It follows from Lemma 1 that \(\boldsymbol{\theta }^{\pm }(x)\; \geq \;{\boldsymbol 0}\;\) on [0, 2].  □ 

Standard estimates of the solution of ( 1)–( 2) and its derivatives are contained in the following

Lemma 3

Let conditions ( 3 ) and ( 4 ) hold. Let \(\mathbf{u}\) be the solution of ( 1 )–( 2 ). Then for all x ∈ [0,2] and i = 1,2,

$$\displaystyle\begin{array}{rcl} & \vert u_{i}^{(k)}(x)\vert \leq C\,\varepsilon _{i}^{-\frac{k} {2} }\,\left (\vert \vert \mathbf{u}(0)\vert \vert + \vert \vert \mathbf{u}(2)\vert \vert + \vert \vert \mathbf{f}\vert \vert \right ),\;\mathrm{for}\;k = 0,1\ \text{and} & {}\\ & \vert u_{i}^{(k)}(x)\vert \leq C\,\varepsilon _{1}^{-\frac{(k-2)} {2} }\,\varepsilon _{i}^{-1}\,(\vert \vert \mathbf{u}(0)\vert \vert +\vert \vert \mathbf{u}(2)\vert \vert + \vert \vert \mathbf{f}\vert \vert +\varepsilon _{ 1}^{\frac{k-2} {2} }\,\vert \vert \mathbf{f}^{\,(k-2)}\vert \vert ),\;\mathrm{for}\;k = 2,3,4.& {}\\ \end{array}$$

Proof

The proof is by the method of steps. First, the bounds of \(\mathbf{u}\) and its derivatives are estimated in [0, 1]. Next, these bounds of \(\mathbf{u}\) and its derivatives are used to get the estimates in [1, 2]. Applying Lemma 3 of [9], the estimates of derivatives of \(\mathbf{u}\) on [0, 1] follow and using the procedure adopted in the proof of Lemma 3 of [9], it is not hard to derive estimates of derivatives of \(\mathbf{u}\) on [1, 2].  □ 

The Shishkin decomposition of the solution \(\mathbf{u}\) of ( 1)–( 2) is \(\mathbf{u} =\mathbf{ v} +\mathbf{ w}\,\) where the smooth component \(\mathbf{v}\) is the solution of

$$\displaystyle{ \mathbf{L}_{1}\mathbf{v} =\mathbf{ g}\;\;\mathrm{on}\;(0,1),\;\;\mathbf{v}(0) =\mathbf{ u}_{0}(0),\;\;\mathbf{v}(1-) = (A(1))^{-1}(\mathbf{f}(1) - B(1)\,\boldsymbol{\phi }(0)), }$$
(10)
$$\displaystyle{ \begin{array}{lcl} \mathbf{L}_{2}\mathbf{v} =\mathbf{ f}\;\;\mathrm{on}\;(1,2),\;\mathbf{v}(1+) = (A(1))^{-1}(\mathbf{f}(1) - B(1)\,\mathbf{u}_{0}(0)),\;\mathbf{v}(2) =\mathbf{ u}_{0}(2)\end{array} }$$
(11)

and the singular component \(\mathbf{w}\) is the solution of

$$\displaystyle{\mathbf{L}_{1}\,\mathbf{w} = {\boldsymbol 0}\;\mathrm{on}\;(0,1),\;\mathbf{L}_{2}\,\mathbf{w} = {\boldsymbol 0}\;\mathrm{on}\;(1,2)\;\;\mathrm{with}}$$
$$\displaystyle{ \mathbf{w}(0)=\mathbf{u}(0) -\mathbf{ v}(0),\;\mathbf{w}(2) =\mathbf{ u}(2) -\mathbf{ v}(2),\;[\mathbf{w}\,](1)= - [\mathbf{v}\,](1)\;\mathrm{and}\;[\mathbf{w}^{\,{\prime}}](1) = -[\mathbf{v}^{\,{\prime}}](1). }$$
(12)

The singular component is given a further decomposition

$$\displaystyle{ \mathbf{w}(x) = {\boldsymbol {\tilde{w}}}(x) + {\boldsymbol {\hat{w}}}(x) }$$
(13)

where \({\boldsymbol {\tilde{w}}}\) is the solution of

$$\displaystyle{-E{\boldsymbol {\tilde{w}}}^{\,{\prime\prime}}(x) + A(x){\boldsymbol {\tilde{w}}}(x) = {\boldsymbol 0}\;\text{on}\;(0,1),\;{\boldsymbol {\tilde{w}}}(0) =\mathbf{ w}(0),{\boldsymbol {\tilde{w}}}(1) =\mathbf{ K}_{ 1},{\boldsymbol {\tilde{w}}} = {\boldsymbol 0}\;\text{on}\;(1,2]}$$

and \({\boldsymbol {\hat{w}}}\) is the solution of

$$\displaystyle\begin{array}{rcl} & & -E{\boldsymbol {\hat{w}}}^{\,{\prime\prime}}(x) + A(x){\boldsymbol {\hat{w}}}(x) + B(x){\boldsymbol {\hat{w}}}(x - 1) = {\boldsymbol 0}\;\text{on}\;(1,2), {}\\ & & \phantom{-E{\boldsymbol {\hat{w}}}^{\,{\prime\prime}}(x)}{\boldsymbol {\hat{w}}}(1) =\mathbf{ K}_{ 2},\;{\boldsymbol {\hat{w}}}(2) =\mathbf{ w}(2),{\boldsymbol {\hat{w}}} = {\boldsymbol 0}\;\text{on}\;[0,1). {}\\ \end{array}$$

Here, \(\mathbf{K}_{1}\) and \(\mathbf{K}_{2}\) are vector constants to be chosen in such a way that the jump conditions at x = 1 are satisfied.

Bounds on the smooth component and its derivatives are contained in the following

Lemma 4

Let conditions ( 3 ) and ( 4 ) hold. Then for i = 1,2 and for all \(\;x \in [0,2],\,\ \vert v_{i}^{(k)}(x)\vert \leq C,\;\text{for}\;k = 0,1,2\) and \(\vert v_{i}^{(k)}(x)\vert \leq C(1 +\varepsilon _{ i}^{1-\frac{k} {2} }),\;\;\text{for}\;\;k = 3,4.\)

Proof

The proof is by the method of steps. Applying Lemma 4 of [9], the estimates of derivatives of \(\mathbf{v}\) on [0, 1−] follow. Now consider [1+, 2]. On this interval \(\mathbf{v}\) satisfies \(\mathbf{L}_{2}\mathbf{v}(x) =\mathbf{ f}(x)\) or, equivalently, \(\mathbf{L}_{1}\mathbf{v}(x) =\mathbf{ f}(x) - B(x)\,\mathbf{v}(x - 1).\) Using the bounds of \(\mathbf{v}\) and its derivatives on [0, 1−] and the procedure adopted in the proof of Lemma 4 of [9] for the operator \(\mathbf{L}_{1},\) it is not hard to derive the estimates of derivatives of \(\mathbf{v}\) on [1+, 2].  □ 

The layer functions \(B_{1,i}^{l},B_{1,i}^{r},B_{2,i}^{l},B_{2,i}^{r},B_{1,i},B_{2,i},\;i = 1,2,\) associated with the solution \(\mathbf{u}\) of ( 1)–( 2), are defined by

$$\displaystyle{\begin{array}{l} B_{1,i}^{l}(x) = e^{-x\sqrt{\alpha }/\sqrt{\varepsilon _{i}} },\;B_{1,i}^{r}(x) = e^{-(1-x)\sqrt{\alpha }/\sqrt{\varepsilon _{i}} },\;B_{1,i}(x) = B_{1,i}^{l}(x) + B_{1,i}^{r}(x),\;\mathrm{on}\;\;[0,1], \\ B_{2,i}^{l}(x) = e^{-(x-1)\sqrt{\alpha }/\sqrt{\varepsilon _{i}} },\;B_{2,i}^{r}(x) = e^{-(2-x)\sqrt{\alpha }/\sqrt{\varepsilon _{i}} },\;B_{2,i}(x) = B_{2,i}^{l}(x) + B_{2,i}^{r}(x), \\ \mathrm{on}\;\;[1,2]. \end{array} }$$
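The layer functions are simple to evaluate; the sketch below (with assumed sample values of \(\alpha,\varepsilon_1,\varepsilon_2\)) shows their characteristic behaviour on [0, 1]: each function is close to 1 at the end of the interval where its layer sits and negligibly small in the interior.

```python
import numpy as np

alpha = 1.0
eps = [1e-4, 1e-2]        # assumed sample values with eps_1 < eps_2 << 1

def B1l(x, i): return np.exp(-x * np.sqrt(alpha / eps[i]))
def B1r(x, i): return np.exp(-(1.0 - x) * np.sqrt(alpha / eps[i]))
def B1(x, i):  return B1l(x, i) + B1r(x, i)   # B_{1,i} = B^l_{1,i} + B^r_{1,i}

# B^l decays away from x = 0 and B^r away from x = 1, so B_{1,i}
# captures both layers of [0, 1]; the slower decay belongs to eps_2.
print(B1(0.0, 0), B1(0.5, 0))   # ~1 at the boundary, ~0 at the centre
```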

Definition 1

For \(B_{1,1}^{l},B_{1,2}^{l},\) let \(x^{{\ast}}\) be the point defined by \(\dfrac{B_{1,1}^{l}(x^{{\ast}})} {\varepsilon _{1}} = \dfrac{B_{1,2}^{l}(x^{{\ast}})} {\varepsilon _{2}}.\)

$$\displaystyle{\text{Then}\quad \frac{B_{1,1}^{r}(1 - x^{{\ast}})} {\varepsilon _{1}} = \frac{B_{1,2}^{r}(1 - x^{{\ast}})} {\varepsilon _{2}},\quad \frac{B_{2,1}^{l}(1 + x^{{\ast}})} {\varepsilon _{1}} = \frac{B_{2,2}^{l}(1 + x^{{\ast}})} {\varepsilon _{2}} }$$
$$\displaystyle{\text{and}\quad \quad \frac{B_{2,1}^{r}(2 - x^{{\ast}})} {\varepsilon _{1}} = \frac{B_{2,2}^{r}(2 - x^{{\ast}})} {\varepsilon _{2}}.}$$

The existence, uniqueness and the properties of \(x^{{\ast}}\) can be verified as in [9, 10].
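Taking logarithms in the defining relation of Definition 1 yields a closed form for this point; the sketch below verifies the relation for assumed sample values of \(\alpha,\varepsilon_1,\varepsilon_2\) (the closed form is an elementary consequence of the definition, not a result quoted from [9, 10]):

```python
import math

# From exp(-x*sqrt(alpha/eps1))/eps1 = exp(-x*sqrt(alpha/eps2))/eps2:
#   x* = ln(eps2/eps1) / ( sqrt(alpha) * (1/sqrt(eps1) - 1/sqrt(eps2)) ).
alpha, eps1, eps2 = 1.0, 1e-4, 1e-2    # assumed sample values

x_star = math.log(eps2 / eps1) / (
    math.sqrt(alpha) * (1.0 / math.sqrt(eps1) - 1.0 / math.sqrt(eps2)))

lhs = math.exp(-x_star * math.sqrt(alpha / eps1)) / eps1
rhs = math.exp(-x_star * math.sqrt(alpha / eps2)) / eps2
print(x_star)                     # a point in (0, 1)
print(abs(lhs - rhs) / rhs)       # ~0: the defining relation holds
```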

Bounds on the singular component \(\mathbf{w}\) of \(\mathbf{u}\) and its derivatives are contained in the following

Lemma 5

Let conditions ( 3 ) and ( 4 ) hold. Then there exists a constant C such that for i = 1,2 and for x ∈ [0,1],

$$\displaystyle\begin{array}{rcl} \vert w_{i}(x)\vert \leq C\,B_{1,2}(x),\;\;\;\vert w_{i}^{\,{\prime}}(x)\vert \leq C\sum _{ q=i}^{2}\frac{B_{1,q}(x)} {\sqrt{\varepsilon _{q}}},\;\;\;\vert w_{i}^{\,{\prime\prime}}(x)\vert \leq C\sum _{ q=i}^{2}\frac{B_{1,q}(x)} {\varepsilon _{q}},& & {}\\ \vert w_{i}^{(3)}(x)\vert \leq C\sum _{ q=1}^{2}\frac{B_{1,q}(x)} {\varepsilon _{q}^{\frac{3} {2} }},\;\vert \varepsilon _{i}w_{i}^{(4)}(x)\vert \leq C\sum _{ q=1}^{2}\frac{B_{1,q}(x)} {\varepsilon _{q}} & & {}\\ \text{and for}\;x \in [1,2],\;\;\;\vert w_{i}(x)\vert \leq C\,B_{2,2}(x),\;\;\;\vert w_{i}^{\,{\prime}}(x)\vert \leq C\sum _{ q=i}^{2}\frac{B_{2,q}(x)} {\sqrt{\varepsilon _{q}}},& & {}\\ \vert w_{i}^{\,{\prime\prime}}(x)\vert \leq C\sum _{ q=i}^{2}\frac{B_{2,q}(x)} {\varepsilon _{q}},\;\vert w_{i}^{(3)}(x)\vert \leq C\sum _{ q=1}^{2}\frac{B_{2,q}(x)} {\varepsilon _{q}^{\frac{3} {2} }},\;\vert \varepsilon _{i}w_{i}^{(4)}(x)\vert \leq C\sum _{ q=1}^{2}\frac{B_{2,q}(x)} {\varepsilon _{q}}.& & {}\\ \end{array}$$

Proof

The proof is by the method of steps. First, the bounds of \(\mathbf{w}\) and its derivatives are estimated in [0, 1]. Next, these bounds of \(\mathbf{w}\) and its derivatives are used to get the estimates in [1, 2]. 

We now derive the bound on \(\mathbf{w}\) on [0, 1]. Applying Lemma 1 of [9] for the operator \(\mathbf{L}_{1}\) with the barrier functions \(\boldsymbol{\theta }^{\pm }(x) = C\,B_{1,2}(x)\mathbf{e} \pm \mathbf{ w}(x),\) where \(\mathbf{e} = (1,1)^{T},\) the estimates of \(\mathbf{w}\) on [0, 1] follow. By the mean value theorem it is then easy to find that \(\vert w_{1}^{\,{\prime}}(x)\vert \leq C\,\varepsilon _{1}^{-1/2}\,B_{1,2}(x)\) and \(\vert w_{2}^{\,{\prime}}(x)\vert \leq C\,\varepsilon _{2}^{-1/2}\,B_{1,2}(x).\) In particular, \(\vert w_{i}^{\,{\prime}}(0)\vert \leq C\,\varepsilon _{i}^{-1/2}\) and \(\vert w_{i}^{\,{\prime}}(1)\vert \leq C\,\varepsilon _{i}^{-1/2},\;i = 1,2.\)

Applying Lemma 1 of [9] for the operator \(\mathbf{L}_{1}\) with the barrier functions \(\boldsymbol{\theta }^{\pm }(x) = \left [\begin{array}{*{10}c} C_{1}\left (\varepsilon _{1}^{\frac{-1} {2} }B_{1,1}(x) +\varepsilon _{ 2}^{\frac{-1} {2} }B_{1,2}(x)\right ) \\ C_{2}\varepsilon _{2}^{\frac{-1} {2} }B_{1,2}(x) \end{array} \right ]\pm \mathbf{w}^{\,{\prime}}(x),\) the estimates of \(\mathbf{w}^{\,{\prime}}\) on [0, 1] follow. The bounds on \(\mathbf{w}^{\,{\prime\prime}},\,\mathbf{w}^{\,(3)}\) and \(\mathbf{w}^{\,(4)}\) are derived by similar arguments. Using these same techniques together with the bounds of \(\mathbf{w}\) and its derivatives on [0, 1], the bounds on \(\mathbf{w}\) and its derivatives on [1, 2] are derived.  □ 

3 Improved Estimates

In the following lemma sharper estimates of the smooth component are presented.

Lemma 6

Let conditions ( 3 ) and ( 4 ) hold. Then the smooth component \(\mathbf{v}\) of the solution \(\mathbf{u}\) of ( 1 )–( 2 ) satisfies, for \(i = 1,2,\;k = 0,1,2,3\,\) and for x ∈ [0,1],

$$\displaystyle{ \vert v_{i}^{(k)}(x)\vert \leq C\,\left (1 +\sum _{ q=i}^{2}\frac{B_{1,q}(x)} {\varepsilon _{q}^{\frac{k} {2} -1}} \right )\text{and for}\;x \in [1,2],\;\vert v_{i}^{(k)}(x)\vert \leq C\,\left (1 +\sum _{ q=i}^{2}\frac{B_{2,q}(x)} {\varepsilon _{q}^{\frac{k} {2} -1}} \right ). }$$

Proof

Here also the proof is by the method of steps. Applying Lemma 6 of [9], the estimates of the derivatives of \(\mathbf{v}\) on [0, 1] follow. Next for x ∈ [1, 2], the bounds on the derivatives of \(\mathbf{v}\) are derived using the procedure adopted in the proof of Lemma 6 of [9] and the bounds of the derivatives of \(\mathbf{v}\) in the interval [0, 1].  □ 

4 The Shishkin Mesh

A piecewise-uniform Shishkin mesh with N mesh-intervals is now constructed on [0, 2] as follows. Let \(\varOmega ^{N} = \varOmega _{1}^{N} \cup \varOmega _{2}^{N}\) where \(\varOmega _{1}^{N} =\{ x_{j}\}_{j=1}^{\frac{N} {2} -1},\;\varOmega _{2}^{N} =\{ x_{j}\}_{j=\frac{N} {2} +1}^{N-1}\;\) and \(x_{\frac{N} {2} } = 1.\) Then \(\overline{\varOmega _{1}}^{N} =\{ x_{j}\}_{j=0}^{\frac{N} {2} },\;\overline{\varOmega _{2}}^{N} =\{ x_{j}\}_{j=\frac{N} {2} }^{N},\;\overline{\varOmega _{1}}^{N} \cup \overline{\varOmega _{2}}^{N} = \overline{\varOmega }^{N} =\{ x_{j}\}_{j=0}^{N}\;\) and \(\varGamma ^{N} =\{ 0,2\}.\) As the solution exhibits overlapping boundary layers at x = 0 and x = 2 and overlapping interior layers at x = 1, the mesh is constructed to resolve these layers. The interval [0, 1] is subdivided into five sub-intervals \([0,\tau _{1}] \cup (\tau _{1},\tau _{2}] \cup (\tau _{2},1 -\tau _{2}] \cup (1 -\tau _{2},1 -\tau _{1}] \cup (1 -\tau _{1},1].\) The parameters \(\tau _{r},\;r = 1,2,\) which determine the points separating the uniform meshes, are defined by \(\tau _{2} =\min \left\{ \frac{1} {4}, \frac{2\sqrt{\varepsilon _{2}}} {\sqrt{\alpha }} \ln N\right\}\;\;\text{and}\;\;\tau _{1} =\min \left\{ \frac{\tau _{2}} {2}, \frac{2\sqrt{\varepsilon _{1}}} {\sqrt{\alpha }} \ln N\right\}.\)

On the sub-interval \((\tau _{2},1 -\tau _{2}]\) a uniform mesh with \(\frac{N} {4}\) mesh points is placed, and on each of the sub-intervals \([0,\tau _{1}],\;(\tau _{1},\tau _{2}],\;(1 -\tau _{2},1 -\tau _{1}]\;\text{and}\;(1 -\tau _{1},1],\) a uniform mesh of \(\frac{N} {16}\) mesh points is placed. Similarly, the interval (1, 2] is divided into five sub-intervals \((1,1 +\tau _{1}],\;(1 +\tau _{1},1 +\tau _{2}],\;(1 +\tau _{2},2 -\tau _{2}],\;(2 -\tau _{2},2 -\tau _{1}]\;\text{and}\;(2 -\tau _{1},2],\) using the same parameters \(\tau _{1}\) and \(\tau _{2}\). In particular, when both \(\tau _{1}\) and \(\tau _{2}\) take on their left-hand value, the Shishkin mesh \(\overline{\varOmega }^{N}\) becomes a classical uniform mesh on [0, 2]. In practice, it is convenient to take N = 16k, k ≥ 2. From the above construction of \(\overline{\varOmega }^{N},\) it is clear that the transition points \(\{\tau _{r},1 -\tau _{r},1 +\tau _{r},2 -\tau _{r}\},\;r = 1,2,\) are the only points at which the mesh-size can change, and that it does not necessarily change at each of them. The following notation is introduced: \(h_{j} = x_{j} - x_{j-1},\;h_{j+1} = x_{j+1} - x_{j},\) and if \(x_{j} =\tau _{r}\) then \(h_{j}^{-} = x_{j} - x_{j-1},\;h_{j}^{+} = x_{j+1} - x_{j},\;J =\{ x_{j}: h_{j}^{+}\neq h_{j}^{-}\}.\)
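The construction above translates directly into code. The following sketch (Python; the values of the perturbation parameters are assumed samples) builds the N + 1 mesh points; by the counts above, each half of [0, 2] carries N∕2 mesh intervals.

```python
import numpy as np

def shishkin_mesh(N, eps1, eps2, alpha):
    """Piecewise-uniform Shishkin mesh on [0, 2] as described above.

    N must be divisible by 16; returns the N + 1 mesh points x_0, ..., x_N.
    """
    assert N % 16 == 0
    tau2 = min(0.25, 2.0 * np.sqrt(eps2 / alpha) * np.log(N))
    tau1 = min(tau2 / 2.0, 2.0 * np.sqrt(eps1 / alpha) * np.log(N))
    def half(a):
        # mesh on [a, a + 1]: N/16 intervals on each outer piece,
        # N/4 on the central piece; the endpoint of each piece is
        # dropped so the pieces concatenate without duplication.
        pieces = [np.linspace(a,            a + tau1,     N // 16 + 1),
                  np.linspace(a + tau1,     a + tau2,     N // 16 + 1),
                  np.linspace(a + tau2,     a + 1 - tau2, N // 4 + 1),
                  np.linspace(a + 1 - tau2, a + 1 - tau1, N // 16 + 1),
                  np.linspace(a + 1 - tau1, a + 1,        N // 16 + 1)]
        return np.concatenate([p[:-1] for p in pieces])
    return np.concatenate([half(0.0), half(1.0), [2.0]])

# Assumed sample values of the perturbation parameters.
mesh = shishkin_mesh(64, 1e-6, 1e-4, 1.0)
print(len(mesh), mesh[0], mesh[len(mesh) // 2], mesh[-1])   # 65 0.0 1.0 2.0
```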

5 The Discrete Problem

In this section, a classical finite difference operator on an appropriate Shishkin mesh is used to construct a numerical method for ( 1)–( 2), which is shown later to be essentially first-order parameter-uniform convergent.

The discrete two-point boundary value problem is now defined to be

$$\displaystyle{ \begin{array}{c} \mathbf{L}^{N}\mathbf{U}(x_{j}) = -E\,\delta ^{2}\mathbf{U}(x_{j}) + A(x_{j})\,\mathbf{U}(x_{j}) + B(x_{j})\,\mathbf{U}(x_{j} - 1) =\mathbf{ f}(x_{j})\;\;\text{on}\;\varOmega ^{N}, \\ \mathbf{U}(x_{j} - 1) =\boldsymbol{\phi } (x_{j} - 1)\;\text{for}\;x_{j} \in \varOmega _{1}^{N}\;\text{and}\;\mathbf{U} =\mathbf{ u}\;\;\text{on}\;\varGamma ^{N}.\end{array} }$$
(14)

The problem (14) can be rewritten as,

$$\displaystyle{ \begin{array}{c} \mathbf{L}_{1}^{N}\mathbf{U}(x_{j}) = -E\,\delta ^{2}\mathbf{U}(x_{j}) + A(x_{j})\,\mathbf{U}(x_{j}) =\mathbf{ g}(x_{j})\;\;\text{on}\;\varOmega _{1}^{N}, \\ \mathbf{L}_{2}^{N}\mathbf{U}(x_{j}) = -E\,\delta ^{2}\mathbf{U}(x_{j}) + A(x_{j})\,\mathbf{U}(x_{j}) + B(x_{j})\,\mathbf{U}(x_{j} - 1) =\mathbf{ f}(x_{j})\;\;\text{on}\;\varOmega _{2}^{N}, \\ \mathbf{U} =\mathbf{ u}\;\;\text{on}\;\varGamma ^{N},\;\;D^{-}\mathbf{U}(x_{N/2}) = D^{+}\mathbf{U}(x_{N/2}),\end{array} }$$
(15)

where \(\delta ^{2}\mathbf{Y }(x_{j}) = \dfrac{2} {x_{j+1} - x_{j-1}}\left \{D^{+}\mathbf{Y }(x_{j}) - D^{-}\mathbf{Y }(x_{j})\right \},\) \(D^{+}\mathbf{Y }(x_{j}) = \dfrac{\mathbf{Y }(x_{j+1}) -\mathbf{ Y }(x_{j})} {x_{j+1} - x_{j}}\)  and  \(D^{-}\mathbf{Y }(x_{j}) = \dfrac{\mathbf{Y }(x_{j}) -\mathbf{ Y }(x_{j-1})} {x_{j} - x_{j-1}}.\)
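These operators are easily coded for a non-uniform mesh. A minimal sketch (assuming the mesh x and the mesh function Y are NumPy arrays); note that \(\delta^2\) is exact for quadratics on any mesh, which gives a convenient sanity check:

```python
import numpy as np

# D+, D- and delta^2 on a (possibly non-uniform) mesh, as defined above.
def Dplus(Y, x, j):
    return (Y[j + 1] - Y[j]) / (x[j + 1] - x[j])

def Dminus(Y, x, j):
    return (Y[j] - Y[j - 1]) / (x[j] - x[j - 1])

def delta2(Y, x, j):
    return 2.0 / (x[j + 1] - x[j - 1]) * (Dplus(Y, x, j) - Dminus(Y, x, j))

x = np.array([0.0, 0.1, 0.3])    # toy non-uniform mesh
Y = x ** 2
# delta^2 reproduces the second derivative of a quadratic exactly,
# even when the mesh spacing changes at x_j.
print(delta2(Y, x, 1))           # approximately 2
```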

This is used to compute numerical approximations to the solution of (1)– (2). The following discrete results are analogous to those for the continuous case.
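As an illustration of how (15) can be assembled and solved, the following sketch builds the full linear system for the two-component scheme. The mapping \(x_j - 1 = x_{j-N/2}\) used for the delayed argument holds for the symmetric meshes considered here, and all data in the demo (constant \(A\), \(B\), \(\mathbf{f}\), \(\boldsymbol{\phi}\)) are hypothetical, chosen so that the exact solution is \(\mathbf{u}\equiv(1,1)^T\).

```python
import numpy as np

def solve_discrete_problem(N, eps, A, B, f, phi, l, mesh):
    """Assemble and solve the discrete problem (15) for the 2-component system.

    A(x), B(x) return 2x2 arrays (B diagonal); f(x), phi(x) return length-2
    arrays; l is the boundary value at x = 2.  The delayed argument uses
    x_j - 1 = x_{j - N/2}, valid for the symmetric meshes considered here.
    Illustrative sketch only, not the authors' code.
    """
    x = mesh
    n = 2 * (N + 1)
    M, rhs = np.zeros((n, n)), np.zeros(n)
    idx = lambda i, j: 2 * j + i                  # unknown U_i(x_j)
    for i in range(2):                            # boundary conditions
        M[idx(i, 0), idx(i, 0)] = 1.0; rhs[idx(i, 0)] = phi(0.0)[i]
        M[idx(i, N), idx(i, N)] = 1.0; rhs[idx(i, N)] = l[i]
    for j in range(1, N):
        hm, hp = x[j] - x[j - 1], x[j + 1] - x[j]
        if j == N // 2:                           # transmission: D-U = D+U
            for i in range(2):
                r = idx(i, j)
                M[r, idx(i, j - 1)] = -1.0 / hm
                M[r, idx(i, j)] = 1.0 / hm + 1.0 / hp
                M[r, idx(i, j + 1)] = -1.0 / hp
            continue
        Aj, Bj, fj = A(x[j]), B(x[j]), f(x[j])
        for i in range(2):
            r = idx(i, j)
            # -eps_i * delta^2 U_i on the non-uniform mesh
            M[r, idx(i, j - 1)] -= eps[i] * 2.0 / (hm * (hm + hp))
            M[r, idx(i, j)] += eps[i] * 2.0 / (hm * hp)
            M[r, idx(i, j + 1)] -= eps[i] * 2.0 / (hp * (hm + hp))
            for k in range(2):                    # + A(x_j) U(x_j)
                M[r, idx(k, j)] += Aj[i, k]
            rhs[r] = fj[i]
            if j < N // 2:                        # delayed value known: phi
                rhs[r] -= Bj[i, i] * phi(x[j] - 1.0)[i]
            else:                                 # delayed value unknown
                M[r, idx(i, j - N // 2)] += Bj[i, i]
    return np.linalg.solve(M, rhs).reshape(N + 1, 2).T

# Demo with hypothetical constant data for which u = (1,1)^T exactly:
# A u + B u(x-1) = (2 - 0.5) * 1 = 1.5 = f, so the scheme must return 1.
N = 16
mesh = np.linspace(0.0, 2.0, N + 1)               # uniform mesh for the demo
U = solve_discrete_problem(
    N, [1e-2, 1e-1],
    lambda x: np.array([[2.0, 0.0], [0.0, 2.0]]),
    lambda x: np.array([[-0.5, 0.0], [0.0, -0.5]]),
    lambda x: np.array([1.5, 1.5]),
    lambda x: np.array([1.0, 1.0]),
    np.array([1.0, 1.0]), mesh)
print(np.max(np.abs(U - 1.0)))                    # close to zero
```

For genuine layer-resolving computations the uniform mesh in the demo would be replaced by the Shishkin mesh of Sect. 4, for which the symmetry \(x_j - 1 = x_{j-N/2}\) also holds by construction.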

Lemma 7

Let conditions ( 3 ) and ( 4 ) hold. Then, for any mesh function \(\mathbf{Y },\) the inequalities \(\mathbf{Y } \geq {\boldsymbol 0}\;\text{on}\;\varGamma ^{N},\mathbf{L}_{1}^{N}\mathbf{Y } \geq {\boldsymbol 0}\) on \(\varOmega _{1}^{N},\ \mathbf{L}_{2}^{N}\mathbf{Y } \geq {\boldsymbol 0}\) on \(\varOmega _{2}^{N}\) and \(D^{+}\mathbf{Y }(x_{N/2}) - D^{-}\mathbf{Y }(x_{N/2}) \leq {\boldsymbol 0}\) imply that \(\mathbf{Y } \geq {\boldsymbol 0}\) on \(\overline{\varOmega }^{N}.\)

Proof

Let \(i^{{\ast}}\), \(j^{{\ast}}\) be such that \(Y _{i^{{\ast}}}(x_{j^{{\ast}}}) =\min _{i,j}Y _{i}(x_{j})\) and assume that the lemma is false. Then \(Y _{i^{{\ast}}}(x_{j^{{\ast}}}) < 0\). From the hypotheses it is clear that \(j^{{\ast}}\neq 0,N\). Suppose \(x_{j^{{\ast}}} \in \varOmega _{1}^{N}.\) Then \(Y _{i^{{\ast}}}(x_{j^{{\ast}}}) - Y _{i^{{\ast}}}(x_{j^{{\ast}}-1}) \leq 0\) and \(Y _{i^{{\ast}}}(x_{j^{{\ast}}+1}) - Y _{i^{{\ast}}}(x_{j^{{\ast}}}) \geq 0,\) so \(\;\delta ^{2}Y _{i^{{\ast}}}(x_{j^{{\ast}}}) \geq 0.\) It follows that \((\mathbf{L}_{1}^{N}\mathbf{Y })_{i^{{\ast}}}(x_{j^{{\ast}}}) < 0,\) which is a contradiction. If \(x_{j^{{\ast}}} \in \varOmega _{2}^{N},\) a similar argument shows that \((\mathbf{L}_{2}^{N}\mathbf{Y })_{i^{{\ast}}}(x_{j^{{\ast}}}) < 0,\) which is a contradiction. Because of the boundary values, the only other possibility is that \(x_{j^{{\ast}}} = x_{N/2}.\) By the hypothesis, \(D^{-}Y _{i^{{\ast}}}(x_{N/2}) \leq 0 \leq D^{+}Y _{i^{{\ast}}}(x_{N/2}) \leq D^{-}Y _{i^{{\ast}}}(x_{N/2}),\) so all these quantities vanish and, in particular, \(Y _{i^{{\ast}}}(x_{N/2-1}) = Y _{i^{{\ast}}}(x_{N/2}) < 0.\) Thus the minimum is also attained at \(x_{\frac{N} {2} -1} \in \varOmega _{1}^{N},\) and the first part of the argument gives \((\mathbf{L}_{1}^{N}\mathbf{Y })_{i^{{\ast}}}(x_{\frac{N} {2} -1}) < 0,\) a contradiction. □ 

An immediate consequence of this is the following discrete stability result.

Lemma 8

Let conditions ( 3 ) and ( 4 ) hold. Then, for any mesh function \(\mathbf{Y }\) satisfying \(D^{+}\mathbf{Y }(x_{N/2}) = D^{-}\mathbf{Y }(x_{N/2}),\ \vert Y _{i}(x_{j})\vert \leq \max \{\vert \vert \mathbf{Y }(x_{0})\vert \vert,\,\vert \vert \mathbf{Y }(x_{N})\vert \vert,\, \tfrac{1} {\alpha } \parallel \mathbf{ L}_{1}^{N}\mathbf{Y } \parallel _{\varOmega _{ 1}^{N}},\) \(\tfrac{1} {\alpha } \parallel \mathbf{ L}_{2}^{N}\mathbf{Y } \parallel _{\varOmega _{ 2}^{N}}\},\;\text{for each}\;i = 1,2\;\text{and}\;\;0 \leq j \leq N.\)

Proof

Let \(M =\max \{ \vert \vert \mathbf{Y }(x_{0})\vert \vert,\;\vert \vert \mathbf{Y }(x_{N})\vert \vert,\; \tfrac{1} {\alpha } \parallel \mathbf{ L}_{1}^{N}\mathbf{Y } \parallel _{\varOmega _{ 1}^{N}},\; \tfrac{1} {\alpha } \parallel \mathbf{ L}_{2}^{N}\mathbf{Y } \parallel _{\varOmega _{ 2}^{N}}\}.\) Define two functions \(\mathbf{Z}^{\pm }(x_{j}) = M\mathbf{e}\, \pm \,\mathbf{ Y }(x_{j})\;\mathrm{where}\;\mathbf{e} = (1,1)^{T}.\) Using the properties of A(x j ) and B(x j ), it is not hard to find that \(\mathbf{Z}^{\pm }(x_{j}) \geq {\boldsymbol 0}\) for \(j = 0,N,\ \mathbf{L}_{1}^{N}\mathbf{Z}^{\pm }(x_{j}) \geq {\boldsymbol 0}\) for \(x_{j} \in \varOmega _{1}^{N}\) and \(\mathbf{L}_{2}^{N}\mathbf{Z}^{\pm }(x_{j}) \geq {\boldsymbol 0}\) for \(x_{j} \in \varOmega _{2}^{N}.\) At \(\,j\, =\, \tfrac{N} {2},\,D^{+}\mathbf{Z}^{\pm }(x_{ N/2})-D^{-}\mathbf{Z}^{\pm }(x_{ N/2}) = \pm (D^{+}\mathbf{Y }(x_{ N/2})-D^{-}\mathbf{Y }(x_{ N/2})) = {\boldsymbol 0}.\) Hence by Lemma 7, \(\mathbf{Z}^{\pm }\geq {\boldsymbol 0}\) on \(\overline{\varOmega }^{N}.\) □ 

6 Error Estimate

Analogous to the continuous case, the discrete solution \(\;\mathbf{U}\;\) can be decomposed into \(\;\mathbf{V }\;\) and \(\;\mathbf{W}\;\) which are defined to be the solutions of the following discrete problems

$$\displaystyle{ \begin{array}{lcl} \mathbf{L}_{1}^{N}\mathbf{V }(x_{j})& =&\mathbf{g}(x_{j}),\;x_{j} \in \varOmega _{1}^{N},\;\mathbf{V }(0) =\mathbf{ v}(0),\;\mathbf{V }(x_{N/2-1}) =\mathbf{ v}(1-), \\ \mathbf{L}_{2}^{N}\mathbf{V }(x_{j})& =&\mathbf{f}(x_{j}),\;x_{j} \in \varOmega _{2}^{N},\;\mathbf{V }(x_{N/2+1}) =\mathbf{ v}(1+),\;\mathbf{V }(2) =\mathbf{ v}(2)\end{array} }$$

and

$$\displaystyle{ \begin{array}{c} \mathbf{L}_{1}^{N}\mathbf{W}(x_{j}) = {\boldsymbol 0},\;x_{j} \in \varOmega _{1}^{N},\;\mathbf{W}(0) =\mathbf{ w}(0),\;\;\mathbf{L}_{2}^{N}\mathbf{W}(x_{j})={\boldsymbol 0},\;x_{j} \in \varOmega _{2}^{N},\;\mathbf{W}(2)=\mathbf{w}(2), \\ D^{-}\mathbf{V }(x_{N/2}) + D^{-}\mathbf{W}(x_{N/2}) = D^{+}\mathbf{V }(x_{N/2}) + D^{+}\mathbf{W}(x_{N/2}). \end{array} }$$

The error at each point \(x_{j} \in \overline{\varOmega }^{N}\) is denoted by \(\mathbf{e}(x_{j}) =\mathbf{ U}(x_{j}) -\mathbf{ u}(x_{j}).\) Then the local truncation error \(\mathbf{L}^{N}\mathbf{e}(x_{j}),\) for \(j\neq N/2,\) has the decomposition \(\mathbf{L}^{N}\mathbf{e}(x_{j}) =\mathbf{ L}^{N}(\mathbf{V } -\mathbf{ v})(x_{j}) +\mathbf{ L}^{N}(\mathbf{W} -\mathbf{ w})(x_{j}).\) The errors in the smooth and singular components are bounded in the following

Theorem 1

Let conditions ( 3 ) and ( 4 ) hold. If \(\mathbf{v}\) denotes the smooth component of the solution of ( 1 )–( 2 ) and \(\mathbf{V }\) the smooth component of the solution of the problem ( 15 ), then, for \(\;i = 1,2,\;j\neq N/2,\)

$$\displaystyle{ \vert (\mathbf{L}_{1}^{N}(\mathbf{V } -\mathbf{ v}))_{ i}(x_{j})\vert \leq C\,(N^{-1}\,\ln N)^{2},\;\;0 \leq j \leq N/2 - 1, }$$
(16)
$$\displaystyle{ \vert (\mathbf{L}_{2}^{N}(\mathbf{V } -\mathbf{ v}))_{ i}(x_{j})\vert \leq C\,(N^{-1}\,\ln N)^{2},\;\;N/2 + 1 \leq j \leq N. }$$
(17)

If \(\mathbf{w}\) denotes the singular component of the solution of ( 1 )–( 2 ) and \(\mathbf{W}\) the singular component of the solution of the problem ( 15 ), then, for \(\;i = 1,2,\;j\neq N/2,\)

$$\displaystyle{ \vert (\mathbf{L}_{1}^{N}(\mathbf{W} -\mathbf{ w}))_{ i}(x_{j})\vert \leq C\,(N^{-1}\,\ln N)^{2},\;\;0 \leq j \leq N/2 - 1, }$$
(18)
$$\displaystyle{ \vert (\mathbf{L}_{2}^{N}(\mathbf{W} -\mathbf{ w}))_{ i}(x_{j})\vert \leq C\,(N^{-1}\,\ln N)^{2},\;\;N/2 + 1 \leq j \leq N. }$$
(19)

Proof

As the expressions derived for the local truncation errors in \(\mathbf{V }\) and \(\mathbf{W}\) and the estimates for the derivatives of the smooth and singular components are exactly of the form found in [9], the required bounds follow. □ 

Define, for i = 1, 2, a set of discrete barrier functions on \(\,\overline{\varOmega }^{N}\,\) by

$$\displaystyle{ \omega _{i}(x_{j}) = \left \{\begin{array}{@{}l@{\quad }l@{}} \dfrac{\varPi _{k=1}^{j}(1 + (\sqrt{\alpha }h_{ k}/\sqrt{2\varepsilon _{i}}))} {\varPi _{k=1}^{N/2}(1 + (\sqrt{\alpha }h_{ k}/\sqrt{2\varepsilon _{i}}))}, \quad &\;0 \leq j \leq N/2, \\ \quad \\ \dfrac{\varPi _{k=j}^{N-1}(1 + (\sqrt{\alpha }h_{ k+1}/\sqrt{2\varepsilon _{i}}))} {\varPi _{k=N/2}^{N-1}(1 + (\sqrt{\alpha }h_{ k+1}/\sqrt{2\varepsilon _{i}}))},\quad &\;N/2 \leq j \leq N. \end{array} \right. }$$
(20)

It is not hard to see that,

$$\displaystyle{ \omega _{i}(0) = 0,\;\,\omega _{i}(1) = 1,\;\,\omega _{i}(2) = 0\;\text{and}\;\omega _{1}(x_{j}) <\omega _{2}(x_{j})\;\text{for any}\;0 \leq j \leq N, }$$
(21)
$$\displaystyle{ (\mathbf{L}_{1}^{N}\boldsymbol{\omega })_{ i}(x_{j}) > -\alpha \;\omega _{i}(x_{j}) +\sum _{ l=1}^{i}a_{ il}(x_{j})\,\omega _{i}(x_{j}) +\sum _{ l=i+1}^{2}a_{ il}(x_{j}), }$$
(22)
$$\displaystyle{ (\mathbf{L}_{2}^{N}\boldsymbol{\omega })_{ i}(x_{j}) \geq -\alpha \;\omega _{i}(x_{j})\; +\sum _{ l=1}^{i}a_{ il}(x_{j})\,\omega _{i}(x_{j}) +\sum _{ l=i+1}^{2}a_{ il}(x_{j}) + b_{i}(x_{j}), }$$
(23)
$$\displaystyle{ \text{and}\;\;(D^{+} - D^{-})\omega _{ i}(x_{N/2}) \leq -\frac{C} {\sqrt{\varepsilon }_{i}}. }$$
(24)
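The barrier functions (20) and the properties in (21) can be examined numerically; a sketch with assumed sample values of the parameters (note that \(\omega_i(x_{N/2}) = 1\) exactly, and \(\omega_1 < \omega_2\) away from x = 1):

```python
import numpy as np

def omega(mesh, N, eps_i, alpha):
    """Discrete barrier function (20) on a mesh with N+1 points, x_{N/2} = 1."""
    h = np.diff(mesh)                                   # h[k-1] = h_k
    r = 1.0 + np.sqrt(alpha) * h / np.sqrt(2.0 * eps_i)
    w = np.empty(N + 1)
    for j in range(N // 2 + 1):     # left half: Pi_{k=1}^j / Pi_{k=1}^{N/2}
        w[j] = np.prod(r[:j]) / np.prod(r[:N // 2])
    for j in range(N // 2, N + 1):  # right half: Pi_{k=j}^{N-1} / Pi_{k=N/2}^{N-1}
        w[j] = np.prod(r[j:]) / np.prod(r[N // 2:])
    return w

# Assumed sample values; the text only requires eps_1 < eps_2 and alpha > 0.
N, alpha = 16, 1.0
mesh = np.linspace(0.0, 2.0, N + 1)
w1 = omega(mesh, N, 1e-3, alpha)
w2 = omega(mesh, N, 1e-2, alpha)
print(w1[N // 2], w2[N // 2])                   # 1.0 1.0 at x = 1
print(bool(np.all(w1[:N // 2] < w2[:N // 2])))  # True: omega_1 < omega_2
```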

It is to be noted that

$$\displaystyle{ \vert (D^{+} - D^{-})e_{ i}(x_{\frac{N}{2} })\vert \;\leq \; C\;\dfrac{h^{{\ast}}} {\varepsilon _{i}} }$$
(25)

where \(h^{{\ast}} = h_{N/2}^{-} = h_{N/2}^{+}.\)

We now state and prove the main theoretical result of this paper.

Theorem 2

Let \(\mathbf{u}(x_{j})\) be the solution of the problem ( 1 )–( 2 ) and \(\mathbf{U}(x_{j})\) be the solution of the discrete problem ( 14 ). Then,

$$\displaystyle{\parallel \mathbf{ U}(x_{j}) -\mathbf{ u}(x_{j}) \parallel \leq C\,N^{-1}\ln N,\;\;0 \leq j \leq N.}$$

Proof

Consider \(Z_{i}(x_{j}) = C_{1}N^{-1}\ln N + C_{2} \dfrac{h^{{\ast}}} {\sqrt{\varepsilon }_{i}}\omega _{i}(x_{j}) \pm e_{i}(x_{j}),\;i = 1,2,\;0 \leq j \leq N,\) where \(C_{1}\) and \(C_{2}\) are constants. Then,

$$\displaystyle{ [\mathbf{L}_{1}^{N}\mathbf{Z}]_{ i}(x_{j})\; =\; C_{1}\,\sum _{l=1}^{2}a_{ il}(x_{j})N^{-1}\ln N + C_{ 2} \dfrac{h^{{\ast}}} {\sqrt{\varepsilon }_{i}}[\mathbf{L}_{1}^{N}\boldsymbol{\omega }\,]_{ i}(x_{j}) \pm [\mathbf{L}_{1}^{N}\mathbf{e}\,]_{ i}(x_{j}). }$$
(26)

Using (22) in (26) and Theorem 1,

$$\displaystyle{\begin{array}{l} [\mathbf{L}_{1}^{N}\mathbf{Z}]_{i}(x_{j}) \\ \; \geq \; C_{1}\,\sum _{l=1}^{2}a_{ il}(x_{j})N^{-1}\ln N + C_{ 2} \dfrac{h^{{\ast}}} {\sqrt{\varepsilon }_{i}}\left [-\alpha \;\omega _{i}(x_{j}) +\sum _{ l=1}^{i}a_{ il}(x_{j})\omega _{i}(x_{j})+\sum _{l=i+1}^{2}a_{ il}(x_{j})\right ] \\ \qquad \pm \, C\,N^{-1}\,\ln N\\ \\ \; =\; C_{1}\,\sum _{l=1}^{2}a_{ il}(x_{j})N^{-1}\ln N + C_{ 2} \dfrac{h^{{\ast}}} {\sqrt{\varepsilon }_{i}}\left [\sum _{l=1}^{i}a_{ il}(x_{j})-\alpha \right ]\omega _{i}(x_{j})+C_{2} \dfrac{h^{{\ast}}} {\sqrt{\varepsilon }_{i}}\sum _{l=i+1}^{2}a_{ il}(x_{j}) \\ \qquad \pm \, C\,N^{-1}\,\ln N.\end{array} }$$

Let \(\lambda _{i}(x_{j}) = \left (\sum _{l=1}^{i}a_{ il}(x_{j})-\alpha \right )\omega _{i}(x_{j}) +\sum _{ l=i+1}^{2}a_{ il}(x_{j}),\,i = 1,2.\) Then, choosing \(C_{1} > C_{2}\vert \vert \boldsymbol{\lambda }\vert \vert + C,\) it follows that \([\mathbf{L}_{1}^{N}\mathbf{Z}]_{i}(x_{j})\; \geq \; 0\;\text{on}\;\varOmega _{1}^{N}\;\text{for}\;i = 1,2.\) For \(\,x_{j} \in \varOmega _{2}^{N},\)

$$\displaystyle{ [\mathbf{L}_{2}^{N}\mathbf{Z}]_{ i}(x_{j})\; =\; C_{1}\left (\sum _{l=1}^{2}a_{ il}(x_{j}) + b_{i}(x_{j})\right )N^{-1}\ln N+C_{ 2} \dfrac{h^{{\ast}}} {\sqrt{\varepsilon _{i}}}[\mathbf{L}_{2}^{N}\boldsymbol{\omega }\,]_{ i}(x_{j})\pm [\mathbf{L}_{2}^{N}\mathbf{e}\,]_{ i}(x_{j}). }$$
(27)

Using (23) and Theorem 1 in (27),

$$\displaystyle{\begin{array}{l} [\mathbf{L}_{2}^{N}\mathbf{Z}]_{i}(x_{j}) \\ \; \geq \; C_{1}\left (\sum _{l=1}^{2}a_{ il}(x_{j}) + b_{i}(x_{j})\right )N^{-1}\ln N \\ \qquad + C_{2} \dfrac{h^{{\ast}}} {\sqrt{\varepsilon _{i}}}\left [-\alpha \;\omega _{i}(x_{j}) +\sum _{ l=1}^{i}a_{ il}(x_{j})\omega _{i}(x_{j}) +\sum _{ l=i+1}^{2}a_{ il}(x_{j}) + b_{i}(x_{j})\right ] \pm C\,N^{-1}\,\ln N \\ \\ \; =\; C_{1}\left (\sum _{l=1}^{2}a_{ il}(x_{j}) + b_{i}(x_{j})\right )N^{-1}\ln N \\ \quad + C_{2} \dfrac{h^{{\ast}}} {\sqrt{\varepsilon _{i}}}\left [\sum _{l=1}^{i}a_{ il}(x_{j})-\alpha \right ]\omega _{i}(x_{j}) + C_{2} \dfrac{h^{{\ast}}} {\sqrt{\varepsilon _{i}}}\left [\sum _{l=i+1}^{2}a_{ il}(x_{j}) + b_{i}(x_{j})\right ] \pm C\,N^{-1}\,\ln N. \end{array} }$$

Let \(\mu _{i}(x_{j}) = \left (\sum _{l=1}^{i}a_{ il}(x_{j})-\alpha \right )\omega _{i}(x_{j}) +\sum _{ l=i+1}^{2}a_{ il}(x_{j}) + b_{i}(x_{j}),\,i = 1,2.\) Then, choosing \(C_{1} > C_{2}\vert \vert \boldsymbol{\mu }\vert \vert + C,\) it follows that \([\mathbf{L}_{2}^{N}\mathbf{Z}]_{i}(x_{j})\; \geq \; 0\;\text{on}\;\varOmega _{2}^{N}\;\text{for}\;i = 1,2.\) Further,

$$\displaystyle\begin{array}{rcl} D^{+}Z_{ i}(1) - D^{-}Z_{ i}(1)& \leq & -C_{2}\dfrac{Ch^{{\ast}}} {\varepsilon _{i}} \pm C\dfrac{h^{{\ast}}} {\varepsilon _{i}},\;\text{using}\;\mathrm{(24)}\;\text{and}\;\mathrm{(25)} {}\\ & \leq & 0. {}\\ \end{array}$$

Also, using (21), \(Z_{i}(0) \geq 0\) and \(Z_{i}(2) \geq 0\) for \(i = 1,2.\) Therefore, applying Lemma 7 to \(\mathbf{Z},\) it follows that \(Z_{i}(x_{j}) \geq 0\;\text{for}\;i = 1,2,\;0 \leq j \leq N.\) Since, from (21), \(\omega _{i}(x_{j}) \leq 1\) for \(i = 1,2,\;0 \leq j \leq N,\) it follows that, for \(N\) sufficiently large, \(\parallel \mathbf{ U} -\mathbf{ u} \parallel \leq CN^{-1}\ln N.\) □ 

7 Numerical Illustration

In this section, the parameter-uniform convergence of the proposed numerical method is illustrated with an example.

Example

Consider the BVP

$$\displaystyle{-E\mathbf{u}^{\,{\prime\prime}}(x) + A(x)\mathbf{u}(x) + B(x)\mathbf{u}(x - 1) =\mathbf{ f}(x),\;\text{for}\;x \in (0,2),}$$

\(\;\mathbf{u}(x)\; =\; {\boldsymbol 1}\;\text{for}\;x \in [-1,0],\;\mathbf{u}(2)\; =\; {\boldsymbol 1},\;\) where \(\;E = \text{diag}(\varepsilon _{1},\;\varepsilon _{2}),\;\,A(x) = \left (\begin{array}{rr} 4 &-1\\ -1 & 5 \end{array} \right ),\) \(B(x) = \text{diag}(-0.5,\;-0.5)\) and \(\mathbf{f}(x) = (1,\,1)^{T}.\;\)
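For readers who wish to reproduce a rough approximation to this BVP, the following sketch discretizes the system with standard central differences on a uniform mesh. It is only an illustrative simplification: the paper's scheme (14) uses a piecewise-uniform Shishkin mesh to achieve the parameter-uniform bound of Theorem 2, which a uniform mesh does not. The function name and all implementation details are choices made here, not part of the paper.

```python
import numpy as np

def solve_delay_bvp(N=256, eps=(1e-3, 1e-2)):
    """Central-difference solution of
       -E u'' + A u(x) + B u(x-1) = f on (0,2),
       u = (1,1)^T on [-1,0], u(2) = (1,1)^T,
    on a *uniform* mesh (sketch only; the paper uses a
    piecewise-uniform Shishkin mesh to resolve the layers).
    N must be even so that the delay 1 is exactly N/2 steps."""
    assert N % 2 == 0
    h = 2.0 / N
    x = np.linspace(0.0, 2.0, N + 1)
    A = np.array([[4.0, -1.0], [-1.0, 5.0]])   # A(x), constant here
    b = np.array([-0.5, -0.5])                 # diagonal of B(x)
    f = np.array([1.0, 1.0])                   # right-hand side
    phi = np.array([1.0, 1.0])                 # history on [-1, 0]
    n = 2 * (N + 1)                            # unknowns (u1_j, u2_j)
    M = np.zeros((n, n)); rhs = np.zeros(n)
    idx = lambda i, j: 2 * j + i               # component i at node j
    for i in range(2):                         # boundary conditions
        M[idx(i, 0), idx(i, 0)] = 1.0; rhs[idx(i, 0)] = phi[i]
        M[idx(i, N), idx(i, N)] = 1.0; rhs[idx(i, N)] = 1.0
    for j in range(1, N):
        for i in range(2):
            r = idx(i, j)
            c = eps[i] / h**2                  # -eps_i u_i'' term
            M[r, idx(i, j - 1)] -= c
            M[r, idx(i, j)] += 2.0 * c
            M[r, idx(i, j + 1)] -= c
            for l in range(2):                 # reaction term A(x) u(x)
                M[r, idx(l, j)] += A[i, l]
            if j >= N // 2:                    # x_j - 1 >= 0: couple back
                M[r, idx(i, j - N // 2)] += b[i]
                rhs[r] = f[i]
            else:                              # x_j - 1 < 0: use history phi
                rhs[r] = f[i] - b[i] * phi[i]
    U = np.linalg.solve(M, rhs)
    return x, U[0::2], U[1::2]                 # mesh, u1, u2
```

Since the off-diagonal entries of A are nonpositive and b is negative with positive row sums, the discrete operator is an M-matrix, so the computed solution stays between 0 and 1 for this data.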

The maximum pointwise errors and the rates of convergence for this BVP are presented in Table 1. The solution of this problem for \(\varepsilon _{1} = 2^{-13},\;\varepsilon _{2} = 2^{-11}\) and N = 2048 is portrayed in Fig. 1.

Fig. 1 Solution profile

Table 1 Values of the maximum pointwise errors \(\;D_{\varepsilon }^{N}\;\) and \(\;D^{N},\;\) order of convergence \(\;p^{N},\;\) error constant \(\;C_{p}^{N},\;\) order of \(\;\boldsymbol{\varepsilon }\;\)-uniform convergence \(\;p^{{\ast}}\;\) and \(\;\boldsymbol{\varepsilon }\;\)-uniform error constant \(\;C_{p^{{\ast}}}^{N}\;\) for \(\;\varepsilon _{1}\; =\; \dfrac{\eta } {16},\;\varepsilon _{2}\; =\; \dfrac{\eta } {4}\;\text{and}\;\alpha = 2.4999\;\)
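Quantities such as \(D^{N}\) and \(p^{N}\) in Table 1 are commonly computed with the two-mesh principle: the difference between solutions on a mesh with N intervals and on a bisected mesh with 2N intervals estimates the error, and the ratio of successive differences gives the observed order. The sketch below illustrates this on a scalar model problem chosen purely for self-containedness (it is not the system of this paper, and the function names are assumptions).

```python
import numpy as np

def solve_scalar(N, eps):
    """Central differences for the scalar model problem
       -eps u'' + 2 u = 1 on (0,1), u(0) = u(1) = 0,
    on a uniform mesh with N intervals."""
    h = 1.0 / N
    x = np.linspace(0.0, 1.0, N + 1)
    main = 2.0 * eps / h**2 + 2.0              # diagonal entries
    off = -eps / h**2                          # off-diagonal entries
    M = (np.diag(np.full(N - 1, main))
         + np.diag(np.full(N - 2, off), 1)
         + np.diag(np.full(N - 2, off), -1))
    u = np.zeros(N + 1)                        # boundary values are 0
    u[1:-1] = np.linalg.solve(M, np.full(N - 1, 1.0))
    return x, u

def two_mesh_order(N, eps):
    """D^N = max_j |U^N(x_j) - U^{2N}(x_j)| over the coarse mesh,
       p^N = log2(D^N / D^{2N})."""
    _, uN = solve_scalar(N, eps)
    _, u2N = solve_scalar(2 * N, eps)
    _, u4N = solve_scalar(4 * N, eps)
    DN = np.abs(uN - u2N[::2]).max()           # coarse nodes are even fine nodes
    D2N = np.abs(u2N - u4N[::2]).max()
    return DN, np.log2(DN / D2N)
```

For a well-resolved problem this yields an observed order near 2 for central differences; on a Shishkin mesh applied to a layer problem the analogous computation exhibits the almost-first-order \(N^{-1}\ln N\) behaviour of Theorem 2.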