1 Introduction

When a real-life phenomenon is converted into a mathematical model, differential equations, partial differential equations, and systems of differential equations play a vital role in modelling natural evolution. Researchers primarily try to capture what is important, retaining the essential physical quantities and neglecting the negligible ones, which often involve small positive parameters. Owing to their occurrence in a wide range of applications, the study of singularly perturbed nonlinear reaction-diffusion (SPNRD) problems has long been a topic of considerable interest for mathematicians and engineers. These problems are significant in the environmental sciences, for instance in analyzing pollution from manufacturing sources entering the atmosphere, and they also occur in chemical kinetics in catalytic reaction theory. The SPNRD problem models an isothermal reaction catalyzed in a pellet and is modelled by Eq. (3.1) [29], where the concentration of the reactant is denoted by v and \(\frac{1}{\sqrt{\varepsilon }}\) is the Thiele modulus, defined by \(\frac{K}{D}\), with K the reaction rate and D the diffusion coefficient. In considering these types of problems, it is essential to acknowledge that the diffusion coefficient of the admixture in the material may be sufficiently small, resulting in substantial variations of the concentration with the material depth; diffusion boundary layers then arise. Hence these problems exhibit a singularly perturbed character. The mathematical model of such problems contains a perturbation parameter, a small coefficient multiplying the highest derivative of the differential equation. The solution of such a problem changes swiftly in some parts of the domain and gradually in the others.
The mathematical model of an adiabatic tubular chemical reactor processing an irreversible exothermic chemical reaction is also represented by SPNRD problems. The concentrations of the various chemical species involved in the reaction can then be determined in a simple manner from a knowledge of v.

Since only a few nonlinear systems can be solved explicitly, we rely on numerical schemes to obtain approximate solutions, typically by linearizing the nonlinear problem. On a uniform mesh, existing numerical techniques such as finite differences, finite elements, and spline collocation give unsatisfactory results, or one has to design a suitable layer-adaptive mesh that is fine near the layer region and standard away from it [2, 6, 16, 20, 28]. These problems have been studied by many authors, who proposed various numerical and iterative techniques. Natalia Kopteva et al. proposed a finite element method and gave an error analysis in the maximum norm using a Green's function approach [3, 4, 8, 14, 15]. Pankaj Mishra et al. developed a cubic spline orthogonal collocation method for SPNRD problems [20]. Relja Vulanovic proposed a sixth-order finite difference method for the said problem [28]. M.K. Kadalbajoo et al. developed a spline technique on non-uniform grids for the said problem [11]. S.C.S. Rao et al. presented a B-spline collocation method on piecewise uniform grids for SPNRD problems [26]. S.A. Khuri et al. developed a patching approach based on a combination of the variational iteration method and an adaptive cubic spline collocation scheme [13]. Muhammad Asif Zahoor Raja et al. developed a neuro-evolutionary technique, an artificial intelligence technique, for solving SPNRD problems [25]; see also [12, 16, 17].

The novelty of this article is to derive an analytic iterative approximation to singularly perturbed nonlinear reaction-diffusion problems using a Bernstein collocation method based on Bernstein polynomials and operational matrices. Bernstein polynomials play a vital role in numerous areas of mathematics, e.g., in approximation theory and computer-aided geometric design [10]. The main advantage of the Bernstein polynomial method over the other existing approaches is its simplicity of implementation for nonlinear problems. The key feature of this approach is that it reduces such problems to solving a system of algebraic equations via operational matrices [31].

Owing to its flexibility, the Bernstein collocation method (BCM) has emerged as a powerful tool for solving linear and nonlinear systems. Consequently, it has been successfully applied to find solutions of high even-order differential equations using an integrals-of-Bernstein-polynomials approach [9], to first-order nonlinear differential equations with mixed nonlinear conditions [30], to high-order systems of linear Volterra-Fredholm integro-differential equations [19], to the Riccati differential equation and the Volterra population model [22], to nonlinear Fredholm-Volterra integro-differential equations [32], to fractional differential equations [1], and to others [7, 27, 31].

The main advantages of this method are: (i) it provides the approximate solution over the entire domain, while other existing numerical methods provide the approximate solution only at discrete points of the domain; (ii) to solve a nonlinear problem, one often uses a quasi-linearization technique and then solves the linearized problem by numerical or other existing techniques, and due to the linearization the accuracy somewhat degenerates, which may sometimes lead to a deceptive solution, whereas in this method we solve the nonlinear problem without linearization; (iii) it is easy to implement.

The paper is organised as follows. In Section 2, a brief sketch of the Bernstein collocation method and auxiliary results are presented. In Section 3, the existence and uniqueness of the solution of the said problem are established. In Section 4, the error analysis is carried out. In Section 5, nonlinear test problems are considered to validate the theoretical findings of the proposed method, and a comparative analysis is carried out with other existing methods. Section 6 contains the conclusion.

2 Brief sketch of the method

In this section we give a brief sketch of our proposed method and the corresponding auxiliary results.

2.1 Properties of Bernstein polynomials

The generalized form of the Bernstein polynomial of \(m^{th}\) degree on the interval [0, 1] is defined as

$$\begin{aligned} \mathbf{B} _{i,m}(x)=\left( {\begin{array}{c}m\\ i\end{array}}\right) x^{i} (1-x)^{m-i}, \quad 0\le i \le m. \end{aligned}$$
(2.1)

Using the binomial expansion of \((1-x)^{m-i}\), the Bernstein polynomial of \(m^{th}\) degree reads:

$$\begin{aligned} \mathbf{B} _{i,m}(x)= & {} \left( {\begin{array}{c}m\\ i\end{array}}\right) x^{i}\left( \sum _{k=0}^{m-i}(-1)^{k}\left( {\begin{array}{c}m-i\\ k\end{array}}\right) x^{k} \right) ,\end{aligned}$$
(2.2)
$$\begin{aligned}= & {} \sum _{k=0}^{m-i}(-1)^{k}\left( \left( {\begin{array}{c}m\\ i\end{array}}\right) \left( {\begin{array}{c}m-i\\ k\end{array}}\right) x^{k+i} \right) , \quad i=0, 1,\cdots ,m. \end{aligned}$$
(2.3)

\(\mathbf{B} _{i,m}(x)\) has the following properties:

  1.

    \(\mathbf{B} _{i,m}(x)\) is continuous over interval [0, 1],

  2.

    \(\mathbf{B} _{i,m}(x)\ge 0\) \(\forall \) \(x\in [0,1]\),

  3.

    The Bernstein polynomials sum to 1 (partition of unity), i.e.

    $$\begin{aligned} \sum _{i=0}^{m}{} \mathbf{B} _{i,m}(x)=1 \quad x\in [0,1]. \end{aligned}$$
    (2.4)
  4.

    The Bernstein polynomial \(\mathbf{B} _{i,m}(x)\) satisfies the recurrence relation

    $$\begin{aligned} \mathbf{B} _{i,m}(x)=(1-x)\mathbf{B} _{i,m-1}(x)+x \mathbf{B} _{i-1,m-1}(x). \end{aligned}$$
    (2.5)

    Let \(\varphi (x)=[\mathbf{B} _{0,m}(x),\mathbf{B} _{1,m}(x),\cdots ,\mathbf{B} _{m,m}(x)]^{T}\), then we can write \(\varphi (x)\) as:

    $$\begin{aligned} \varphi (x)=Q\times T_m(x). \end{aligned}$$
    (2.6)

    where the row vector \(Q_{i+1}\) (the \((i+1)^{th}\) row of Q) is defined as follows:

    $$\begin{aligned} Q_{i+1}=\left\{ \overbrace{ 0,0,\cdots ,0}^{i times}, (-1)^{0}\left( {\begin{array}{c}m\\ i\end{array}}\right) ,(-1)^{1}\left( {\begin{array}{c}m\\ i\end{array}}\right) \left( {\begin{array}{c}m-i\\ 1\end{array}}\right) ,\cdots (-1)^{m-i}\left( {\begin{array}{c}m\\ i\end{array}}\right) \left( {\begin{array}{c}m-i\\ m-i\end{array}}\right) \right\} ,\nonumber \\ \end{aligned}$$
    (2.7)

    the vector \(T_{m}(x)\) is given by

    $$\begin{aligned} T_{m}(x)= \begin{bmatrix} 1\\ x\\ \vdots \\ x^{m} \end{bmatrix}, \end{aligned}$$
    (2.8)

    and Q is an \((m+1)\times (m+1)\) matrix written as follows:

    $$\begin{aligned} Q= \begin{bmatrix} Q_1\\ Q_2\\ \vdots \\ Q_{m+1} \end{bmatrix}, \end{aligned}$$
    (2.9)

    and

    $$\begin{aligned} \varphi (x)= \begin{bmatrix} \mathbf{B} _{0,m}(x)\\ \mathbf{B} _{1,m}(x)\\ \vdots \\ \mathbf{B} _{m,m}(x) \end{bmatrix}. \end{aligned}$$
    (2.10)

From Eq. (2.7), it follows that the matrix Q is invertible, since it is an upper triangular matrix with nonzero diagonal entries and determinant \(\vert Q \vert =\prod _{i=0}^{m}\left( {\begin{array}{c}m\\ i\end{array}}\right) .\)
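The definitions above can be checked numerically. The following Python sketch (an illustration added here, not part of the original paper) evaluates \(\mathbf{B}_{i,m}\) both in product form and in the expanded monomial form (2.3), and verifies the partition of unity (2.4), the recurrence (2.5), and the factorization \(\varphi (x)=Q T_m(x)\) of (2.6)-(2.7).

```python
from math import comb

def bernstein(i, m, x):
    # product form: B_{i,m}(x) = C(m,i) x^i (1-x)^(m-i)
    return comb(m, i) * x**i * (1 - x)**(m - i)

def bernstein_expanded(i, m, x):
    # monomial expansion, Eq. (2.3)
    return sum((-1)**k * comb(m, i) * comb(m - i, k) * x**(k + i)
               for k in range(m - i + 1))

def Q_matrix(m):
    # Eq. (2.7): row i has i leading zeros, then (-1)^k C(m,i) C(m-i,k)
    Q = [[0] * (m + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        for k in range(m - i + 1):
            Q[i][i + k] = (-1)**k * comb(m, i) * comb(m - i, k)
    return Q

def phi(m, x):
    # phi(x) = Q * T_m(x) with T_m(x) = [1, x, ..., x^m]^T, Eq. (2.6)
    Q, T = Q_matrix(m), [x**j for j in range(m + 1)]
    return [sum(Q[i][j] * T[j] for j in range(m + 1)) for i in range(m + 1)]
```

The agreement of `phi` with `bernstein` at any sample point confirms that Q indeed maps the monomial basis to the Bernstein basis.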

2.2 Operational matrix for differentiation

In this subsection a Bernstein operational matrix associated with differentiation is derived. From Eq. (2.6) we have

$$\begin{aligned} \varphi (x)=Q\times T_m(x), \end{aligned}$$
(2.11)

and the derivative of \(\varphi (x)\) is calculated as:

$$\begin{aligned} \dfrac{d \varphi (x)}{d x}=Q_{m}\begin{bmatrix} 0\\ 1\\ 2x\\ \vdots \\ mx^{m-1} \end{bmatrix}, \end{aligned}$$
(2.12)

the above expression can be written as:

$$\begin{aligned} \dfrac{d \varphi (x)}{d x}=Q_{m}\Lambda ^{'}X^{'}(x). \end{aligned}$$
(2.13)

Where \(\Lambda ^{'}\) is the \((m+1)\times m\) matrix and \(X^{'}(x)\) the m-vector given by:

$$\begin{aligned} \Lambda ^{'}= \left[ \begin{array}{l@{\quad }l@{\quad }l@{\quad }l} 0 &{} 0 &{}\cdots &{} 0\\ 1&{}0&{}\cdots &{} 0\\ 0&{}2&{}\cdots &{} 0\\ \vdots &{}\vdots &{}\ddots &{}\vdots \\ 0&{}0&{}\cdots &{}m \end{array}\right] , \end{aligned}$$
(2.14)

and

$$\begin{aligned} X^{'}(x)=\begin{bmatrix} 1\\ x\\ \vdots \\ x^{m-1} \end{bmatrix}. \end{aligned}$$
(2.15)

The vector \(X^{'}(x)\) can be expressed in the Bernstein basis \(\lbrace \mathbf{B} _{i,m}\rbrace \) as \(X^{'}(x)= \Delta ^{*}\varphi (x)\), where \(\Delta ^{*}\) is the \(m\times (m+1)\) matrix formed by the first m rows of \(Q^{-1}\):

$$\begin{aligned} \Delta ^{*}=\begin{bmatrix} Q^{-1}_{1}\\ Q^{-1}_{2}\\ Q^{-1}_{3}\\ \vdots \\ Q^{-1}_{m} \end{bmatrix}, \end{aligned}$$
(2.16)

hence,

$$\begin{aligned} \dfrac{d \varphi (x)}{d x}=Q_{m}\Lambda ^{'}\Delta ^{*}\varphi (x). \end{aligned}$$
(2.17)

Where \(\mathcal {O}=Q_{m}\Lambda ^{'}\Delta ^{*}\) is called the operational matrix of differentiation. Let us assume that v(x) is approximated as:

$$\begin{aligned} v(x)\simeq V^{T}\varphi (x), \end{aligned}$$
(2.18)

Then the \(n^{th}\) derivative of v(x) in terms of the operational matrix is given as:

$$\begin{aligned} v^{(n)}(x) \simeq V^{T} \varphi ^{(n)}(x)=V^{T}\mathcal {O}^{n}\varphi (x). \end{aligned}$$
(2.19)
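As a concrete check of Eqs. (2.11)-(2.19), the following sketch (my own illustration, in exact rational arithmetic, not the authors' code) assembles \(\mathcal{O}=Q\Lambda ^{'}\Delta ^{*}\) for a small m, taking \(\Lambda ^{'}\) as the \((m+1)\times m\) matrix of (2.14) and \(\Delta ^{*}\) as the first m rows of \(Q^{-1}\), and verifies that \(\mathcal{O}\varphi (x)\) reproduces the analytic derivatives \(\mathbf{B}'_{i,m}(x)=m(\mathbf{B}_{i-1,m-1}(x)-\mathbf{B}_{i,m-1}(x))\).

```python
from fractions import Fraction
from math import comb

def Q_matrix(m):
    # Q[i][i+k] = (-1)^k C(m,i) C(m-i,k), Eq. (2.7); upper triangular
    Q = [[Fraction(0)] * (m + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        for k in range(m - i + 1):
            Q[i][i + k] = Fraction((-1)**k * comb(m, i) * comb(m - i, k))
    return Q

def inv_upper(Q):
    # invert an upper-triangular matrix exactly by back substitution
    n = len(Q)
    inv = [[Fraction(0)] * n for _ in range(n)]
    for j in range(n):
        inv[j][j] = 1 / Q[j][j]
        for i in range(j - 1, -1, -1):
            s = sum(Q[i][k] * inv[k][j] for k in range(i + 1, j + 1))
            inv[i][j] = -s / Q[i][i]
    return inv

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def operational_matrix(m):
    Q = Q_matrix(m)
    # Lambda': (m+1) x m with sub-diagonal entries 1..m, Eq. (2.14)
    Lam = [[Fraction(0)] * m for _ in range(m + 1)]
    for j in range(1, m + 1):
        Lam[j][j - 1] = Fraction(j)
    # Delta*: first m rows of Q^{-1}, Eq. (2.16)
    Delta = inv_upper(Q)[:m]
    return matmul(matmul(Q, Lam), Delta)   # O = Q Lambda' Delta*, Eq. (2.17)
```

Since \(\Delta ^{*}\varphi (x)=[1,x,\cdots ,x^{m-1}]^{T}\) exactly, the identity \(\varphi '(x)=\mathcal{O}\varphi (x)\) holds without approximation.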

2.3 Operational matrix of product

The main concern of this subsection is to explicitly evaluate the operational matrix of product corresponding to Bernstein polynomials of \(m^{th}\) degree. Let c be an \((m+1)\times 1\) column vector and let \(\breve{C}\) be the \((m+1)\times (m+1)\) operational matrix of product such that

$$\begin{aligned} c^{T}\varphi (x)\varphi (x)^{T} \simeq \varphi (x)^{T} \breve{C}, \end{aligned}$$
(2.20)

where \(\varphi (x)\) is defined in (2.6) and \(c^{T}\varphi (x)=\sum _{i=0}^{m}c_{i} \mathbf{B} _{i,m}\). We rewrite Eq. (2.20) in terms of the Bernstein basis as

$$\begin{aligned} c^{T}\varphi (x)\varphi (x)^{T}= & {} c^{T}\varphi (x) T^{T}_{m}(x)Q^{T},\end{aligned}$$
(2.21)
$$\begin{aligned}= & {} [c^{T}\varphi (x), x c^{T}\varphi (x), x^{2}c^{T}\varphi (x), \cdots , x^{m}c^{T}\varphi (x)]Q^{T},\end{aligned}$$
(2.22)
$$\begin{aligned}= & {} \sum _{i=0}^{i=m}[c_{i}{} \mathbf{B} _{i,m}, c_{i}x \mathbf{B} _{i,m}, c_{i}x^{2} \mathbf{B} _{i,m}, \cdots , c_{i}x^{m} \mathbf{B} _{i,m}]Q^{T}. \end{aligned}$$
(2.23)

Now we express each \(x^{k} \mathbf{B} _{i,m}\) in terms of \( \lbrace \mathbf{B} _{i,m}\rbrace \) for \(k, i=0, 1, \cdots , m\). Let

$$\begin{aligned} \mathbf{e} _{k,i}=[\mathbf{e} _{k,i}^{0}, \mathbf{e} _{k,i}^{1}, \cdots , \mathbf{e} _{k,i}^{m}]^{T}. \end{aligned}$$
(2.24)

Let D be a \((m+1)\times (m+1)\) dual matrix of \(\varphi (x)\) such that

$$\begin{aligned} D=\int _{0}^{1}\varphi (x)\varphi (x)^{T}dx. \end{aligned}$$
(2.25)

Now for \(i, k=0, 1, \cdots , m\) we have

$$\begin{aligned} \mathbf{e} _{k,i}^{T}\varphi (x)\simeq x^{k} \mathbf{B} _{i,m}. \end{aligned}$$
(2.26)

Now we define

$$\begin{aligned} \mathbf{e} _{k,i}= & {} D^{-1}\left[ \int _{0}^{1}x^{k} \mathbf{B} _{i,m}{} \mathbf{B} _{0,m}(x)dx, \int _{0}^{1}x^{k} \mathbf{B} _{i,m}{} \mathbf{B} _{1,m}(x)dx, \int _{0}^{1}x^{k} \mathbf{B} _{i,m}{} \mathbf{B} _{2,m}(x)dx, \right. \nonumber \\&\quad \left. \cdots , \int _{0}^{1}x^{k} \mathbf{B} _{i,m}{} \mathbf{B} _{m,m}(x)dx\right] ^{T} , \end{aligned}$$
(2.27)
$$\begin{aligned} \mathbf{e} _{k,i}= & {} D^{-1}\left( \frac{\left( {\begin{array}{c}m\\ i\end{array}}\right) }{2m+k+1}\right) \left[ \frac{\left( {\begin{array}{c}m\\ 0\end{array}}\right) }{\left( {\begin{array}{c}2m+k\\ i+k\end{array}}\right) }, \frac{\left( {\begin{array}{c}m\\ 1\end{array}}\right) }{\left( {\begin{array}{c}2m+k\\ i+k+1\end{array}}\right) }, \cdots , \frac{\left( {\begin{array}{c}m\\ m\end{array}}\right) }{\left( {\begin{array}{c}2m+k\\ i+k+m\end{array}}\right) } \right] ^{T},\nonumber \\&\quad \text {for}\quad i, k=0, 1, \cdots , m. \end{aligned}$$
(2.28)
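The closed form (2.28) follows from the Beta integral \(\int _0^1 x^a(1-x)^b dx = a!\,b!/(a+b+1)!\), since \(x^{k}\mathbf{B}_{i,m}\mathbf{B}_{j,m}=\binom{m}{i}\binom{m}{j}x^{k+i+j}(1-x)^{2m-i-j}\). The following sketch (an illustration I added, not from the paper) checks the closed form against that exact value for all index combinations of a small m.

```python
from fractions import Fraction
from math import comb, factorial

def moment_exact(m, k, i, j):
    # ∫_0^1 x^k B_{i,m}(x) B_{j,m}(x) dx via the Beta integral:
    # the integrand equals C(m,i) C(m,j) x^(k+i+j) (1-x)^(2m-i-j)
    a, b = k + i + j, 2 * m - i - j
    return Fraction(comb(m, i) * comb(m, j) * factorial(a) * factorial(b),
                    factorial(a + b + 1))

def moment_closed_form(m, k, i, j):
    # the j-th entry of the bracket in Eq. (2.28), before applying D^{-1}
    return (Fraction(comb(m, i), 2 * m + k + 1)
            * Fraction(comb(m, j), comb(2 * m + k, i + j + k)))
```

Both expressions agree as exact rationals, confirming the entries fed to \(D^{-1}\) in (2.28).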

Let \(\check{C}\) be the \((m+1)\times (m+1)\) matrix with column vectors \([\check{C}_1, \check{C}_2 ,\cdots , \check{C}_{m+1}]\), where \(\check{C}_{k+1}\) is defined as

$$\begin{aligned} \check{C}_{k+1}=[\mathbf{e} _{k,0}, \mathbf{e} _{k,1}, \cdots , \mathbf{e} _{k,m}]c \quad \forall \quad k=0, 1, \cdots , m. \end{aligned}$$
(2.29)

Then from Eq. (2.23)

$$\begin{aligned} c^{T}\varphi (x)\varphi (x)^{T}= & {} \begin{bmatrix} \sum _{i=0}^{i=m}c_{i}{} \mathbf{B} _{i,m}\\ \sum _{i=0}^{i=m}c_{i}x\mathbf{B} _{i,m}\\ \sum _{i=0}^{i=m}c_{i}x^{2}{} \mathbf{B} _{i,m}\\ \vdots \\ \sum _{i=0}^{i=m}c_{i}x^{m}{} \mathbf{B} _{i,m} \end{bmatrix}Q^{T}. \end{aligned}$$
(2.30)
$$\begin{aligned} c^{T}\varphi (x)\varphi (x)^{T}\simeq & {} \varphi (x)^{T}[\check{C}_1, \check{C}_2 ,\cdots , \check{C}_{m+1}]Q^{T},\nonumber \\\simeq & {} \varphi (x)^{T}\check{C}Q^{T}. \end{aligned}$$
(2.31)

Hence, the operational matrix of product is defined as:

$$\begin{aligned} \breve{C}=\check{C}Q^{T}. \end{aligned}$$
(2.32)
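Assembled together, Eqs. (2.25)-(2.32) give \(\breve{C}\). The sketch below (my own exact-arithmetic illustration, not the authors' code) builds \(\breve{C}\) with `fractions.Fraction`. As a consistency check: when c is the all-ones vector, \(c^{T}\varphi (x)\equiv 1\) by the partition of unity, the relation (2.20) is exact, and \(\breve{C}\) must reduce to the identity matrix.

```python
from fractions import Fraction
from math import comb, factorial

def beta_int(a, b):
    # ∫_0^1 x^a (1-x)^b dx = a! b! / (a+b+1)!
    return Fraction(factorial(a) * factorial(b), factorial(a + b + 1))

def inv(M):
    # exact Gauss-Jordan inverse over Fractions
    n = len(M)
    A = [row[:] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(M)]
    for col in range(n):
        p = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[p] = A[p], A[col]
        piv = A[col][col]
        A[col] = [a / piv for a in A[col]]
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    return [row[n:] for row in A]

def product_matrix(m, c):
    # Q from Eq. (2.7)
    Q = [[Fraction(0)] * (m + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        for k in range(m - i + 1):
            Q[i][i + k] = Fraction((-1)**k * comb(m, i) * comb(m - i, k))
    # dual matrix D = ∫ φ φ^T dx, Eq. (2.25)
    D = [[comb(m, i) * comb(m, j) * beta_int(i + j, 2 * m - i - j)
          for j in range(m + 1)] for i in range(m + 1)]
    Dinv = inv(D)
    def e(k, i):
        # e_{k,i} = D^{-1} (moment vector), Eqs. (2.27)-(2.28)
        mom = [comb(m, i) * comb(m, j) * beta_int(k + i + j, 2 * m - i - j)
               for j in range(m + 1)]
        return [sum(Dinv[r][s] * mom[s] for s in range(m + 1))
                for r in range(m + 1)]
    # columns Ĉ_{k+1} = [e_{k,0}, ..., e_{k,m}] c, Eq. (2.29)
    Ccheck = [[sum(e(k, i)[r] * c[i] for i in range(m + 1))
               for k in range(m + 1)] for r in range(m + 1)]
    # C̆ = Ĉ Q^T, Eq. (2.32)
    return [[sum(Ccheck[r][k] * Q[j][k] for k in range(m + 1))
             for j in range(m + 1)] for r in range(m + 1)]
```

The identity check works because the projection underlying \(\mathbf{e}_{k,i}\) is exact for polynomials of degree at most m, and the all-ones case stays within that degree.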

3 Existence and uniqueness

Consider the following class of singularly perturbed nonlinear reaction-diffusion problems:

$$\begin{aligned} {\left\{ \begin{array}{ll} \varepsilon v^{\prime \prime }(x) = g(x,v(x)); \qquad x\in (0,1)=\omega ,\\ v(0)=A, \quad v(1)=B, \end{array}\right. } \end{aligned}$$
(3.1)

where \(\varepsilon \) is the singular perturbation parameter with \(0< \varepsilon \ll 1\) and \(g \in C^{\infty }([0,1]\times R)\). Let us assume that

$$\begin{aligned} g_{v}(x,v)>\Im ^{2}>0 \qquad \forall (x,v)\in \bar{\omega }\times R. \end{aligned}$$
(3.2)

Let \(\alpha (x)\) and \(\beta (x)\) be two smooth functions such that \(\alpha (x)\le \beta (x)\) and

$$\begin{aligned} {\left\{ \begin{array}{ll} -\varepsilon \alpha ^{\prime \prime }(x) +g(x,\alpha (x))\le 0 \quad \text {and} \quad -\varepsilon \beta ^{\prime \prime }(x) +g(x,\beta (x))\ge 0\\ \alpha (0)\le A\le \beta (0), \quad \alpha (1)\le B \le \beta (1), \end{array}\right. } \end{aligned}$$
(3.3)

and suppose the Nagumo condition holds:

$$\begin{aligned} {\left\{ \begin{array}{ll} g(x,v)=O(\vert v \vert ^2),\\ \text {as} \quad \vert v \vert \rightarrow \infty \quad \forall \quad (x,v) \in (\alpha ,\beta )\times [0,1]. \end{array}\right. } \end{aligned}$$
(3.4)

Theorem 1

Conditions (3.3) and the Nagumo condition (3.4) guarantee the existence of a solution \(v(x)\in C^{2}[0,1]\) of problem (3.1) satisfying \(\alpha (x)\le v(x) \le \beta (x)\) for all \(x\in [0,1]\).

Proof

Let us write the problem in operator form as:

$$\begin{aligned} \pounds v= g(x,v), \end{aligned}$$
(3.5)

where \(\pounds =\varepsilon \dfrac{d^2}{d x^2}\). From (3.3) we have

$$\begin{aligned} \pounds \alpha \ge g(x,\alpha ), \qquad \text {and} \qquad \pounds \beta \le g(x,\beta ) \qquad \text {on} \quad [a,b]\times R. \end{aligned}$$
(3.6)

As g is continuous for \((x,v)\in [a,b]\times R\), this ensures the existence of v(x) such that \(\alpha (x) \le v(x) \le \beta (x)\), satisfying the boundary value problem (3.1).

The proof of the above theorem can be completed by using the maximum principle [21, 24]. \(\square \)

Theorem 2

Let the function g be continuous with respect to (x,v), let g belong to the class \(C^1\) with respect to v for (x,v) in \([0,1]\times (\alpha ,\beta )\), and let there exist a positive constant m such that \(g_{v}(x,v)\ge m >0\) on \([0,1]\times R\). Then for each \(\varepsilon >0\), problem (3.1) has a unique solution \(v(x,\varepsilon )\) on [0,1] such that \(\vert v(x,\varepsilon ) \vert \le \frac{\mathcal {M}}{m}\), where \(\mathcal {M}=\max \lbrace \max _{x\in [0,1]} \vert g(x,0) \vert , m \vert B \vert , m \vert A \vert \rbrace \).

Proof

Suppose for \(x\in [0,1]\),

$$\begin{aligned}&\alpha (x)=\frac{\mathcal {-M}}{m}, \quad \text {and} \quad \beta (x)=\frac{\mathcal {M}}{m}. \quad \text {Then} \\&\alpha (x)\le \beta (x), \quad \alpha (0)\le A\le \beta (0), \quad \alpha (1)\le B \le \beta (1). \end{aligned}$$

Applying Taylor's theorem, for some point \(\zeta \in (\alpha ,0)\) we obtain:

$$\begin{aligned} g(x,\alpha )= & {} g(x,0)+\alpha g_{v}(x,\zeta ),\\ g(x,\alpha )\le & {} g(x,0) + \alpha m \le \mathcal {M}+m\left( \frac{-\mathcal {M}}{m}\right) \le 0=\varepsilon \alpha ^{\prime \prime }. \end{aligned}$$

Similarly, for an intermediate point \(\eta \in (0,\beta )\),

$$\begin{aligned} g(x,\beta )= & {} g(x,0)+\beta g_{v}(x,\eta ),\\ g(x,\beta )\ge & {} g(x,0) + \beta m \ge -\mathcal {M}+m\left( \frac{\mathcal {M}}{m}\right) \ge 0 =\varepsilon \beta ^{\prime \prime }. \end{aligned}$$

Hence, it follows from Theorem 1 that for each \(\varepsilon >0\) problem (3.1) has a solution \(v(x,\varepsilon )\) on [0, 1] satisfying:

$$\begin{aligned} \frac{-\mathcal {M}}{m} \le v(x,\varepsilon ) \le \frac{\mathcal {M}}{m}. \end{aligned}$$
(3.7)

The uniqueness of the solution of problem (3.1) follows from the maximum principle. \(\square \)

3.1 Stability of degenerate solution

In this subsection we are concerned with the existence and stability of solutions of problem (3.1); however, we restrict attention to stable solutions. Let \(z(x) \in C^{1}[a,b]\) be a solution of the degenerate (reduced) equation \(g(x,z(x))=0\) in \(\Omega =[a,b]\). Then we define

$$\begin{aligned} \phi _{0}(z)=\lbrace (x,v(x)): \vert v(x)-z(x)\vert \le \psi (x), x\in \Omega \rbrace , \end{aligned}$$
(3.8)

where \(\psi (x)\) is defined as

$$\begin{aligned} \psi (x)= {\left\{ \begin{array}{ll} \vert A-z(a)\vert + \varrho &{} \text {for } x\in [a,a+\varrho /2],\\ \varrho &{} \text {for } x\in [a+\varrho ,b-\varrho ],\\ \vert B-z(b)\vert + \varrho &{} \text {for } x \in [b-\varrho /2,b]. \end{array}\right. } \end{aligned}$$
(3.9)

where \(\varrho \) is a small positive constant. If \(A\ge z(a)\) and \(B\ge z(b)\), then we define

$$\begin{aligned} \phi _{1}(z)=\lbrace (x,v(x)): v(x)-z(x) \in [0, \psi (x)], x\in \Omega \rbrace , \end{aligned}$$
(3.10)

Similarly, if \(A\le z(a)\) and \(B\le z(b)\), then

$$\begin{aligned} \phi _{2}(z)=\lbrace (x,v(x)): v(x)-z(x) \in [-\psi (x),0], x\in \Omega \rbrace . \end{aligned}$$
(3.11)

Now we define the stability of a solution of problem (3.1). Let us presume that g(x,v(x)) has the stated number of continuous partial derivatives w.r.t. v(x) in \(\phi _{i}\), \(i=0,1\) or 2, and let \(n\ge 2\) and \(q \ge 0\) be integers.

Definition 3.1

The function \(z=z(x)\) is said to be \(I_{q}\)-stable on \(\Omega \) if \(\exists \) a constant \(m>0\) such that

$$\begin{aligned} \frac{\partial ^{j} g(x,z(x))}{\partial v^{j}} = 0 \quad \forall \quad x\in \Omega ,0\le j\le 2q, \end{aligned}$$
(3.12)

and

$$\begin{aligned} \frac{\partial ^{2q+1} g(x,v)}{\partial v^{2q+1}} \ge m > 0 \quad \text {in} \quad \phi _{0}(z). \end{aligned}$$
(3.13)

Definition 3.2

The function \(z=z(x)\) is said to be \(II_{n}\)-stable on \(\Omega \), where \(A\ge z(a)\) and \(B\ge z(b)\), if \(\exists \) a constant \(m> 0\) such that

$$\begin{aligned} \frac{\partial ^{j} g(x,z(x))}{\partial v^{j}} \ge 0 \quad \forall \quad x\in \Omega ,1\le j\le n-1, \end{aligned}$$
(3.14)

and

$$\begin{aligned} \frac{\partial ^{n} g(x,v)}{\partial v^{n}} \ge m > 0 \quad \text {in} \quad \phi _{1}(z). \end{aligned}$$
(3.15)

Theorem 3.3

Let the reduced equation \(g(x,v(x))=0\) have an \(I_q\)-stable solution \(z=z(x)\in C^2(\Omega )\) in the sense of Definition 3.1. Then \(\exists \) \(\varepsilon _{0}>0\) such that for \(0<\varepsilon <\varepsilon _{0}\) problem (3.1) has a solution \(v(x)=v(x,\varepsilon )\) which satisfies

$$\begin{aligned} \vert v(x)-z(x)\vert \le s_l(x)+s_{r}(x)+C\varepsilon ^{1/(2q+1)}, \end{aligned}$$
(3.16)

where \(s_l\) and \(s_r\) are defined as

$$\begin{aligned} s_l= {\left\{ \begin{array}{ll} \vert A-z(a)\vert \exp (-\sqrt{m/\varepsilon }(x-a)) &{} \text {if } q=0,\\ \vert A-z(a)\vert \left[ 1+\rho \vert A-z(a)\vert ^{q}\varepsilon ^{-1/2}(x-a)\right] ^{-1/q} &{} \text {if } q \ge 1. \end{array}\right. } \end{aligned}$$
(3.17)

And

$$\begin{aligned} s_r= {\left\{ \begin{array}{ll} \vert B-z(b)\vert \exp (-\sqrt{m/\varepsilon }(b-x)) &{} \text {if } q=0,\\ \vert B-z(b)\vert \left[ 1+\rho \vert B-z(b)\vert ^{q}\varepsilon ^{-1/2}(b-x)\right] ^{-1/q} &{} \text {if } q \ge 1. \end{array}\right. } \end{aligned}$$
(3.18)

where \(\rho =\sqrt{mq}[(q+1)(2q+1)!]^{-1/2}.\)

Proof

For a detailed proof see [5]. \(\square \)

Theorem 3.4

Let the reduced equation \(g(x,v(x))=0\) have a \(II_{n}\)-stable solution \(z=z(x)\in C^2(\Omega )\) in the sense of Definition 3.2, such that \(z(a)\le A\), \(z(b) \le B\) and \(z^{\prime \prime } \ge 0\) in (a,b). Then \( \exists \) \(\varepsilon _{0}>0\) such that for \(0<\varepsilon <\varepsilon _{0}\) problem (3.1) has a solution \(v(x)=v(x,\varepsilon )\) which satisfies

$$\begin{aligned} 0 \le v(x)-z(x) \le w_{l}(x)+w_{r}(x)+C\varepsilon ^{\frac{1}{2}}, \end{aligned}$$
(3.19)

where \(w_l\) and \(w_r\) are defined as

$$\begin{aligned} w_{l}(x)=(A-z(a))\left[ 1+ (x-a)(A-z(a))^{\frac{1}{2(n-1)}} \rho _1 / \sqrt{\varepsilon } \right] ^{\frac{-2}{n-1}}, \end{aligned}$$
(3.20)

and

$$\begin{aligned} w_{r}(x)=(B-z(b)) \left[ 1+ (b-x)(B-z(b))^{\frac{1}{2(n-1)}} \rho _1 / \sqrt{\varepsilon } \right] ^{\frac{-2}{n-1}} , \end{aligned}$$
(3.21)

and \(\rho _1\) \( =(n-1)(\frac{m}{2}(m+1)!)^{1/2}\).

Proof

For a detailed proof see [5]. \(\square \)

4 Error Analysis

In this section the error analysis of the proposed method for problem (3.1) is carried out in the maximum norm. Let us assume that \(\varepsilon \le Ch^2\), where C is a positive constant independent of \(\varepsilon \) and h. Let us define the collocation points as \(x_j=x_0+\frac{j}{m}\), \(j=0,1,\cdots ,m\), and \(h_j=x_j-x_{j-1}\) for \(j=1,\cdots ,m\).

First, let us consider the possible cases for g(x,v(x)):

$$\begin{aligned} g(x,v(x))= {\left\{ \begin{array}{ll} f(x,v(x)) ,\\ f(x,v(x))+p(x)v(x) ,\\ f(x,v(x))-p(x)v(x) . \end{array}\right. } \end{aligned}$$
(4.1)

There are only three possible cases associated with g(x,v(x)). In the first case g(x,v(x)) is a nonlinear function; the other two cases arise when a linear part is extracted from g(x,v(x)) with positive or negative sign. Let \(\vert p(x)\vert \le \wp \).

Suppose \(\chi =C[0,1]\) is the Banach space equipped with the norm

$$\begin{aligned} \Vert v \Vert =\max _{\mathop {x\in [0,1]}}\vert v(x) \vert . \end{aligned}$$
(4.2)
Table 1 Comparison between the exact solution and the approximate solution for \(M=6,9\) of Example 5.1 for \(\varepsilon =0.1\)
Table 2 Absolute error at different iterations of Example 5.1 for \(\varepsilon =0.1\)
Fig. 1 Comparison between the exact solution and the numerical solution computed by BCM of Example 5.1 for \(\varepsilon =0.1\)
Fig. 2 Comparison between the exact solution and the numerical solution computed by BCM of Example 5.1 for \(\varepsilon =0.01\)
Fig. 3 Comparison between the exact solution and the numerical solution computed by BCM of Example 5.2 for \(\varepsilon =0.1\)
Fig. 4 Comparison between the exact solution and the numerical solution computed by BCM of Example 5.2 for \(\varepsilon =0.01\)
Table 3 Comparison between the exact solution and the approximate solution for \(M=6, 9, 12\) of Example 5.1 for \(\varepsilon =0.01\)
Table 4 Absolute error at different iterations of Example 5.1 for \(\varepsilon =0.01\)
Table 5 CPU time of Table 6 of Example 5.1 for \(\varepsilon =0.01\)
Table 6 Absolute error at different iterations of Example 5.1 for \(\varepsilon =0.001\)

Theorem 4.1

Let v(x) be the solution of (3.1) and \(g \in C^{\infty }([0,1]\times R)\); then we have the following bound on the derivatives of v(x):

$$\begin{aligned} \vert v^{(i)}(x) \vert \le \vert C(1+\varepsilon ^{-i}e^{-\Im x/\sqrt{\varepsilon }}+\varepsilon ^{-i}e^{\Im (-1+x)/\sqrt{\varepsilon }})\vert , \end{aligned}$$
(4.3)

for \(i=1,2,3\).

Proof

For proof of the above theorem see [14]. \(\square \)

Theorem 4.2

Suppose \(\mathcal {F}\in \chi \), and let \(\mathbf{B} _{m}(\mathcal {F})\) be the sequence of Bernstein approximations of \(\mathcal {F}\) in the basis (2.1), which converges to \(\mathcal {F}\) uniformly. Then for any \(\delta > 0\) \(\exists \) m such that

$$\begin{aligned} \Vert \mathbf{B} _{m}(\mathcal {F})-\mathcal {F}\Vert \le \delta . \end{aligned}$$
(4.4)
Table 7 Comparison between the exact solution and the approximate solution for \(M=6,9,12\) of Example 5.2 for \(\varepsilon =0.1\)
Table 8 Absolute error at different iterations of Example 5.2 for \(\varepsilon =0.1\)
Table 9 Comparison between the exact solution and the approximate solution for \(M=6,9,12\) of Example 5.2 for \(\varepsilon =0.01\)
Table 10 Absolute error at different iterations of Example 5.2
Table 11 Absolute error comparison of the proposed method for Example 5.2 with the neuro-evolutionary model technique [25]
Table 12 Maximum absolute error comparison of the proposed method for Example 5.2 with the patching approach method in [13]
Table 13 Maximum norm error comparison of the proposed method for Example 5.2 with the spline technique [11] on a piecewise uniform mesh
Table 14 Maximum norm error comparison of the proposed method for Example 5.2 with the B-spline collocation method [26] on a piecewise uniform mesh

Proof

For a detailed proof see [23]. \(\square \)

Theorem 4.3

Let \(\mathcal {F}\) be a bounded and continuous function on [0,1] such that \(\mathcal {F}^{\prime \prime }\) exists; then we have the following pointwise error bound

$$\begin{aligned} \vert \mathbf{B} _{m}(\mathcal {F})(x)-\mathcal {F}(x)\vert \le \frac{1}{2m}x(1-x) \Vert \mathcal {F}^{''} \Vert . \end{aligned}$$
(4.5)

Proof

For a detailed proof see [18]. \(\square \)
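Theorem 4.3 can be illustrated numerically. The sketch below (my own check, not taken from [18]) applies the Bernstein operator \(\mathbf{B}_m(\mathcal{F})(x)=\sum _{i=0}^m \mathcal{F}(i/m)\mathbf{B}_{i,m}(x)\) to the test function \(\mathcal{F}(x)=x^3\), for which \(\Vert \mathcal{F}''\Vert =6\) on [0,1], and confirms the pointwise bound \(x(1-x)\Vert \mathcal{F}''\Vert /(2m)\) on a grid.

```python
from math import comb

def bernstein_op(f, m, x):
    # Bernstein operator: B_m(f)(x) = sum_i f(i/m) C(m,i) x^i (1-x)^(m-i)
    return sum(f(i / m) * comb(m, i) * x**i * (1 - x)**(m - i)
               for i in range(m + 1))

def check_bound(f, f2_norm, m, xs):
    # True if |B_m(f)(x) - f(x)| <= x(1-x) ||f''|| / (2m) on the grid,
    # up to a small floating-point slack
    return all(abs(bernstein_op(f, m, x) - f(x))
               <= x * (1 - x) * f2_norm / (2 * m) + 1e-12 for x in xs)
```

For \(f(x)=x^3\) the Bernstein error is known in closed form, so the bound can be verified for several values of m.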

Theorem 4.4

Let v be the exact solution and \(v_m\) denote the approximate solution obtained by the BCM. Suppose the nonlinear function g(x,v) satisfies the Lipschitz condition

$$\begin{aligned} \vert g(x,v)-g(x,v^{\star })\vert \le \mathcal {L}\vert v-v^{\star }\vert , \end{aligned}$$
(4.6)

then the error bound for the BCM is given as:

$$\begin{aligned} \Vert v-v_m\Vert \le \frac{\mathcal {L}\wp }{8m}\Vert v^{''}\Vert . \end{aligned}$$
(4.7)

where \(\mathcal {L}\) is the Lipschitz constant.

Proof

Let

$$\begin{aligned} \Vert v-v_m\Vert = \max _{\mathop {x\in [0,1]}}\vert g(x,v(x))-g(x,v_{m}(x)) \vert . \end{aligned}$$
(4.8)

Case 1 When \(g(x,v(x))=f(x,v(x))\), then

$$\begin{aligned} \Vert v-v_m\Vert = \max _{\mathop {x\in [0,1]}}\vert f(x,v(x))-f(x,v_{m}(x)) \vert , \end{aligned}$$
(4.9)

Now, using the Lipschitz condition (4.6),

$$\begin{aligned} \Vert v-v_m\Vert \le \mathcal {L}\max _{\mathop {x\in [0,1]}} \vert v(x)-v_{m}(x) \vert , \end{aligned}$$
(4.10)

Since v(x) is approximated by its Bernstein polynomial \(\mathbf{B} _{m}(v)\), we have

$$\begin{aligned} \Vert v-v_m\Vert \le \mathcal {L}\max _{\mathop {x\in [0,1]}}\vert v(x)-\mathbf{B} _{m}(v)(x) \vert . \end{aligned}$$
(4.11)

Now from Theorem 4.3, we have

$$\begin{aligned} \Vert v-v_m\Vert\le & {} \frac{ \mathcal {L}}{2m} \max _{\mathop {x\in [0,1]}} \vert x(1-x)\vert \Vert v^{''} \Vert ,\end{aligned}$$
(4.12)
$$\begin{aligned}\le & {} \frac{ \mathcal {L}}{8m}\Vert v^{''} \Vert , \end{aligned}$$
(4.13)

Case 2 When \(g(x,v(x))=f(x,v(x))+p(x)v(x)\), then

$$\begin{aligned} \Vert v-v_m\Vert= & {} \max _{\mathop {x\in [0,1]}}\vert f(x,v(x))+p(x)v(x)-f(x,v_{m}(x))-p(x)v_{m}(x) \vert , \end{aligned}$$
(4.14)
$$\begin{aligned}= & {} \max _{\mathop {x\in [0,1]}}\vert f(x,v(x))-f(x,v_{m}(x))+p(x)(v(x)-v_{m}(x)) \vert , \end{aligned}$$
(4.15)
$$\begin{aligned}\le & {} \max _{\mathop {x\in [0,1]}}\vert f(x,v(x))-f(x,v_{m}(x))\vert +\max _{\mathop {x\in [0,1]}} \vert p(x)\vert \vert (v(x)-v_{m}(x)) \vert ,\nonumber \\ \end{aligned}$$
(4.16)

using Lipschitz condition (4.6)

$$\begin{aligned} \Vert v-v_m\Vert \le \mathcal {L} \left[ \max _{\mathop {x\in [0,1]}}\vert v(x)-v_{m}(x)\vert +\max _{\mathop {x\in [0,1]}} \vert p(x)\vert \vert v(x)-v_{m}(x) \vert \right] . \end{aligned}$$
(4.17)

The rest of the proof is straightforward: using (4.11) and (4.12)-(4.13), we obtain the following bound

$$\begin{aligned} \Vert v-v_m\Vert \le \frac{ \mathcal {L}\wp }{8m}\Vert v^{''} \Vert , \end{aligned}$$
(4.18)

Case 3 When \(g(x,v(x))=f(x,v(x))-p(x)v(x)\), the proof is similar and we have

$$\begin{aligned} \Vert v-v_m\Vert \le \frac{ \mathcal {L}\wp }{8m}\Vert v^{''} \Vert . \end{aligned}$$
(4.19)

\(\square \)

Theorem 4.5

Let v be the exact solution of problem (3.1) and \(v_m\) the approximate solution evaluated by the BCM. Then we have the following error bound

$$\begin{aligned} \Vert v-v_m\Vert \le Ch^2. \end{aligned}$$
(4.20)

where C is a constant independent of \(\varepsilon \) and h.

Proof

From Theorem 4.4 we have the following bound,

$$\begin{aligned} \Vert v-v_m\Vert \le Ch\Vert v^{''}\Vert . \end{aligned}$$
(4.21)

From Theorem 4.1 we have the following bound,

$$\begin{aligned} \Vert v-v_m\Vert\le & {} C(1+\varepsilon ^{-2}e^{-\Im x/\sqrt{\varepsilon }}+\varepsilon ^{-2}e^{\Im (-1+x)/\sqrt{\varepsilon }}), \end{aligned}$$
(4.22)
$$\begin{aligned}\le & {} C(h^2+e^{-\Im x/\sqrt{\varepsilon }}+e^{\Im (-1+x)/\sqrt{\varepsilon }}). \end{aligned}$$
(4.23)

Now \(e^{-\Im x/\sqrt{\varepsilon }}\ge e^{\Im (-1+x)/\sqrt{\varepsilon }}\) for \(x\in [0,1/2]\), and \(e^{-\Im x/\sqrt{\varepsilon }}\le e^{\Im (-1+x)/\sqrt{\varepsilon }}\) for \(x\in [1/2,1]\). By symmetry we omit the function \(e^{\Im (-1+x)/\sqrt{\varepsilon }}\) and carry out the analysis for \(x\in [0,1/2]\). Now using the assumption \(\varepsilon \le Ch^2\) and the inequality \(1-e^{-x} \le x \) \(\forall x>0\), the proof is straightforward and we obtain the following bound

$$\begin{aligned} \Vert v-v_m \Vert \le Ch^2. \end{aligned}$$
(4.24)

\(\square \)

5 Numerical results and discussion

This section analyzes the proposed method's efficiency by implementing the BCM on nonlinear singularly perturbed reaction-diffusion test problems. The approximate solution obtained by the proposed method is compared with the spline technique [11], the B-spline collocation method [26], a patching approach based on a novel combination of the variational iteration and cubic spline collocation methods [13], and a neuro-evolutionary artificial intelligence technique [25].

Example 5.1

Consider the following nonlinear singularly perturbed problem, used as a model of the Michaelis-Menten process; the model takes the form of an equation describing the rate of an enzymatic reaction in biology [20].

$$\begin{aligned} -\varepsilon v^{\prime \prime }(x)-\frac{v(x)-1}{2-v(x)}+f(x)=0, \qquad v(0)=v(1)=0. \end{aligned}$$
(5.1)

The function f(x) in the above problem is calculated so that the exact solution is \(v(x)=1-\frac{e^{\frac{-x}{\sqrt{\varepsilon }}}+e^{\frac{-(1-x)}{\sqrt{\varepsilon }}}}{1+e^{\frac{-1}{\sqrt{\varepsilon }}}}\). The approximate solution obtained by the BCM and the exact solution of Example 5.1 for different values of \(\varepsilon \) are given in Tables 1 and 3. The absolute error calculated for Example 5.1 is given in Tables 2 and 6.

Example 5.2

Consider the following nonlinear singularly perturbed problem from [11, 13, 25, 26]. This problem describes a mathematical model of an adiabatic tubular chemical reactor that processes an irreversible exothermic chemical reaction, where \(\varepsilon \) represents the dimensionless adiabatic temperature. In fact, the steady-state temperature of the reaction corresponds to a positive solution v.

$$\begin{aligned} {\left\{ \begin{array}{ll} -\varepsilon v^{\prime \prime }+v+v^{2}=e^{\frac{-2x}{\sqrt{\varepsilon }}}\\ v(0)=1, \quad v(1)=e^{\frac{-1}{\sqrt{\varepsilon }}}. \end{array}\right. } \end{aligned}$$
(5.2)

The exact solution of the above problem is given as \(v(x)=e^{\frac{-x}{\sqrt{\varepsilon }}}\). The approximate solution obtained by the BCM and the exact solution of Example 5.2 for different values of \(\varepsilon \) are given in Tables 7 and 9. The absolute error calculated for Example 5.2 is given in Tables 8 and 10. We have compared the error obtained by the BCM for Example 5.2 with the spline technique [11], the B-spline collocation method [26], the patching approach based on a novel combination of the variational iteration and cubic spline collocation methods [13], and the neuro-evolutionary artificial intelligence technique [25] in Tables 13, 14, 12, and 11, respectively.
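As a sanity check (added here for illustration, not part of the paper), one can verify that \(v(x)=e^{-x/\sqrt{\varepsilon }}\) satisfies (5.2): since \(v''=v/\varepsilon \), the left side reduces to \(v^2=e^{-2x/\sqrt{\varepsilon }}\), and the boundary values match. The short sketch below confirms the residual numerically.

```python
from math import exp, sqrt

def residual(x, eps):
    # residual of -eps*v'' + v + v^2 - exp(-2x/sqrt(eps)) for v = exp(-x/sqrt(eps))
    v = exp(-x / sqrt(eps))
    v2 = v / eps               # exact second derivative of v
    return -eps * v2 + v + v * v - exp(-2 * x / sqrt(eps))
```

The residual vanishes identically up to floating-point rounding, for any \(\varepsilon >0\).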

Figures 1 and 2 depict the comparison between the exact solution and the approximate solution obtained from the proposed method for Example 5.1 with \(\varepsilon =0.1 \) and 0.01, respectively, and Figs. 3 and 4 depict the same comparison for Example 5.2 with \(\varepsilon =0.1\) and 0.01, respectively. It is observed in all figures that as m increases the approximate solution converges to the exact solution, which demonstrates the convergence of our proposed method.

Table 15 Absolute error at different iterations of Example 5.3 for \(\varepsilon =1\)
Table 16 Absolute error at different iterations of Example 5.3 for \(\varepsilon =0.1\)

Example 5.3

Consider the following nonlinear singularly perturbed problem:

$$\begin{aligned} {\left\{ \begin{array}{ll} -\varepsilon v^{\prime \prime }+v+(v+1)^3=-1\\ v(0)=0, \quad v(1)=0. \end{array}\right. } \end{aligned}$$
(5.3)

The exact solution of this problem is not known, so to calculate the error we take a reference solution computed using \(m=20\). The approximate solution obtained by the BCM and the corresponding absolute errors for Example 5.3 are given in Tables 15 and 16.
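To indicate how the BCM is applied in practice, the following self-contained sketch (my own illustrative implementation, not the authors' code) solves Example 5.3 for \(\varepsilon =1\): the approximation \(v_m(x)=\sum _i c_i \mathbf{B}_{i,m}(x)\) is collocated at \(x_j=j/m\), the boundary rows enforce \(v(0)=v(1)=0\), and the resulting nonlinear algebraic system is solved by Newton's method with a dense Gaussian-elimination solve, i.e. without any prior linearization of the problem.

```python
from math import comb

def B(i, n, x):
    # Bernstein basis; out-of-range indices are treated as zero
    return comb(n, i) * x**i * (1 - x)**(n - i) if 0 <= i <= n else 0.0

def B2(i, m, x):
    # second derivative: m(m-1)[B_{i-2,m-2} - 2 B_{i-1,m-2} + B_{i,m-2}]
    return m * (m - 1) * (B(i - 2, m - 2, x) - 2 * B(i - 1, m - 2, x)
                          + B(i, m - 2, x))

def solve_linear(A, b):
    # Gaussian elimination with partial pivoting
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def bcm_example_53(eps=1.0, m=6, iters=20):
    # solve -eps*v'' + v + (v+1)^3 = -1, v(0)=v(1)=0 by Bernstein collocation
    xs = [j / m for j in range(m + 1)]
    c = [0.0] * (m + 1)                      # initial Newton guess
    for _ in range(iters):
        R, J = [], []
        for j, x in enumerate(xs):
            phi = [B(i, m, x) for i in range(m + 1)]
            v = sum(ci * p for ci, p in zip(c, phi))
            if j in (0, m):                  # boundary rows: v = 0
                R.append(v)
                J.append(phi)
            else:                            # interior rows: ODE residual
                v2 = sum(c[i] * B2(i, m, x) for i in range(m + 1))
                R.append(-eps * v2 + v + (v + 1) ** 3 + 1)
                J.append([-eps * B2(i, m, x) + (1 + 3 * (v + 1) ** 2) * phi[i]
                          for i in range(m + 1)])
        d = solve_linear(J, [-r for r in R])
        c = [ci + di for ci, di in zip(c, d)]
        if max(abs(r) for r in R) < 1e-12:
            break
    return c, xs
```

With m = 6 the Newton iteration converges in a few steps; a reference solution, as used for Tables 15 and 16, can be obtained by repeating the run with a larger m.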

6 Conclusion

In this article, we have successfully implemented the Bernstein collocation method for solving SPNRD problems. To solve nonlinear problems, one often uses a quasi-linearization technique to linearize the problem and then solves the linearized problem by numerical or other existing techniques. Due to the linearization of the nonlinear problem, the approximate solution's accuracy somewhat degenerates, which may sometimes lead to deceptive solutions. Here we address this issue and solve the nonlinear problem without linearization. The proposed method is easy to implement, as it reduces complex nonlinear problems to a system of algebraic equations, and it can be extended to a more general class of problems. The method yields a high level of precision using only low-degree polynomials, without any limiting assumptions.