1 Introduction

Twelfth-order boundary-value problems arise in many branches of science. For example, when an infinite horizontal layer of fluid is heated from below, subject to a uniform magnetic field acting across the fluid in the same direction, and the fluid undergoes rotation, instability sets in. When the instability sets in as ordinary convection, it is modelled by tenth-order boundary-value problems, and when it sets in as overstability, it is modelled by twelfth-order boundary-value problems [5, 6, 8, 9].

Agarwal's book [2] contains detailed theorems on the existence and uniqueness of solutions of general higher-order boundary-value problems.

Several numerical methods have been developed for solving higher-order boundary-value problems, such as non-polynomial splines [26, 28], the modified Adomian decomposition method [3, 31], the Legendre operational matrix method [25], quintic B-spline collocation [30], iterative methods [29], Chebyshev polynomial solutions [10], the differential transform technique [24] and the variational iteration method [23]. Each of these methods has its own advantages and disadvantages, so the search for more effective, general and accurate numerical techniques continues.

In recent years, much attention has been devoted to the Euler method for investigating various scientific models. The efficiency of the method has been formally established by many researchers [4, 16, 18,19,20]. Euler methods for ordinary differential equations have many salient features owing to the properties of the basis functions and the manner in which the problem is discretized: the approximating discrete system depends only on the parameters of the differential equation.

The aim of this paper is to develop an Euler collocation method for the numerical solution of the following class of linear and nonlinear higher-order boundary-value problems:

$$\begin{aligned} u^{(2r)}(x)+\sum _{m=0}^{2r-1}P_{m}(x)u^{(m)}(x)=\sigma (x,u(x)), \quad 0\le x \le 1,\quad r=5,6. \end{aligned}$$
(1.1)

subject to the boundary conditions

$$\begin{aligned} u^{(i)}(0)=\alpha _{i}, \quad u^{(i)}(1)=\beta _{i},\quad i=0,1,2,\ldots ,r-1, \end{aligned}$$
(1.2)

where \(\sigma (x,u)\), \(P_{m}(x)\) and u(x) are continuous functions in \(L^{2}(0,1)\).

The organization of the paper is as follows. In Sect. 2, Euler polynomials and the properties that will be needed later are introduced. In Sect. 3, the Euler method is developed for linear and nonlinear higher-order boundary-value problems. Error analysis of the method is presented in Sect. 4, and some numerical examples are given in Sect. 5. Finally, Sect. 6 provides the conclusions of the study.

2 Preliminaries and fundamental relations

2.1 Euler polynomials

Euler numbers and polynomials were first introduced by Euler in 1740 and, like other classical polynomials, they have a substantial literature and applications in number theory [7, 27, 32]. They are closely related to Bernoulli polynomials [13, 15], and the classical Euler polynomials \(E_{n}(x)\) can be defined by means of the exponential generating function

$$\begin{aligned} \frac{2e^{t x}}{e^{t}+1}=\sum _{n=0}^{\infty }E_{n}(x) \frac{t^{n}}{n!}, \quad (|t|\le \pi ), \end{aligned}$$
(2.1)

which also satisfy the following interesting properties:

$$\begin{aligned}&E'_n(x)=n\,E_{n-1}(x),\quad n=1,2,\ldots \end{aligned}$$
(2.2)
$$\begin{aligned}&\sum _{k=0}^{n}\left( {\begin{array}{c}n\\ k\end{array}}\right) E_{k}(x)+E_{n}(x)=2x^{n}, \end{aligned}$$
(2.3)
$$\begin{aligned}&\int _{0}^{1}E_{n}(x)dx=\frac{-2 n!}{(n+1)!}E_{n+1}(0), \end{aligned}$$
(2.4)

with \(E_0(x)=1\), where \(\left( {\begin{array}{c}n\\ k\end{array}}\right) \) is a binomial coefficient. Explicitly, the first few Euler polynomials are

$$\begin{aligned} E_1(x)=x-\frac{1}{2},\quad E_2(x)=x^{2}-x,\quad E_3(x)=x^{3}-\frac{3}{2}x^{2}+\frac{1}{4}. \end{aligned}$$
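Property (2.3) rearranges to \(E_n(x)=x^{n}-\tfrac{1}{2}\sum _{k<n}\binom{n}{k}E_k(x)\), which generates the polynomials above. As an illustration (our own sketch, not part of the paper's implementation), the recurrence can be coded with exact rational arithmetic:

```python
from fractions import Fraction
from math import comb

def euler_polynomials(N):
    """Ascending power-basis coefficients of E_0(x), ..., E_N(x),
    built from property (2.3): sum_{k=0}^{n} C(n,k) E_k(x) + E_n(x) = 2 x^n,
    i.e. E_n(x) = x^n - (1/2) sum_{k<n} C(n,k) E_k(x)."""
    E = [[Fraction(1)]]                      # E_0(x) = 1
    for n in range(1, N + 1):
        coeffs = [Fraction(0)] * (n + 1)
        coeffs[n] = Fraction(1)              # leading x^n term
        for k in range(n):
            c = Fraction(comb(n, k), 2)
            for j, a in enumerate(E[k]):
                coeffs[j] -= c * a           # subtract (1/2) C(n,k) E_k(x)
        E.append(coeffs)
    return E

E = euler_polynomials(3)
# E_3(x) = x^3 - (3/2) x^2 + 1/4  ->  coefficients [1/4, 0, -3/2, 1]
print(E[3])
```

Running this reproduces exactly the polynomials \(E_1, E_2, E_3\) listed above.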

Figure 1 shows the behavior of the first few Euler polynomials on the interval \([-1,2]\).

Fig. 1 Behaviour of Euler polynomials

Euler polynomials \(E_{n}(x)\) are also related to Bernoulli polynomials through the formula

$$\begin{aligned} E_{n}(x)=\frac{1}{n+1}\sum _{k=1}^{n+1}\left( 2-2^{k+1}\right) \left( {\begin{array}{c}n+1\\ k\end{array}}\right) B_{k}(0)\,x^{n+1-k}, \quad n=0,1,\ldots \end{aligned}$$
(2.5)

where \(B_{k}(x), k=0,1,\ldots \), are the Bernoulli polynomials of order k, which satisfy the well-known relations [1, 14, 22]

$$\begin{aligned} B_{n}^{\prime }(x)=nB_{n-1}(x), \quad \int _{0}^{1}B_{n}(x)dx=0, \quad n=1,2,\ldots , \end{aligned}$$
$$\begin{aligned} \sum _{k=0}^{n}\left( {\begin{array}{c}n+1\\ k\end{array}}\right) B_{k}(x)=(n+1)x^{n}. \end{aligned}$$
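Relation (2.5) can be verified directly: the Bernoulli numbers \(B_k(0)\) follow from the recurrence above, and substituting them into (2.5) reproduces the Euler polynomials listed earlier. The following is an illustrative sketch of our own, not code from the paper:

```python
from fractions import Fraction
from math import comb

def bernoulli_numbers(N):
    """B_k = B_k(0) via the recurrence sum_{j=0}^{m} C(m+1,j) B_j = 0."""
    B = [Fraction(1)]
    for m in range(1, N + 1):
        B.append(-sum(comb(m + 1, j) * B[j] for j in range(m))
                 / Fraction(m + 1))
    return B

def euler_from_bernoulli(n):
    """Ascending power-basis coefficients of E_n(x) via Eq. (2.5)."""
    B = bernoulli_numbers(n + 1)
    coeffs = [Fraction(0)] * (n + 1)
    for k in range(1, n + 2):
        # term (2 - 2^{k+1}) C(n+1,k) B_k(0) x^{n+1-k} / (n+1)
        coeffs[n + 1 - k] += (Fraction(2 - 2 ** (k + 1), n + 1)
                              * comb(n + 1, k) * B[k])
    return coeffs

print(euler_from_bernoulli(3))  # coefficients of x^3 - (3/2) x^2 + 1/4
```

The output agrees with the explicit expression for \(E_3(x)\) given in Sect. 2.1.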

For the approximation of an unknown function, the Euler basis \(E_{n}(x)\) has several advantages: it requires fewer basis functions than many alternative methods, and its implementation is simple and straightforward. Next, we introduce the differentiation matrix of these polynomials, which will be needed later.

2.2 Euler operational matrix of differentiation

Euler polynomials enjoy many interesting relations and properties, as mentioned in Sect. 2.1; the most important for our purposes is the derivative relation, which is used for solving different types of boundary-value problems. We introduce a technique based on the Euler approximation to the solution of Eqs. (1.1)–(1.2), expressed in the truncated Euler series form

$$\begin{aligned} u_N(x)=\sum _{n=0}^N\,c_n\,E_n(x)=\mathbf E (x)\mathbf c \end{aligned}$$
(2.6)

where \(\left\{ c_n\right\} _{n=0}^N \) are the unknown Euler coefficients, N is any chosen positive integer such that \(N\ge 2r\), and \(E_n(x)\), \( n=0,1,\ldots ,N\), are the Euler polynomials constructed according to Eq. (2.5) and its related properties. The Euler coefficient vector \(\mathbf c \) and the Euler vector \(\mathbf E (x)\) are given by

$$\begin{aligned} \mathbf c ^t=[c_0,c_1,\ldots ,c_N],\quad \mathbf E (x)=[E_0(x),E_1(x),\ldots ,E_N(x)]. \end{aligned}$$

According to the expansion in Eq. (2.5) and its properties, we find

$$\begin{aligned} \underbrace{ \left[ \begin{array}{c} E_{0}(x) \\ E_{1}(x) \\ \vdots \\ E_{N}(x) \\ \end{array} \right] }_\mathbf{E ^{t}(x)} =\underbrace{\left[ \begin{array}{cccc} \frac{(2-2^2)}{1}\left( {\begin{array}{c}1\\ 1\end{array}}\right) B_1(0) &{} 0&{} \ldots &{} 0 \\ \frac{(2-2^3)}{2}\left( {\begin{array}{c}2\\ 2\end{array}}\right) B_2(0) &{} \frac{(2-2^2)}{2} \left( {\begin{array}{c}2\\ 1\end{array}}\right) B_1(0) &{} \ldots &{} 0 \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ \frac{(2-2^{N+2})}{N}\left( {\begin{array}{c}N\\ N\end{array}}\right) B_N(0) &{} \frac{(2-2^{N+1})}{N}\left( {\begin{array}{c}N\\ N-1\end{array}}\right) B_{N-1}(0) &{} \ldots &{} \frac{(2-2^2)}{N} \left( {\begin{array}{c}N\\ 1\end{array}}\right) B_1(0) \\ \end{array} \right] }_\mathbf{B } \underbrace{\left[ \begin{array}{c} 1 \\ x \\ \vdots \\ x^{N} \\ \end{array} \right] }_\mathbf{X ^{t}(x)}. \end{aligned}$$
(2.7)

Since \(\mathbf B \) is a lower triangular matrix with nonzero diagonal elements, it is nonsingular and \(\mathbf B ^{-1}\) exists. Thus, the Euler vector can be described directly by

$$\begin{aligned} \mathbf E (x)=\mathbf X (x)\,\mathbf B ^{t} \end{aligned}$$
(2.8)

Note that \( \left[ \quad \right] ^t \) denotes the transpose of the matrix [ ], where \(\mathbf E ^t(x)\) and \(\mathbf X ^t(x)\) are \((N+1)\times 1\) vectors and \(\mathbf B \) is the \((N+1)\times (N+1)\) operational matrix. The matrix form of the solution is then

$$\begin{aligned} u_{N}(x)=\mathbf X (x)\,\mathbf B ^t\,\mathbf c . \end{aligned}$$
(2.9)

The relation between the matrix \(\mathbf X (x)\) and its derivative \(\mathbf X ^{(1)}(x)\) is

$$\begin{aligned} \mathbf X ^{(1)}(x)=\underbrace{\left[ 1,x,x^2,\ldots ,x^N \right] }_\mathbf X (x) \,\underbrace{\left[ \begin{array}{ccccc} 0 &{} 1&{} 0&{}\ldots &{} 0 \\ 0 &{} 0&{} 2&{}\ldots &{} 0 \\ \vdots &{} \vdots &{}\vdots &{} \ddots &{} \vdots \\ 0 &{} 0&{} 0&{}\ldots &{} N \\ 0 &{} 0&{} 0&{}\ldots &{} 0 \\ \end{array} \right] }_\mathbf{M }, \end{aligned}$$
(2.10)

from which we conclude that

$$\begin{aligned} \mathbf X ^{(i)}(x)=\mathbf X (x)\,\mathbf M ^i,\quad i=0,1,\ldots , 2r \end{aligned}$$
(2.11)

where \(\mathbf X ^{(i)}(x)\) denotes the ith derivative of \(\mathbf X (x)\). Hence,

$$\begin{aligned} u_{N}^{(i)}(x)=\mathbf X (x)\, \mathbf M ^i\, \mathbf B ^t\, \mathbf c ,\quad i=0,1,\ldots ,2r, \end{aligned}$$
(2.12)
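As a quick numerical sanity check of Eqs. (2.8)–(2.12) (our own illustration, not the paper's code), one can build \(\mathbf B \) and \(\mathbf M \) for a small N and compare the matrix-based derivative of \(u_N\) with a finite difference:

```python
import numpy as np
from math import comb

N = 5
# Row n of B holds the ascending power-basis coefficients of E_n(x),
# so that E(x) = X(x) B^t as in Eq. (2.8); built from property (2.3).
B = np.zeros((N + 1, N + 1))
B[0, 0] = 1.0
for n in range(1, N + 1):
    B[n, n] = 1.0
    for k in range(n):
        B[n, :] -= 0.5 * comb(n, k) * B[k, :]

# Differentiation matrix M of Eq. (2.10): X'(x) = X(x) M.
M = np.diag(np.arange(1.0, N + 1), k=1)

rng = np.random.default_rng(0)
c = rng.standard_normal(N + 1)             # arbitrary Euler coefficients
x, h = 0.3, 1e-5
powers = np.arange(N + 1)
du = (x ** powers) @ M @ B.T @ c           # u'_N(x) via Eq. (2.12), i = 1
du_fd = ((((x + h) ** powers) @ B.T @ c)   # central finite difference
         - (((x - h) ** powers) @ B.T @ c)) / (2 * h)
assert abs(du - du_fd) < 1e-6              # operational matrix agrees
```

Higher derivatives follow by replacing \(\mathbf M \) with \(\mathbf M ^i\).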

Next, we will illustrate the use of these polynomials along with their operational matrix of derivative for solving Eq. (1.1).

3 Application of Euler matrix collocation method

We illustrate our method, based on Eq. (1.1), in two different cases.

3.1 Linear high-order BVP

In this case we set the source term \(\sigma (x,u)=f(x)\) in Eq. (1.1); the equation then becomes

$$\begin{aligned} u^{(2r)}(x)+\sum _{m=0}^{2r-1}P_{m}(x)u^{(m)}(x)=f(x), \quad 0\le x \le 1 ,\quad r=5,6. \end{aligned}$$
(3.1)

Let us seek an approximation of (3.1) expressed in terms of Euler polynomials as

$$\begin{aligned} u(x)\approx u_N(x)=\sum _{n=0}^N\,c_n\,E_n(x)=\mathbf E (x)\,\mathbf c \end{aligned}$$

where the Euler coefficient vector \(\mathbf c \) and the Euler vector \(\mathbf E (x)\) are given by

$$\begin{aligned} \mathbf c ^t=[c_0,c_1,\ldots ,c_N],\quad \mathbf E (x)=[E_0(x),E_1(x),\ldots ,E_N(x)], \end{aligned}$$

then the ith derivative of \(u_N(x)\) can be expressed in matrix form as

$$\begin{aligned} u_N^{(i)}(x)=\mathbf E ^{(i)}(x)\,\mathbf c =\mathbf X (x)\,\mathbf M ^{i}{} \mathbf B ^t\,\mathbf c ,\quad i=1,2,\ldots ,2r. \end{aligned}$$
(3.2)

To obtain the approximate solution, we first express each term of Eq. (3.1) in matrix form. The nonhomogeneous term f(x) of Eq. (3.1) can be approximated as

$$\begin{aligned} f(x)\approx \sum _{n=0}^N\,f_n\,E_n(x)=\mathbf E (x)\mathbf F =\mathbf X (x)\,\mathbf B ^t\,\mathbf F , \end{aligned}$$
(3.3)

where \(\mathbf F = [f_0, f_1, \ldots , f_N]^t\) and the \(f_j\), \(j = 0, 1, \ldots ,N\), can be calculated from the backward linear relation [17]

$$\begin{aligned} f_N=\frac{1}{N!}\int _0^1 f^{(N)}(x)dx,\quad j=N, \end{aligned}$$
(3.4)

and

$$\begin{aligned}&\displaystyle f_j=\frac{\int _0^1 f^{(j)}(x)dx+\sum _{k=0}^{N-j-1}\frac{2j!}{k+2}\left( {\begin{array}{c}k+j+1\\ k+1\end{array}}\right) E_{k+2}(0)f_{j+k+1}}{j!},\nonumber \\&\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \displaystyle j=0,1,\ldots ,N-2,N-1. \end{aligned}$$
(3.5)

Replacing each term of (3.1) with its approximation defined in (2.8), (2.11) and (3.3), respectively, and substituting the collocation points \( x = x_k\) defined by

$$\begin{aligned} x_k=\frac{1}{N}\,k,\quad k=0,1,\ldots ,N, \end{aligned}$$

we reach the following theorem.

Theorem 3.1

If the assumed approximate solution of the boundary-value problem (3.1), (1.2) is of the form (2.6), then the discrete Euler system for the determination of the unknown coefficients \(\left\{ c_n\right\} _{n=0}^N\) is given by

$$\begin{aligned} \sum _{n=0}^N\,c_n\,E_n^{(2r)}(x_k)+\sum _{m=0}^{2r-1} \sum _{n=0}^N\,c_n\,P_{m}(x_k)\,\,E_n^{(m)}(x_k) =\sum _{n=0}^N\,f_n\,E_n(x_k). \end{aligned}$$
(3.6)

Proof

Replacing each term of (3.1) with its corresponding approximation given by (2.6), (3.2) and (3.3), substituting the collocation points \( x = x_k\) defined in Sect. 2.2, and applying the collocation conditions yields the system (3.6). \(\square \)

Equation (3.6) can be written in matrix form

$$\begin{aligned} \mathbf W \,\mathbf c =\mathbf F \end{aligned}$$
(3.7)

where

$$\begin{aligned} \mathbf W =\left[ \mathbf M ^{2r}+\sum _{m=0}^{2r-1}{} \mathbf P _m\, \mathbf M ^{m}\right] \end{aligned}$$

and

$$\begin{aligned} \mathbf P _m=\left( \begin{matrix} p_m(x_0) &{} \quad 0 &{}\quad \ldots &{}\quad 0\\ 0 &{}\quad p_m(x_1) &{}\quad \ldots &{}\quad 0\\ \vdots &{}\quad \vdots &{}\quad \ddots &{}\quad \vdots \\ 0 &{}\quad 0 &{}\quad \ldots &{}\quad p_m(x_N)\\ \end{matrix} \right) . \end{aligned}$$

The matrix representation of the boundary conditions (1.2), can be written as

$$\begin{aligned} \begin{aligned} \mathbf M ^{i}{} \mathbf E (0)\,\mathbf c&=\alpha _{i},\\ \mathbf M ^{i}{} \mathbf E (1)\,\mathbf c&=\beta _{i},\quad i=0,1,2,\ldots ,r-1. \end{aligned} \end{aligned}$$
(3.8)

The boundary conditions (3.8) can be written compactly as

$$\begin{aligned} \Theta \,\mathbf c =\tilde{\mathbf{F }} \end{aligned}$$
(3.9)

To obtain the solution of Eq. (3.1) subject to the conditions (1.2), we replace the last 2r rows of the matrix (3.7) by the row matrices (3.9) and obtain the new augmented matrix

$$\begin{aligned} \left[ \Theta ,\tilde{\mathbf{F }}\right] = \left[ \begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} w_{00} &{} w_{01} &{} \cdots &{} w_{0N} &{} \vdots &{} f_0 \\ w_{10} &{} w_{11} &{} \cdots &{} w_{1N} &{} \vdots &{} f_1 \\ w_{20} &{} w_{21} &{} \cdots &{} w_{2N} &{} \vdots &{} f_2 \\ \vdots &{} \vdots &{} \cdots &{} \vdots &{} \vdots &{} \vdots \\ w_{N-2r,0} &{} w_{N-2r,1} &{} \cdots &{} w_{N-2r,N} &{} \vdots &{} f_{N-2r} \\ u_{00} &{} u_{01} &{} \cdots &{} u_{0N} &{} \vdots &{} \alpha _0 \\ \vdots &{} \vdots &{} \cdots &{} \vdots &{} \vdots &{} \vdots \\ u_{2r-1,0} &{} u_{2r-1,1} &{} \cdots &{} u_{2r-1,N} &{} \vdots &{} \beta _{r-1} \\ \end{array} \right] . \end{aligned}$$
(3.10)

Now we have a linear system of \(N + 1\) equations in the \(N + 1\) unknown coefficients, which can be computed by solving this system. Among the numerous methods for solving the system (3.10), the QR method is used in this paper. The solution yields the coefficient vector \(\mathbf c \) of the approximate Euler solution. The algorithm for the proposed method is listed below.

Algorithm

  • Input number of iterations N,

  • Enter \(x=x_{k}\) collocation points,

  • Calculate \(f_{j}\) from Eqs. (3.4)–(3.5),

  • Formulate the system \(\Theta \,\mathbf c =\tilde{\mathbf{F }}\),

  • Solve the system using the QR method to find \(\mathbf c \),

  • End.
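To make the assembly above concrete, the sketch below applies the same recipe to a second-order analogue (a tenth-order run would be lengthy to display): collocation rows \(\mathbf X (x_k)\,\mathbf M ^{2}\,\mathbf B ^t\), boundary-row replacement, and a QR-based least-squares solve. The model problem, point choice and tolerance are our own illustrative assumptions, not taken from the paper:

```python
import numpy as np
from math import comb

def euler_B(N):
    """Row n = ascending coefficients of E_n(x), so E(x) = X(x) B^t."""
    B = np.zeros((N + 1, N + 1))
    B[0, 0] = 1.0
    for n in range(1, N + 1):
        B[n, n] = 1.0
        for k in range(n):
            B[n, :] -= 0.5 * comb(n, k) * B[k, :]
    return B

# model problem: u''(x) = -pi^2 sin(pi x), u(0) = u(1) = 0,
# exact solution u(x) = sin(pi x)   (illustrative choice)
N = 12
B = euler_B(N)
M = np.diag(np.arange(1.0, N + 1), k=1)
xk = np.arange(N + 1) / N                     # collocation points x_k = k/N
P  = xk[:, None] ** np.arange(N + 1)          # rows X(x_k)
W  = P @ (M @ M) @ B.T                        # rows X(x_k) M^2 B^t
F  = -np.pi ** 2 * np.sin(np.pi * xk)

# replace the last 2 (= 2r) rows by the boundary conditions
W[-2, :] = (0.0 ** np.arange(N + 1)) @ B.T    # E(0) c = u(0) = 0
W[-1, :] = np.ones(N + 1) @ B.T               # E(1) c = u(1) = 0
F[-2], F[-1] = 0.0, 0.0

c, *_ = np.linalg.lstsq(W, F, rcond=None)     # QR/SVD-based solve
x  = np.linspace(0.0, 1.0, 101)
uN = (x[:, None] ** np.arange(N + 1)) @ B.T @ c
print(np.max(np.abs(uN - np.sin(np.pi * x))))  # maximum absolute error
```

For \(r=5,6\) the only changes are the power of \(\mathbf M \), the extra \(\mathbf P _m \mathbf M ^m\) terms, and the 2r boundary rows.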

3.2 Nonlinear high-order BVP

In this case we set the source term \(\sigma (x,u)=f(x)-\left[ u(x)\right] ^{n}\) in Eq. (1.1). Substituting each term with the approximations defined in (2.8), (2.11) and (3.3), respectively, the equation becomes

$$\begin{aligned} \mathbf X (x)\left[ \mathbf M ^{2r}+\sum _{m=0}^{2r-1}P_{m}(x)\mathbf M ^{m}\right] \mathbf B ^t\,\mathbf c \,+\left[ \mathbf X (x)\, \mathbf B ^t\,\mathbf c \right] ^{n}=\mathbf X (x)\,\mathbf B ^t \mathbf F , \end{aligned}$$
(3.11)

where \(\mathbf F = [f_0, f_1, \ldots , f_N]^t\) can also be calculated according to Eqs. (3.4)–(3.5). The residual term is then

$$\begin{aligned} \mathfrak {R}_{N}(x)= \mathbf X (x)\left[ \mathbf M ^{2r}+\sum _{m=0}^{2r-1}P_{m}(x)\mathbf M ^{m}\right] \mathbf B ^t\,\mathbf c \,+\left[ \mathbf X (x)\, \mathbf B ^t\,\mathbf c \right] ^{n}-\mathbf X (x)\,\mathbf B ^t \mathbf F . \end{aligned}$$
(3.12)

We first collocate Eq. (3.12) at \(N-2r+1\) points; as suitable collocation points we use the first \(N-2r+1\) roots of the Euler polynomial \(E_{N+1}(x)\). The equations obtained from Eq. (3.12), together with the boundary conditions (1.2), generate \(N+1\) nonlinear equations in the \(N+1\) unknown coefficients \(\mathbf c \), which can be solved using Newton's iterative method [11]. Consequently, u(x) can be calculated according to Eq. (2.6).

Newton’s method

In order to solve the system of Eq. (3.12), we formulate it as

$$\begin{aligned} \mathbf R(c) =\left( \begin{array}{c} R_{1}(c_{0},c_{1},\ldots ,c_{N}) \\ R_{2}(c_{0},c_{1},\ldots ,c_{N}) \\ \vdots \\ R_{N+1}(c_{0},c_{1},\ldots ,c_{N}) \end{array} \right) =\left( \begin{array}{c} 0 \\ 0 \\ \vdots \\ 0 \end{array} \right) . \end{aligned}$$
(3.13)

where \(\mathbf c \) is the column vector of the independent variables and \(\mathbf R \) is the column vector of the functions \(\mathbf R _{i}\), with \(\mathbf R _{i}(\mathbf c )=\mathbf R _{i}(c_{0},c_{1},\ldots ,c_{N})\), \( 1\le i\le N+1\). The number of equations equals the number of independent variables, so Newton's method [11] is an appropriate choice for solving Eq. (3.13).

Let \(\mathbf c ^{(j)}\) be the approximation of the solution at iteration j, and let \(\mathbf R ^{(j)}\) denote the value of \(\mathbf R \) at the jth iteration. Assuming that \(\Vert \mathbf R ^{(j)}\Vert \) is not yet small enough, we seek an update vector \(\Delta \mathbf c ^{(j)}\)

$$\begin{aligned} \mathbf c ^{(j+1)}=\mathbf c ^{(j)}+\Delta \mathbf c ^{(j)}\Longleftrightarrow \left( \begin{array}{c} c_{0}^{j+1} \\ c_{1}^{j+1} \\ \vdots \\ c_{N}^{j+1} \end{array} \right) = \left( \begin{array}{c} c_{0}^{j} \\ c_{1}^{j} \\ \vdots \\ c_{N}^{j} \end{array} \right) +\left( \begin{array}{c} \Delta c_{0}^{j} \\ \Delta c_{1}^{j} \\ \vdots \\ \Delta c_{N}^{j} \end{array} \right) \end{aligned}$$
(3.14)

such that \(\mathbf R (\mathbf c ^{j+1})=0\). Using the multidimensional extension of Taylor’s theorem for the approximation of the variation of \(\mathbf R (\mathbf c )\) in the neighborhood of \(\mathbf c ^{j}\) gives

$$\begin{aligned} \mathbf R (\mathbf c ^{j}+\Delta \mathbf c ^{j})=\mathbf R (\mathbf c ^{(j)})+\mathbf R ^{\prime } (\mathbf c ^{(j)})\Delta \mathbf c ^{j}+o(\Vert \Delta \mathbf c ^{(j)}\Vert ^{2}) \end{aligned}$$
(3.15)

where \(\mathbf R ^{\prime }(\mathbf c ^{(j)})\) is the Jacobian of the system of equations

$$\begin{aligned} \mathbf R ^{\prime }(\mathbf c )\equiv \mathbf J (\mathbf c )= \left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} \frac{\partial R_{1}}{\partial c_{0}}(\mathbf c ) &{} \frac{\partial R_{1}}{\partial c_{1}}(\mathbf c ) &{} \ldots &{} \frac{\partial R_{1}}{\partial c_{N}}(\mathbf c ) \\ \frac{\partial R_{2}}{\partial c_{0}}(\mathbf c ) &{} \frac{\partial R_{2}}{\partial c_{1}}(\mathbf c ) &{} \ldots &{} \frac{\partial R_{2}}{\partial c_{N}}(\mathbf c ) \\ \vdots &{} \vdots &{} \vdots &{} \vdots \\ \frac{\partial R_{N+1}}{\partial c_{0}}(\mathbf c ) &{} \frac{\partial R_{N+1}}{\partial c_{1}}(\mathbf c ) &{} \ldots &{} \frac{\partial R_{N+1}}{\partial c_{N}}(\mathbf c )\\ \end{array} \right) . \end{aligned}$$
(3.16)

Denoting by \(\mathbf J ^{(j)}\) the Jacobian evaluated at \(\mathbf c ^{(j)}\) and neglecting the higher-order terms, we can rearrange Eq. (3.15) as

$$\begin{aligned} \mathbf R (\mathbf c ^{j}+\Delta \mathbf c ^{j})=\mathbf R (\mathbf c ^{(j)})+\mathbf J ^{(j)}\Delta \mathbf c ^{(j)}. \end{aligned}$$
(3.17)

The goal of the method is to reach \(\mathbf R (\mathbf c ^{j}+\Delta \mathbf c ^{j})=0\); setting the left-hand side of the last equation to zero gives

$$\begin{aligned} \mathbf J ^{(j)}\Delta \mathbf c ^{(j)}=-\mathbf R (\mathbf c ^{(j)}). \end{aligned}$$
(3.18)

Eq. (3.18) is a system of \(N+1\) linear equations in the \(N+1\) unknowns \(\Delta \mathbf c ^{(j)}\). Each Newton step involves evaluating the vector \(\mathbf R ^{(j)}\) and the matrix \(\mathbf J ^{(j)}\) and solving Eq. (3.18). The iteration is stopped whenever the distance between two iterates is less than a given tolerance, i.e., when \(\Vert c^{(j+1)}-c^{(j)}\Vert \le \varepsilon \). The algorithm for the above method is listed below.

Algorithm

  • Initialize \(\mathbf c =\mathbf c ^{(0)},\)

  • For \(j=0,1,2,\ldots \): compute \(\mathbf R ^{(j)}\) from Eq. (3.12),

  • If \(\Vert \mathbf R ^{(j)}\Vert \) is small enough, stop,

  • Compute \(\mathbf J ^{(j)},\)

  • Solve \(\mathbf J ^{(j)}\Delta \mathbf c ^{(j)}=-\mathbf R (\mathbf c ^{(j)})\),

  • \(\mathbf c ^{(j+1)}=\mathbf c ^{(j)}+\Delta \mathbf c ^{(j)},\)

  • End.
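As with the linear case, the Newton loop can be illustrated on a low-order analogue. Here we take \(u''+u^2=e^{x}+e^{2x}\) with \(u(0)=1\), \(u(1)=e\) (exact solution \(u=e^{x}\)), a test problem of our own choosing, and collocate at equispaced points rather than Euler roots for brevity:

```python
import numpy as np
from math import comb

def euler_B(N):
    """Row n = ascending coefficients of E_n(x), so E(x) = X(x) B^t."""
    B = np.zeros((N + 1, N + 1))
    B[0, 0] = 1.0
    for n in range(1, N + 1):
        B[n, n] = 1.0
        for k in range(n):
            B[n, :] -= 0.5 * comb(n, k) * B[k, :]
    return B

N = 10
B = euler_B(N)
M = np.diag(np.arange(1.0, N + 1), k=1)
xk = np.linspace(0.0, 1.0, N - 1)          # N-1 collocation points (2 BCs)
P  = xk[:, None] ** np.arange(N + 1)
D2 = P @ (M @ M) @ B.T                     # rows giving u''_N(x_k)
E  = P @ B.T                               # rows giving u_N(x_k)
E0 = (0.0 ** np.arange(N + 1)) @ B.T       # E(0)
E1 = np.ones(N + 1) @ B.T                  # E(1)
fk = np.exp(xk) + np.exp(2.0 * xk)

def R(c):                                  # residual vector, cf. Eq. (3.13)
    return np.concatenate([D2 @ c + (E @ c) ** 2 - fk,
                           [E0 @ c - 1.0, E1 @ c - np.e]])

def J(c):                                  # Jacobian, cf. Eq. (3.16)
    return np.vstack([D2 + 2.0 * (E @ c)[:, None] * E, E0, E1])

c = np.zeros(N + 1)
for _ in range(50):                        # Newton iteration, Eq. (3.18)
    dc = np.linalg.solve(J(c), -R(c))
    c += dc
    if np.linalg.norm(dc) < 1e-12:
        break

x  = np.linspace(0.0, 1.0, 101)
uN = (x[:, None] ** np.arange(N + 1)) @ B.T @ c
print(np.max(np.abs(uN - np.exp(x))))      # small once Newton has converged
```

For the quadratic nonlinearity \([u]^2\) the Jacobian rows are simply the linear collocation rows plus \(2\,u_N(x_k)\) times the evaluation rows, which is what `J(c)` assembles.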

4 Error analysis

In this section, we analyze the error of the presented method. Suppose that \(H=L^{2}[0,1]\),

$$\begin{aligned} \mathbf U =\text{ span }\{E_{0}(x),E_{1}(x), \ldots , E_{N}(x)\}, \end{aligned}$$

and let f be an arbitrary element of H. Since \(\mathbf U \) is a finite-dimensional subspace of H, f has a unique best approximation \(\bar{f}\in \mathbf U \); that is,

$$\begin{aligned} \forall \,u\in \mathbf U ,\quad \Vert f-\bar{f}\Vert \le \Vert f-u\Vert . \end{aligned}$$

Since \(\bar{f} \in \mathbf U \) is the best \(L^{2}\) approximation of f in \(\mathbf U \), there exist unique coefficients \(\{f_{0}, f_{1}, \ldots , f_{N}\}\) such that

$$\begin{aligned} f(x)\simeq \bar{f}(x)=\sum _{n=0}^{N}f_{n}E_{n}(x)=\mathbf E (x)\mathbf F , \end{aligned}$$

where

$$\begin{aligned} \mathbf E (x)=[E_{0}(x),E_{1}(x), \ldots , E_{N}(x)]\,\,\text{ and }\quad \mathbf F =[f_{0}, f_{1}, \ldots , f_{N}]^t. \end{aligned}$$

Theorem 4.1

[21] Suppose that \(f(x)\in C^{\infty }[0,1]\) and that F(x) is its approximation in terms of Euler polynomials. Then the upper bound for these polynomials is given by

$$\begin{aligned} \Vert \mathbf E _{N}(x)\Vert _{\infty }\le \mu N! \pi ^{-(N+1)} \end{aligned}$$

where \(\mu \) is a positive number independent of N.

Theorem 4.2

Suppose that u(x) is a sufficiently smooth function and that \(u_N(x)\) is the truncated Euler series of u(x). Then,

$$\begin{aligned} \Vert u(x) - u_N(x)\Vert _ \infty \le \mathbf G \,N!\,(\pi )^{-(N+1)}, \end{aligned}$$

where \(\mathbf G \) is a positive constant, and the global error is \(O \left( N! \pi ^{-(N+1)}\right) .\)

Proof

In an operator form, Eq. (1.1) can be written as

$$\begin{aligned} Lu=u^{(2r)}=f(x)+g(x,u) \end{aligned}$$
(4.1)

where the differential operator L is given by

$$\begin{aligned} L=\frac{d^{2r}}{dx^{2r}}. \end{aligned}$$

The inverse operator \(L^{-1}\) is therefore considered a 2r-fold integral operator defined by

$$\begin{aligned} L^{-1}=\underbrace{\int _0^x}_{(2r)\text{ times }}(.)\,\underbrace{dx}_{(2r)\text{ times }} \end{aligned}$$

Operating with \(L^{-1}\) on (4.1) yields

$$\begin{aligned} \begin{aligned} u(x)&=L^{-1}f(x)+L^{-1}g(x,u)\\&=\underbrace{\int _0^x}_{(2r)\text{ times }}\,f(x)\, \underbrace{dx}_{(2r)\text{ times }}+\underbrace{\int _0^x}_{(2r) \text{ times }}g(x,u)\,\underbrace{dx}_{(2r)\text{ times }}\\&=F(x)+ G(x,u(x)). \end{aligned} \end{aligned}$$

Approximating the functions u(x) and F(x) by Euler polynomials gives

$$\begin{aligned} u_{N}(x)=F_{N}(x)+ G(x,u_{N}(x)), \end{aligned}$$

thus, by subtracting the last two equations we find

$$\begin{aligned} \begin{aligned} \Vert u(x)-u_{N}(x)\Vert _{\infty }&=\Vert F(x)-F_{N}(x)+ (G(x,u(x))-G(x,u_{N}(x)))\Vert _{\infty }\\&\le \Vert F(x)-F_{N}(x)\Vert _{\infty } + \Vert G(x,u(x))-G(x,u_{N}(x))\Vert _{\infty }\\&\le \Vert F(x)-F_{N}(x)\Vert _{\infty } +L_{G}\Vert u(x)-u_{N}(x)\Vert _{\infty }, \end{aligned} \end{aligned}$$
(4.2)

where \(L_{G}\) is the Lipschitz constant of the function G(x, u(x)). Assuming \(L_{G}<1\), we obtain

$$\begin{aligned} \Vert u(x)-u_{N}(x)\Vert _{\infty }\le \frac{1}{1-L_{G}}\Vert F(x)-F_{N}(x)\Vert _{\infty }, \end{aligned}$$

Using Theorem 4.1 yields

$$\begin{aligned} \Vert u(x)-u_{N}(x)\Vert _{\infty }\le \frac{1}{1-L_{G}}\Big [\mu N! \pi ^{-(N+1)}\Big ] \le \mathbf G N! \pi ^{-(N+1)}. \end{aligned}$$

where \(\mathbf G =\frac{\mu }{1-L_{G}}\). Thus the presented method provides an approximate solution with global error \(O (N! \pi ^{-(N+1)})\), which completes the proof. \(\square \)

In the next section we will discuss the accuracy of the solution based on the residual function.

4.1 Accuracy of the solution based on residual function

We can easily check the accuracy of the suggested method as follows. Since the truncated Euler series in Eq. (2.6) is an approximate solution of Eq. (1.1), when the function \(u_{N}(x)\) and its derivatives are substituted into Eq. (1.1), the resulting equation must be approximately satisfied; that is, for

$$\begin{aligned} x=x_{k}\in [0,1],\quad k=0,1,\ldots ,N. \end{aligned}$$
$$\begin{aligned} \mathfrak {R}_{N}(x_{k})=| u^{(2r)}(x)+\sum _{m=0}^{2r-1}P_{m}(x)u^{(m)}(x)-\sigma (x,u(x))|\cong 0. \end{aligned}$$
(4.3)

or

$$\begin{aligned} \mathfrak {R}_{N}(x_{k})\le 10^{-q_{k}} \end{aligned}$$

where \(q_{k}\) is any positive integer. If \(\max 10^{-q_{k}}=10^{-q}\) (q a positive integer) is prescribed, then the truncation limit N is increased until the difference \(\mathfrak {R}_{N}(x_{k})\) at each of the points becomes smaller than the prescribed \(10^{-q}\). The error function can also be estimated from

$$\begin{aligned} \mathfrak {R}_{N}(x)=u^{(2r)}(x)+\sum _{m=0}^{2r-1}P_{m}(x)u^{(m)}(x) -\sigma (x,u(x)). \end{aligned}$$

If \(\mathfrak {R}_{N}(x)\rightarrow 0\) as N becomes sufficiently large, the error decreases.

5 Numerical examples

We present four numerical examples that demonstrate the accuracy, performance and effectiveness of the proposed technique. We compare our results with results collected from the open literature [12, 23, 24, 28,29,30]. The computations associated with these examples were performed using Matlab 2014 on a personal computer. The maximum absolute error is defined as

$$\begin{aligned} \Vert e_{N}\Vert =\max _{0\le x \le 1}|u(x)-u_{N}(x)|. \end{aligned}$$

Example 5.1

[30] Let us consider the linear tenth-order boundary-value problem given by Eq. (1.1) with \(r=5\) in the form

$$\begin{aligned} u^{(10)}+5u=10\cos (x)+4(x-1)\sin (x),\quad 0\le x \le 1, \end{aligned}$$

subject to the boundary conditions

$$\begin{aligned} \begin{aligned} u(0)&=0, \quad u(1)=0,\\ u^{\prime }(0)&=-1, \quad u^{\prime }(1)=\sin (1),\\ u^{\prime \prime }(0)&=2, \quad u^{\prime \prime }(1)=2\cos (1),\\ u^{\prime \prime \prime }(0)&=1, \quad u^{\prime \prime \prime }(1)=-3\sin (1),\\ u^{(4)}(0)&=-4, \quad u^{(4)}(1)=-4\cos (1). \end{aligned} \end{aligned}$$

whose exact solution is

$$\begin{aligned} u(x)=(x-1)\sin \,x. \end{aligned}$$

Maximum absolute errors are tabulated in Table 1 at different values of N along with the CPU time. Table 2 compares the errors obtained by the Euler matrix method with the analogous results of Viswanadham and Raju [30] for the B-spline method. Figure 2 illustrates the approximate and exact solutions at \(N=14\) together with the error history at the same value.

Table 1 Maximum absolute errors and CPU time for Example 5.1
Table 2 Comparison of absolute error for Example 5.1
Fig. 2 a Approximate and exact solution, b error history at \(N=14\)

Example 5.2

[29, 30] Now, we consider a nonlinear 10th-order boundary value problem

$$\begin{aligned} u^{(10)}=e^{-x}u^{2} \end{aligned}$$

subject to the boundary conditions

$$\begin{aligned} \begin{aligned}&u(0)=u^{\prime }(0)=u^{\prime \prime }(0)=u^{\prime \prime \prime }(0)=u^{(4)}(0)=1,\\&u(1)=u^{\prime }(1)=u^{\prime \prime }(1)=u^{\prime \prime \prime }(1)=u^{(4)}(1)=e, \end{aligned} \end{aligned}$$

whose exact solution is

$$\begin{aligned} u(x)=e^{x}. \end{aligned}$$

Table 3 exhibits the maximum absolute error for different values of N along with the CPU time. Table 4 compares the errors obtained by the Euler matrix method with the analogous results of Rashidinia et al. [29] for the iterative method and of Viswanadham and Raju [30] for the B-spline method.

Table 3 Maximum absolute errors and CPU time for Example 5.2
Table 4 Comparison of absolute error for Example 5.2
Table 5 Maximum absolute errors and CPU time for Example 5.3
Table 6 Comparison of absolute error for Example 5.3
Table 7 Maximum absolute errors and CPU time for Example 5.4
Table 8 Comparison of maximum absolute error for Example 5.4
Fig. 3 a Approximate and exact solution for Example 5.4, b comparison of error with \(N = 12, 14, 16\) for Example 5.4

Example 5.3

[28] We now turn to the linear 12th-order boundary-value problem

$$\begin{aligned} u^{(12)}+xu=-(120+23x+x^{3})e^{x},\quad 0 \le x \le 1, \end{aligned}$$

subject to the boundary conditions

$$\begin{aligned} \begin{aligned} u(0)&=0, \quad u(1)=0,\\ u^{\prime }(0)&=1, \quad u^{\prime }(1)=-e,\\ u^{\prime \prime }(0)&=0, \quad u^{\prime \prime }(1)=-4e,\\ u^{\prime \prime \prime }(0)&=-3, \quad u^{\prime \prime \prime }(1)=-9e,\\ u^{(4)}(0)&=-8, \quad u^{(4)}(1)=-16e,\\ u^{(5)}(0)&=-15, \quad u^{(5)}(1)=-25e, \end{aligned} \end{aligned}$$

whose exact solution is

$$\begin{aligned} u(x)=x(1-x)e^{x}. \end{aligned}$$

Table 5 exhibits the maximum absolute error for different values of N along with the CPU time. Table 6 compares the errors obtained by the Euler matrix method with the analogous results of Shahid [28] for the spline method.

Example 5.4

[10, 12, 23, 24, 29] Our final example is a nonlinear 12th-order boundary value problem

$$\begin{aligned} u^{(12)}-u^{\prime \prime \prime }=2e^{x}u^{2},\quad 0 \le x \le 1, \end{aligned}$$

subject to the boundary conditions

$$\begin{aligned} u^{(i)}(0)=(-1)^{i}, \quad u^{(i)}(1)=(-1)^{i}e^{-1},\quad i=0,1,\ldots , 5. \end{aligned}$$

whose exact solution is

$$\begin{aligned} u(x)=e^{-x}. \end{aligned}$$

Table 7 exhibits the maximum absolute error of the presented method at different values of N along with the CPU time. Table 8 compares the results obtained with the analogous results of Ullah et al. [29] for the iterative method, of Opanuga et al. [24] for the differential transformation method, of Grover and Kumar [12] for the homotopy perturbation method, and of Noor and Mohyud-Din [23] for the variational iteration method. Also, Fig. 3a shows the approximate and exact solutions, and Fig. 3b compares the error at different values of N.

6 Conclusion

This paper has shown how the Euler matrix collocation method can be applied to obtain solutions of both linear and nonlinear tenth- and twelfth-order boundary-value problems. The formulation and implementation of the scheme were illustrated, the method was tested on four examples, and comparisons with other methods were made. The Euler matrix method yields accurate results and is a simple tool for producing highly accurate numerical simulations of a large variety of model problems, so it can easily be applied by researchers and engineers.