1 Introduction

Consider the Fredholm integral equation defined on \(\mathbb {E}=\mathscr {C}[0,1]\) by

$$\displaystyle \begin{aligned} u(s)-\int_{0}^1\kappa(s,t)u(t)dt=f(s),\quad 0\leq s\leq 1, \end{aligned} $$
(5.1)

where κ is a smooth kernel, \(f\in \mathbb {E}\) is a real-valued continuous function and u denotes the unknown function. The Nyström method (see [5]) for solving (5.1) consists of replacing the integral in (5.1) by a numerical quadrature formula; it has been widely studied in the literature. A general framework for the method in the case of interpolatory projection is presented in [1, 2]. In [6] the method using a quartic spline quasi-interpolant is proposed. A superconvergent version of the Nyström method based on spline quasi-interpolants of degree d ≥ 2 is analysed in [7]. In this paper we construct a quadrature formula based on integrating a sextic spline quasi-interpolant, and this formula is used for the numerical solution of the Fredholm integral equation (5.1). We show that the approximate solution converges to the exact solution with the same order as the quadrature rule. We also show that the approximate solution of (5.1) admits an asymptotic error expansion, so that one step of Richardson extrapolation further improves the order of convergence.

The paper is organized as follows. In Sect. 5.2, we set the notation and construct the sextic spline quasi-interpolant \(\mathscr {Q}_n\). In Sect. 5.3, we introduce the quadrature rule based on \(\mathscr {Q}_n\) and establish an expression for the quadrature error. In Sect. 5.4, the Nyström method for the approximate solution of (5.1) is analysed and an asymptotic series expansion for the proposed solution is obtained. Numerical examples are given in Sect. 5.5.

2 Sextic Spline Quasi-Interpolant

2.1 B-splines

Definition 5.1

Let \(d\in \mathbb {N}\) and let

$$\displaystyle \begin{aligned} x_{-d}\leq\ldots\leq x_{-1}\leq 0=x_0<\ldots<x_n=1\leq x_{n+1}\leq\ldots\leq x_{n+d} \end{aligned}$$

be an extended partition of the interval I = [0, 1]. The normalized B-spline of degree d associated with the knots \(x_i,\ldots,x_{i+d+1}\) is defined by

$$\displaystyle \begin{aligned} B_{i,d}(x)&=(x_{i+d+1}-x_i)[x_i,\ldots,x_{i+d+1}](.-x)_+^d{,} \end{aligned} $$

where \([x_i,\ldots ,x_{i+d+1}](.-x)_+^d\) is the divided difference of \(t\longrightarrow (t-x)_+^d\) with respect to the d + 2 points \(x_i,\ldots,x_{i+d+1}\).

By using the definition of the divided differences, we obtain

$$\displaystyle \begin{aligned} B_{i,d}(x)=[x_{i+1},\ldots,x_{i+d+1}](.-x)_+^d-[x_{i},\ldots,x_{i+d}](.-x)_+^d. \end{aligned} $$
(5.2)

Thus, from the above formula, we get

$$\displaystyle \begin{aligned} B_{i,0}(x)=(x_{i+1}-x)_+^0-(x_{i}-x)_+^0 \end{aligned}$$

which is the characteristic function of the interval \([x_i, x_{i+1}[\), i.e.

$$\displaystyle \begin{aligned} B_{i,0}(x)=\begin{cases} 1, & \text{if}\quad x_{i}\leq x < x_{i+1}\\ 0, \quad & \text{otherwise}. \end{cases} \end{aligned} $$
(5.3)

The B-splines of higher degree (d ≥ 1) can be evaluated by using the following recursion formula (see [3, Chap.4]):

$$\displaystyle \begin{aligned} B_{i,d}(x)=w_{i,d}(x)B_{i,d-1}(x)+(1-w_{i+1,d}(x))B_{i+1,d-1}(x), \end{aligned} $$
(5.4)

with

$$\displaystyle \begin{aligned} w_{i,d}(x)=\left\{ \begin{array}{ll} \frac{x-x_{i}}{x_{i+d}-x_{i}} & \text{if} \quad x_{i}< x_{i+d},\\ 0 & \text{otherwise.} \end{array} \right. \end{aligned}$$
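As an illustration of (5.3) and (5.4), the following short sketch evaluates \(B_{i,d}(x)\) directly from the recursion; the function and variable names (bspline, knots) are ours, and the uniform integer knots are chosen only for the example.

```python
# A minimal sketch of the recursion (5.4); `bspline` and `knots` are illustrative names.
def bspline(knots, i, d, x):
    """Evaluate the normalized B-spline B_{i,d}(x) from (5.3) and (5.4)."""
    if d == 0:
        # characteristic function of [x_i, x_{i+1}[, as in (5.3)
        return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
    def w(j):
        # w_{j,d}(x), set to zero when the two knots coincide
        return (x - knots[j]) / (knots[j + d] - knots[j]) if knots[j] < knots[j + d] else 0.0
    return w(i) * bspline(knots, i, d - 1, x) + (1.0 - w(i + 1)) * bspline(knots, i + 1, d - 1, x)

# With uniform integer knots, the sextic B-splines whose support contains x = 6.5
# should sum to 1 there (partition of unity).
knots = list(range(14))
print(sum(bspline(knots, i, 6, 6.5) for i in range(7)))   # expected: 1.0 (up to rounding)
```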

2.2 Construction of the Discrete Spline Quasi-Interpolant

Let \(\mathbb {X}_{n}=\{x_{k}=\frac {k}{n},\,0\leq k\leq n\} \) denote the uniform partition of the interval I into n equal subintervals \(I_k = [x_{k-1}, x_k]\), 1 ≤ k ≤ n, with meshlength \(h=\frac {1}{n}\). Let \(S_{6}(I, \mathbb {X}_{n})\) be the space of \(\mathscr {C}^{5}\) sextic splines on this partition. Its canonical basis is formed by the n + 6 normalized B-splines \(\{B_k \equiv B_{k-7,6},\ k \in J_n\}\), where \(J_n = \{1, \ldots, n+6\}\). The support of \(B_k\) is \([x_{k-7}, x_k]\) if we add multiple knots at the endpoints

$$\displaystyle \begin{aligned} x_{-6}=x_{-5}=\ldots=x_{0}=0\quad \text{and}\quad x_{n}=x_{n+1}=\ldots=x_{n+6}=1. \end{aligned}$$

For 7 ≤ k ≤ n, we have \(B_k(x)=\bar {B}(\frac {x}{h}-k),\) where \(\bar {B}\) is the cardinal B-spline associated with the knots {0, 1, 2, 3, 4, 5, 6, 7} and defined by

$$\displaystyle \begin{aligned}\bar{B}(x)=\left\{ \begin{array}{ll} \frac{1}{720} x^6, &0 \leq x\leq 1, \\ -\frac{7}{720}+\frac{7}{120}x-\frac{ 7}{48}x^2+\frac{7}{36}x^3-\frac{7}{48}x^4+\frac{7}{120}x^5-\frac{1}{120}x^6, & 1 \leq x\leq 2, \\ \frac{1337}{720}-\frac{133}{24}x+\frac{329}{48}x^2-\frac{161}{36}x^3+\frac{77}{48}x^4-\frac{7}{24}x^5+\frac{1}{48}x^6, & 2 \leq x\leq 3, \\ -\frac{12089}{360}+\frac{196}{3}x-\frac{1253}{24}x^2+\frac{196}{9}x^3-\frac{119}{24}x^4+\frac{7}{12}x^5-\frac{1}{36}x^6, & 3 \leq x\leq 4, \\ \frac{59591}{360}-\frac{700}{3}x+\frac{ 3227}{24}x^2-\frac{364}{9}x^3+\frac{161}{24}x^4-\frac{7}{12}x^5+\frac{1}{48}x^6, & 4 \leq x\leq 5, \\ -\frac{208943}{720}+\frac{7525}{24}x-\frac{6671 }{48}x^2+\frac{1169}{36}x^3-\frac{203}{48}x^4+\frac{7}{24}x^5-\frac{1}{120}x^6, & 5 \leq x\leq 6, \\ \frac{1}{720}(7-x)^6, & 6 \leq x\leq 7, \\ 0, & \text{elsewhere.} \end{array} \right. \end{aligned}$$

We recall (see [10, Theorem 4.21 & Remark 4.1]) the representation of monomials using the symmetric functions of the interior knots \(N_k = \{x_{k-6}, \ldots, x_{k-1}\}\) in the support of \(B_k\), which are defined by \(\sigma_0(N_k) = 1\) and, for 1 ≤ r ≤ 6:

$$\displaystyle \begin{aligned} \sigma _{r}(N_{k}) =\sum_{1\leq \ell_{1} < \dots < \ell_{r} \leq 6} x_{k- \ell_{1}}\dots x_{k- \ell_{r}}. \end{aligned}$$

For 0 ≤ r ≤ 6, let \(m_r(x) = x^r\). Then, we have

$$\displaystyle \begin{aligned} m_{r}(x)=\sum_{k\in J_n}(-1)^{6-r}\frac{r!}{6!}D^{6-r}\psi_{k}(0)B_k(x)= \sum_{k\in J_{n}}\theta_{k}^{(r)}B_k(x){,} \end{aligned}$$

where

$$\displaystyle \begin{aligned} \psi_{k}(t)=\prod\limits_{\ell=1}^{6}(x_{k-\ell}-t). \end{aligned}$$

Hence

$$\displaystyle \begin{aligned} \theta_{k}^{(r)}= \binom{6}{r}^{-1}\sigma _{r}(N_{k}),\quad 0 \leq r \leq 6. \end{aligned}$$

For r = 0, we have \(\theta _{k}^{(0)}=1\) for all \(k\in J_n\), since \(\sum \limits _{k\in J_n}B_k(x)=1.\)

For r = 1, we have \(\binom {6}{1}^{-1}=\frac {1}{6}\) and \(\sigma _{1}(N_k)=\sum \limits _{1\leq \ell \leq 6}x_{k-\ell }=x_{k-1}+\ldots +x_{k-6}.\) Thus, we obtain the Greville abscissae:

$$\displaystyle \begin{aligned} \theta_{k}=\theta_{k}^{(1)}=\frac{1}{6}\sum_{\ell=1}^{6}x_{k-\ell}, \end{aligned}$$

which are the coefficients of \(m_{1}(x)=\sum \limits _{k\in J_n}\theta _{k}B_k(x)\).

The sextic discrete spline quasi-interpolant (abbr. dQI) used here (see [8]) is the following spline operator

$$\displaystyle \begin{aligned} \mathscr{Q}_nf=\sum_{k\in J_n}\mu_{k}(f)B_{k}, \end{aligned}$$

whose coefficients are linear combinations of discrete values of f at a set of data points \(\mathbb {T}_n=\{t_j,\;j\in \Gamma _n\}\), where \(\Gamma_n = \{1, 2, \ldots, n+2\}\). The elements of \(\mathbb {T}_n\) are defined by

$$\displaystyle \begin{aligned} t_1=0,\quad t_{n+2}=1,\quad t_j=\frac{x_{j-2}+x_{j-1}}{2},\quad 2\leq j\leq n+1. \end{aligned}$$

The dQI is constructed to be exact on Π6, where Π6 is the space of polynomials of degree at most 6, that means \(\mathscr {Q}_nm_r = m_r\) for 0 ≤ r ≤ 6 and therefore

$$\displaystyle \begin{aligned} \mu_{k}(m_r)=\theta_k^{(r)},\quad k\in J_n,\quad 0\leq r\leq6. \end{aligned}$$

For 7 ≤ k ≤ n, the functionals \(\mu_k\) use values of f in a neighbourhood of the support of \(B_k\); thus it is natural to express \(\mu_k\) in the following way

$$\displaystyle \begin{aligned} \mu_k(f)=\sum_{i=1}^7\alpha_{i}f_{k-i+2}, \end{aligned}$$

where \(f_k = f(t_k)\). This leads us to solve the system of linear equations

$$\displaystyle \begin{aligned} \sum_{i=1}^7\alpha_{i}t^r_{k-i+2}=\theta_k^{(r)},\quad 0\leq r\leq 6. \end{aligned}$$

For 1 ≤ k ≤ 6 and n + 1 ≤ k ≤ n + 6 we write respectively

$$\displaystyle \begin{aligned} \mu_k(f)=\sum_{i=1}^7\beta_{i,k}f_{i}\quad \text{and}\quad \mu_k(f)=\sum_{i=1}^7\gamma_{i,k}f_{n-i+3}, \end{aligned} $$
(5.5)

which is equivalent to the systems of linear equations

$$\displaystyle \begin{aligned} \sum_{i=1}^7\beta_{i,k}t^r_i=\theta_k^{(r)}\quad \text{and}\quad \sum_{i=1}^7\gamma_{i,k}t^r_{n-i+3}=\theta_k^{(r)},\quad 0\leq r\leq 6. \end{aligned}$$

All these systems have Vandermonde determinants and, since the \((t_j)_{j\in \Gamma _n}\) are distinct, they have unique solutions, whence the existence and uniqueness of the dQI. The functional coefficients are respectively defined by the following formulas:

$$\displaystyle \begin{aligned} \mu_1(f) & = f_1,\\ \mu_2(f) & = \frac{3887}{10395}f_1+\frac{231}{256}f_2-\frac{385}{768}f_3+\frac{231}{640}f_4 -\frac{165}{896}f_5+\frac{385}{6912}f_6-\frac{21}{2816}f_7, \\ \mu_3(f) & = -\frac{5689}{22275}f_1+\frac{27631}{19200}f_2-\frac{9151}{34560}f_3+\frac{1091}{9600}f_4 -\frac{79}{1920}f_5+\frac{997}{103680}f_6-\frac{221}{211200}f_7, \\ \mu_4(f) & = -\frac{20959}{155925}f_1+\frac{3089}{9600}f_2+\frac{5015}{3456}f_3-\frac{4811}{4800}f_4 +\frac{3277}{6720}f_5-\frac{7381}{51840}f_6+\frac{1961}{105600}f_7, \\ \mu_5(f) & = \frac{5821}{31185}f_1-\frac{1193}{1920}f_2+\frac{26737}{17280}f_3+\frac{1}{320}f_4 -\frac{1217}{6720}f_5+\frac{4001}{51840}f_6-\frac{83}{7040}f_7, \\ \mu_6(f) & {=} -\frac{2159}{31185}f_1+\frac{2957}{11520}f_2-\frac{30451}{34560}f_3+\frac{13673}{5760}f_4 -\frac{33727}{40320}f_5+\frac{17977}{103680}f_6{-}\frac{2159}{126720}f_7,\\ \mu_{k}(f) &= -\frac{2159}{138240}(f_{k+1}+f_{k-5})+\frac{751}{4608}(f_{k}+f_{k-4})-\frac{37003}{46080}(f_{k-1}+f_{k-3})+\frac{79879}{34560}f_{k-2},\\ &\qquad \quad (7\leq k\leq n) \end{aligned} $$

and for n + 1 ≤ k ≤ n + 6, \(\mu_k(f)\) is given by the second formula in (5.5) with \(\gamma_{i,k} = \beta_{i,n+7-k}\), 1 ≤ i ≤ 7. Since \(\mathscr {Q}_n\) reproduces Π6, it is easy to show that, for \(f\in \mathscr {C}^7[0,1],\) we have

$$\displaystyle \begin{aligned} \Vert f-\mathscr{Q}_nf\Vert_\infty\leq c_1h^7\Vert f^{(7)}\Vert_\infty, \end{aligned} $$
(5.6)

where c 1 is a constant independent of n.
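To make the construction concrete, the following sketch assembles and solves one of the interior Vandermonde systems above; the helper names and the choice n = 20, k = 10 are illustrative, and the output should reproduce, up to rounding, the interior coefficients of \(\mu_k\) listed above.

```python
# A sketch, for the uniform knots of Sect. 5.2, of the interior system
# sum_{i=1}^{7} alpha_i t_{k-i+2}^r = theta_k^{(r)}, 0 <= r <= 6; all names are ours.
import numpy as np
from math import comb

n, k = 20, 10                                   # any interior index with 7 <= k <= n
h = 1.0 / n
x = lambda i: i * h                             # knots x_i = i/n
t = lambda j: 0.0 if j == 1 else 1.0 if j == n + 2 else (j - 1.5) * h   # data points t_j

def elementary_symmetric(vals, r):
    """sigma_r(vals), computed from the product prod_i (1 + v_i z)."""
    e = [1.0] + [0.0] * len(vals)
    for v in vals:
        for j in range(len(vals), 0, -1):
            e[j] += v * e[j - 1]
    return e[r]

N_k = [x(k - l) for l in range(1, 7)]                                # interior knots of supp B_k
theta = [elementary_symmetric(N_k, r) / comb(6, r) for r in range(7)]
pts = [t(k - i + 2) for i in range(1, 8)]                            # t_{k+1}, ..., t_{k-5}
V = np.vander(pts, 7, increasing=True).T                             # row r holds the powers t^r
alpha = np.linalg.solve(V, theta)
print(alpha)   # should match, up to rounding, the interior coefficients of mu_k above
```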

It is more convenient to write the quasi-interpolant \(\mathscr {Q}_n\) under the quasi-Lagrange form

$$\displaystyle \begin{aligned} \mathscr{Q}_nf=\sum_{j\in \Gamma_n}f_jL_j, \end{aligned} $$
(5.7)

where the quasi-Lagrange functions \(L_j\) are linear combinations of at most seven B-splines. For example, the functionals that use the value \(f_1\) are \(\{\mu_1, \mu_2, \mu_3, \mu_4, \mu_5, \mu_6\}\); therefore we have

$$\displaystyle \begin{aligned} L_{1} = B_{1}+\frac{3887}{10395}B_{2}-\frac{5689}{22275}B_{3}-\frac{20959}{155925}B_{4}+\frac{5821}{31185}B_5-\frac{2159}{31185}B_6. \end{aligned}$$

For 8 ≤ k ≤ n − 5, we have

$$\displaystyle \begin{aligned} L_{k} = -\frac{2159}{138240}(B_{k-1}+B_{k+5})+\frac{751}{4608}(B_{k}+B_{k+4})-\frac{37003}{46080}(B_{k+1}+B_{k+3})+\frac{79879}{34560}B_{k+2}. \end{aligned}$$

This representation is used in Sects. 5.3 and 5.4 below.

3 Quadrature Formula Associated with \(\mathscr {Q}_n\)

By integrating \(\mathscr {Q}_nf\) in the quasi-Lagrange form (5.7) we obtain as in [9], the following quadrature formula

$$\displaystyle \begin{aligned} \int_0^1f(x)dx=I(f)\simeq I_n(f)=\int_0^1 \mathscr{Q}_n f(x) dx=h\sum_{j\in \Gamma_n}\omega_jf_j, \end{aligned}$$

with weights \(\omega _j=\frac {1}{h}\int _0^1L_j(x)dx.\) Using the fact that \(\int _0^1B_j(x)\,dx=\frac {x_j - x_{j-7}}{7},\) we get

$$\displaystyle \begin{aligned} I_n(f) &=h\sum_{j=8}^{n-5}f_j+h\left[\frac{101}{735}(f_1+f_{n+2})+\frac{113221}{138240}(f_2+f_{n+1})+\frac{1035241}{967680}(f_3+f_n)\right.\notag\\ &+\frac{464651}{483840}(f_4+f_{n-1}) +\frac{3446899}{3386880}(f_5+f_{n-2})+\frac{962903}{967680}(f_6+f_{n-3})\notag\\ &+\left. \frac{193657}{193536}(f_7+f_{n-4})\right]. \end{aligned} $$
(5.8)
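As a quick sanity check, (5.8) can be implemented directly from the nodes and weights written above; the sketch below (with illustrative names, in exact rational arithmetic) tests it on \(m_7(x)=x^7\), whose exact integral over [0, 1] is 1∕8.

```python
# A sketch of the rule (5.8); `quadrature` and `boundary_w` are illustrative names.
from fractions import Fraction as F

boundary_w = [F(101, 735), F(113221, 138240), F(1035241, 967680), F(464651, 483840),
              F(3446899, 3386880), F(962903, 967680), F(193657, 193536)]

def quadrature(f, n):
    """I_n(f) of (5.8) on the uniform partition with n >= 13 subintervals."""
    h = F(1, n)
    t = [F(0)] + [(F(j) - F(3, 2)) * h for j in range(2, n + 2)] + [F(1)]   # t_1, ..., t_{n+2}
    w = boundary_w + [F(1)] * (n - 12) + boundary_w[::-1]                   # omega_1, ..., omega_{n+2}
    return h * sum(wj * f(tj) for wj, tj in zip(w, t))

# Should print 1/8 exactly, provided the weights above are transcribed exactly
# and (5.8) is, as stated, exact on Pi_7.
print(quadrature(lambda s: s**7, 16))
```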

Since \(\mathscr {Q}_n\) is exact on Π6, and the weights and nodes \((t_j)_{j\in \Gamma _n}\) are symmetric with respect to the midpoint of I, we deduce that the quadrature rule (5.8) is exact on Π7. Therefore, the error \(E_n(f) = I(f) - I_n(f)\) is \(\mathscr {O}(h^8)\) when \(f \in \mathscr {C}^8[0,1]\). Now, according to the Peano kernel theorem (see [4, Chap. 3]), we have

$$\displaystyle \begin{aligned} E_n(f)=\frac{1}{7!}\int_0^1K(t)f^{(8)}(t)dt, \end{aligned}$$

where K(t) is the Peano kernel defined by

$$\displaystyle \begin{aligned} K(t)=\int_0^1(s-t)^7_+ds-h\sum_{j\in \Gamma_n}\omega_j(t_j-t)^7_+. \end{aligned}$$

Theorem 5.1

The Peano kernel K(t) is negative in the intervals \(J_1 = [0, t_0]\) and \(J_3 = [1-t_0, 1]\) and positive in \(J_2 = [t_0, 1-t_0]\), with \(t_0 = \tau_0 h\) and \(\tau_0 \simeq 1.38135\) (Fig. 5.1).

Fig. 5.1 Graph of the Peano kernel with n = 16

Proof

Using

$$\displaystyle \begin{aligned} K(t)=\frac{1}{8} (1-t)^8-h\sum_{j\in\Gamma_n}\omega_j (t_j-t)_+^7, \end{aligned}$$

we see immediately that K(0) = K(1) = 0. Indeed, since \( (t_j-1)_+^7=0 \) for all \(j\in\Gamma_n\), we obtain K(1) = 0. On the other hand, K(0) = 0 follows from the fact that the polynomial \(p(x) = x^7\) belongs to Π7, hence it is integrated exactly by the quadrature rule \(I_n\), and therefore

$$\displaystyle \begin{aligned} I(p)=\frac{1}{8}=h\sum_{j\in\Gamma_n}\omega_j t_j^7. \end{aligned}$$

We need to study the sign of K(t), as was done for the quartic case (see [6]).

Now, setting t = τh with τ ∈ [0, n] and s = ξh, we obtain

$$\displaystyle \begin{aligned} \frac{1}{h^8} K (t)=\int_0^n(\xi -\tau)_+^7d\xi-\sum_{j\in\Gamma_n}\omega_j (\tau_j-\tau)_+^7, \end{aligned}$$

where

$$\displaystyle \begin{aligned} \tau_1=0, \quad \tau_{n+2}=n \quad \text{and}\quad \tau_j=j-\frac{3}{2}\quad \text{for}\quad j=2,\ldots,n+1. \end{aligned}$$

We have also

$$\displaystyle \begin{aligned} \int_0^n(\xi -\tau)_+^7d\xi=\int_\tau^n(\xi -\tau)^7d\xi=\left[\frac{(\xi -\tau)^8}{8}\right]_\tau^n=\frac{(n-\tau)^8}{8}, \end{aligned}$$

which gives

$$\displaystyle \begin{aligned} \frac{1}{h^8} K (t)= \frac{(n-\tau)^8}{8}-\sum_{j\in\Gamma_n}\omega_j (\tau_j-\tau)_+^7=p(\tau). \end{aligned}$$

Consequently, K(t) and p(τ) have the same sign. By using the symmetry of nodes and weights, it is easy to verify that p(τ) = p(n − τ). Then

$$\displaystyle \begin{aligned} p(\tau)=\frac{\tau^8}{8}-\sum_{j\in\Gamma_n}\omega_j (\tau-\tau_j)_+^7. \end{aligned}$$

Now let us study the sign of p(τ) on [0, n]. Let \(p_j\equiv p|{ }_{[\tau _j,\tau _{j+1}]},\;j=1,\ldots ,n+1.\)

  • In the interval \([\tau _1,\tau _2]=[0,\frac {1}{2}] :\)

    $$\displaystyle \begin{aligned} p_1(\tau)&=\frac{\tau^7}{8} \left(\tau-\frac{808}{735}\right)\leq 0, \end{aligned} $$

    which admits τ = 0 as its only root in this interval.

  • In the interval \([\tau _2,\tau _3]=[\frac {1}{2},\frac {3}{2}]:\)

    $$\displaystyle \begin{aligned} p_2(\tau)&=p_1(\tau)-\frac{267}{326}\left(\tau-\frac{1}{2}\right)^7, \end{aligned} $$

    which admits a root \(\tau _0 \simeq \frac {1985}{1437}=1.38135\) in the interval \([\tau_2, \tau_3]\). We can check numerically that p(τ) ≤ 0 in \([\tau_2, \tau_0]\) and p(τ) ≥ 0 in \([\tau_0, \tau_3]\).

  • In the interval \([\tau _3,\tau _4]=[\frac {3}{2},\frac {5}{2}]:\)

    $$\displaystyle \begin{aligned} p_3(\tau)&=p_2(\tau)-\frac{996}{931}\left(\tau-\frac{3}{2}\right)^7\geq 0, \end{aligned} $$

    which does not admit any root in the interval \([\tau_3, \tau_4]\).

  • In the interval \([\tau _4,\tau _5]=[\frac {5}{2},\frac {7}{2}]:\)

    $$\displaystyle \begin{aligned} p_4(\tau)&=p_3(\tau)-\frac{339}{353}\left(\tau-\frac{5}{2}\right)^7\geq 0, \end{aligned} $$

    which does not admit any root in the interval \([\tau_4, \tau_5]\).

  • In the interval \([\tau _5,\tau _6]=[\frac {7}{2},\frac {9}{2}]:\)

    $$\displaystyle \begin{aligned} p_5(\tau)&=p_4(\tau)-\frac{402}{395}\left(\tau-\frac{7}{2}\right)^7\geq 0, \end{aligned} $$

    which does not admit any root in the interval \([\tau_5, \tau_6]\).

  • In the interval \([\tau _6,\tau _7]=[\frac {9}{2},\frac {11}{2}] :\)

    $$\displaystyle \begin{aligned} p_6(\tau)&=p_5(\tau)-\frac{1411}{1418}\left(\tau-\frac{9}{2}\right)^7\geq 0, \end{aligned} $$

    which does not admit any root in the interval \([\tau_6, \tau_7]\).

  • In the interval \([\tau _7,\tau _8]=[\frac {11}{2},\frac {13}{2}]:\)

    $$\displaystyle \begin{aligned} p_7(\tau)&=p_6(\tau)-\frac{1600}{1599}\left(\tau-\frac{11}{2}\right)^7\geq 0, \end{aligned} $$

    which does not admit any root in the interval \([\tau_7, \tau_8]\).

  • In the interval \([\tau _8,\tau _9]=[\frac {13}{2},\frac {15}{2}]:\)

    $$\displaystyle \begin{aligned} p_8(\tau) &=p_7(\tau)-\left(\tau-\frac{13}{2}\right)^7\geq 0. \end{aligned} $$
  • In the interval \([\tau_i, \tau_{i+1}]\), 8 ≤ i ≤ n − 5, it can be shown by induction that \(p_i(\tau) = p_{i-1}(\tau-1)\), so that \(p_i \geq 0\) on these intervals as well.

  • In the last seven intervals, since p(τ) = p(n − τ), we get

    $$\displaystyle \begin{aligned} p_{n+3-j}(\tau)=p_j(n-\tau),\quad \tau\in[\tau_{n+3-j},\tau_{n+4-j}],\quad 1\leq j\leq 7 , \end{aligned}$$

    which means that the behaviour of p is symmetric to that in the first seven intervals. This completes the proof. □
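The sign pattern, and in particular the location of the first sign change, can also be checked numerically by evaluating K(t) directly from its definition \(K(t)=\frac{1}{8}(1-t)^8-h\sum_{j\in\Gamma_n}\omega_j(t_j-t)_+^7\). The sketch below (illustrative names, exact rational arithmetic to avoid cancellation near the endpoints) brackets the root by bisection.

```python
# A sketch that locates the first sign change of the Peano kernel for n = 16;
# `peano_K` is an illustrative name.
from fractions import Fraction as F

def peano_K(n):
    h = F(1, n)
    t = [F(0)] + [(F(j) - F(3, 2)) * h for j in range(2, n + 2)] + [F(1)]
    bw = [F(101, 735), F(113221, 138240), F(1035241, 967680), F(464651, 483840),
          F(3446899, 3386880), F(962903, 967680), F(193657, 193536)]
    w = bw + [F(1)] * (n - 12) + bw[::-1]
    return lambda s: (1 - s)**8 / 8 - h * sum(wj * max(tj - s, F(0))**7 for wj, tj in zip(w, t))

n = 16
K = peano_K(n)
lo, hi = F(1, n), F(2, n)                 # K(h) < 0 and K(2h) > 0 bracket the sign change
for _ in range(40):                       # bisection with exact sign tests
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if K(mid) < 0 else (lo, mid)
print(float(n * lo))                      # expected: close to tau_0 ≈ 1.38135
```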

Using the above theorem, the following asymptotic error formula can be proved.

Proposition 5.1

For any function \(f\in \mathscr {C}^8 [0,1] \) , there exists a point τ ∈ [0, 1] such that

$$\displaystyle \begin{aligned} E_n (f) =I(f)-I_n (f)=c_0h^8 f^{(8)}(\tau)+\mathscr{O}(h^9), \end{aligned} $$
(5.9)

where \(c_0=\frac {1107467}{3251404800}\simeq 3.41\times 10^{-4}.\)

Proof

The proof is similar to the proof of Theorem 2 in [6]. □

4 The Nyström Method

By using the quadrature scheme (5.8) to approximate the integral in (5.1), we obtain a new equation

$$\displaystyle \begin{aligned} u_n(s)-h\sum_{j\in \Gamma_n}\omega_j\kappa(s,t_j)u_n(t_j)=f(s),\quad s\in[0,1], \end{aligned} $$
(5.10)

where the unknowns are \(\{u_n(t_j),\ j\in \Gamma_n\}\); they can be computed by solving the following linear system of size n + 2

$$\displaystyle \begin{aligned} u_n(t_i)-h\sum_{j\in \Gamma_n}\omega_j\kappa(t_i,t_j)u_n(t_j)=f(t_i),\quad i\in \Gamma_n. \end{aligned} $$
(5.11)

From (5.10), the approximate solution \(u_n(s)\) is completely determined by its values at the nodes \((t_i)_{i\in \Gamma _n}\). In fact,

$$\displaystyle \begin{aligned} u_n (s) = f(s) + h\sum_{j\in \Gamma_n} \omega_j \kappa(s,t_j) u_n(t_j),\quad s\in[0,1]. \end{aligned}$$
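For concreteness, a minimal sketch of the scheme (5.10)–(5.11), built on the rule (5.8), is given below; the kernel and right-hand side correspond to Example 1 of Sect. 5.5, and all function names are ours, not the original Mathematica code.

```python
# A sketch of the Nystrom scheme based on the quadrature rule (5.8); names are illustrative.
import numpy as np

def nodes_and_weights(n):
    """Nodes t_j and products h*omega_j of the quadrature rule (5.8), for n >= 13."""
    h = 1.0 / n
    t = np.array([0.0] + [(j - 1.5) * h for j in range(2, n + 2)] + [1.0])
    bw = [101/735, 113221/138240, 1035241/967680, 464651/483840,
          3446899/3386880, 962903/967680, 193657/193536]
    return t, h * np.array(bw + [1.0] * (n - 12) + bw[::-1])

def nystrom_solve(kappa, f, n):
    """Solve the linear system (5.11) and return the interpolant s -> u_n(s) of (5.10)."""
    t, w = nodes_and_weights(n)
    A = np.eye(n + 2) - kappa(t[:, None], t[None, :]) * w[None, :]
    u_nodes = np.linalg.solve(A, f(t))
    return lambda s: f(s) + np.sum(w * kappa(s, t) * u_nodes)

# Example 1 of Sect. 5.5: kappa(s,t) = s^(1/2) t^8 with exact solution u(s) = s,
# so that f(s) = s - sqrt(s)/10 (since the integral of t^9 over [0,1] is 1/10).
kappa = lambda s, t: np.sqrt(s) * t**8
f = lambda s: s - np.sqrt(s) / 10.0
u16 = nystrom_solve(kappa, f, 16)
print(max(abs(u16(s) - s) for s in np.linspace(0.0, 1.0, 101)))   # should be very small (O(h^8))
```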

Now let \(\mathscr {K}\) be the integral operator defined by

$$\displaystyle \begin{aligned} (\mathscr{K}u)(s)=\int_0^1 \kappa(s,t)u(t)dt, \end{aligned}$$

and let \(\mathscr {K}_n\) be the following Nyström approximation

$$\displaystyle \begin{aligned} (\mathscr{K}_nu)(s) =h\sum \limits _{j\in \Gamma_n} \omega _j \kappa(s,t_j)u(t_j). \end{aligned}$$

The following theorem provides the framework needed to analyse the convergence of the Nyström method.

Theorem 5.2

Let κ(s, t) be a continuous kernel for s, t ∈ [0, 1]. Assume that the quadrature scheme (5.8) is convergent for all continuous functions on [0, 1]. Further, assume that the integral equation (5.1) is uniquely solvable for a given \(f\in \mathscr {C}[0,1]\) . Then, for all sufficiently large n, say n ≥ N, the operators \((I-\mathscr {K}_n)^{-1}\) exist and are uniformly bounded,

$$\displaystyle \begin{aligned} \Vert(I-\mathscr{K}_n)^{-1}\Vert_\infty\leq \frac{1+\Vert(I-\mathscr{K})^{-1}\Vert_\infty \Vert \mathscr{K}_n\Vert_\infty}{1-\Vert(I-\mathscr{K})^{-1}\Vert_\infty\Vert(\mathscr{K}-\mathscr{K}_n)\mathscr{K}_n\Vert_\infty}\leq c,\quad n\geq N \end{aligned}$$

with a suitable constant c < ∞. For the equations \((I-\mathscr {K})u=f\) and \((I-\mathscr {K}_n)u_n=f,\)

$$\displaystyle \begin{aligned} \Vert u-u_n\Vert_\infty & \leq\Vert(I-\mathscr{K})^{-1}\Vert_\infty\Vert(\mathscr{K}-\mathscr{K}_n)u\Vert_\infty, \end{aligned} $$
(5.12)
$$\displaystyle \begin{aligned} & \leq c\Vert(\mathscr{K}-\mathscr{K}_n)u\Vert_\infty,\quad n\geq N.{} \end{aligned} $$
(5.13)

Proof

See Atkinson [1, Theorem 4.1.2]. □

Theorem 5.3

Let u be the exact solution of (5.1). Assume that \(\kappa (s,.)u(.)\in \mathscr {C}^8[0,1]\) for all s ∈ [0, 1]. Then, for all sufficiently large n,

$$\displaystyle \begin{aligned} \Vert u-u_n\Vert_\infty=\mathscr{O}(h^8). \end{aligned} $$
(5.14)

Proof

The estimate (5.13) shows that \(\Vert u-u_n\Vert_\infty\) and \(\Vert (\mathscr {K}-\mathscr {K}_n)u\Vert _\infty \) converge to zero at the same rate. By (5.9), we have for s ∈ [0, 1] the asymptotic integration error

$$\displaystyle \begin{aligned} (\mathscr{K}u)(s)-(\mathscr{K}_nu)(s)=c_0h^8 \left[ \frac{\partial^8}{\partial t^8} \kappa(s,t)u(t)\right]_{t=\tau} +\mathscr{O}(h^9). \end{aligned} $$
(5.15)

Hence, from (5.13) and (5.15), the Nyström method converges with an order of \(\mathscr {O}(h^8)\), provided κ(s, t)u(t) is eight times continuously differentiable with respect to t, uniformly in s. □

An asymptotic series expansion for the Nyström solution u n is obtained below.

Theorem 5.4

Under the assumption of Theorem 5.3 , we have

$$\displaystyle \begin{aligned} u_n-u=c_0[(I-\mathscr{K})^{-1}\mathscr{W}u]h^8+\mathscr{O}(h^9), \end{aligned} $$
(5.16)

with

$$\displaystyle \begin{aligned} (\mathscr{W}u)(s)=\left[ \frac{\partial^8}{\partial t^8} \kappa(s,t)u(t)\right]_{t=\tau},\quad s\in[0,1]. \end{aligned}$$

Proof

Since

$$\displaystyle \begin{aligned} (I-\mathscr{K}_n)(u-u_n)=(\mathscr{K}-\mathscr{K}_n)u, \end{aligned}$$

we can write as in [1, Chap.4]

$$\displaystyle \begin{aligned} u-u_n=e_n+R_n, \end{aligned}$$

where

$$\displaystyle \begin{aligned} e_n & =(I-\mathscr{K})^{-1}(\mathscr{K}-\mathscr{K}_n)u, \\ R_n & =[(I-\mathscr{K}_n)^{-1}-(I-\mathscr{K})^{-1}](\mathscr{K}-\mathscr{K}_n)u, \\ & =(I-\mathscr{K}_n)^{-1}(\mathscr{K}_n-\mathscr{K}) (I-\mathscr{K})^{-1}(\mathscr{K}-\mathscr{K}_n)u. \end{aligned} $$

Using the asymptotic expansion (5.15), we get

$$\displaystyle \begin{aligned} e_n=[(I-\mathscr{K})^{-1}\mathscr{W}u]c_0h^8+\mathscr{O}(h^9) \end{aligned}$$

and

$$\displaystyle \begin{aligned} (I-\mathscr{K}_n)R_n & =(\mathscr{K}_n-\mathscr{K})e_n,\\ &=-c_0h^8 \left[ \frac{\partial^8}{\partial t^8} \kappa(.,t)e_n(t)\right]_{t=\tau^{\prime}_n} +\mathscr{O}(h^9),\\ &=-c_0h^{16} \left[ \frac{\partial^8}{\partial t^8} \kappa(.,t)c(t)\right]_{t=\tau^{\prime}_n}+\mathscr{O}(h^9), \end{aligned} $$

where \(c(t)=c_0[(I-\mathscr {K})^{-1}\mathscr {W}u](t)\) and \(\tau^{\prime}_n \in [0, 1]\). Letting \(S_n\) be the solution of the equation

$$\displaystyle \begin{aligned} (I-\mathscr{K}_n)S_n=-c_0h^7 \left[ \frac{\partial^8}{\partial t^8} \kappa(.,t)c(t)\right]_{t=\tau^{\prime}_n}, \end{aligned}$$

we deduce that

$$\displaystyle \begin{aligned} R_n(t)=(S_n(t)-S(t))h^9+S(t)h^9, \end{aligned}$$

where S satisfies

$$\displaystyle \begin{aligned} (I-\mathscr{K})S=-c_0 h^7\left[ \frac{\partial^8}{\partial t^8} \kappa(.,t)c(t)\right]_{t=\tau^{\prime}_n}. \end{aligned}$$

Taking into account that S − S n ≃ e n, we finally obtain

$$\displaystyle \begin{aligned} R_n(t)\simeq S(t)h^9. \end{aligned}$$

This completes the proof. □

One step of Richardson extrapolation can be used to further improve the order of convergence of \(u_n\). Let \(u_{2n}\) be the solution associated with the uniform partition of [0, 1] into 2n subintervals, with meshlength \(\frac {h}{2}.\) Define

$$\displaystyle \begin{aligned} u_{2n}^R=\frac{2^8u_{2n}-u_n}{2^8-1}. \end{aligned}$$
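A sketch of this extrapolation step, reusing the (hypothetical) nystrom_solve helper sketched earlier in this section, could read as follows.

```python
# One step of Richardson extrapolation; nystrom_solve refers to the earlier sketch.
def richardson(kappa, f, n):
    u_n  = nystrom_solve(kappa, f, n)       # meshlength h
    u_2n = nystrom_solve(kappa, f, 2 * n)   # meshlength h/2
    return lambda s: (2**8 * u_2n(s) - u_n(s)) / (2**8 - 1)
```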

Theorem 5.5

If \(\kappa (s,.)u(.)\in \mathscr {C}^8[0,1]\) for all s ∈ [0, 1], then, we have

$$\displaystyle \begin{aligned} \Vert u-u_{2n}^R\Vert_\infty=\mathscr{O}(h^9). \end{aligned} $$
(5.17)

Proof

From Theorem 5.4 we obtain

$$\displaystyle \begin{aligned} u_{2n}-u=c_0[(I-\mathscr{K})^{-1}\mathscr{W}u]\left(\frac{h}{2}\right)^8+\mathscr{O}(h^9). \end{aligned} $$
(5.18)

The estimate (5.17) follows from (5.16) and (5.18). □

5 Numerical Results

Example 1

Consider the following linear Fredholm integral equation of the second kind

$$\displaystyle \begin{aligned} u(s)- \int_{0}^{1} s^{\frac{1}{2}} t^8 u(t) dt =f(s) ,\quad s\in [0,1], \end{aligned}$$

where the exact solution is u(s) = s and f is chosen accordingly. The errors

$$\displaystyle \begin{aligned} \Vert u-u_n\Vert_\infty=\mathscr{O}(h^\alpha)\quad \text{and}\quad \Vert u-u^R_{2n}\Vert_\infty=\mathscr{O}(h^\beta) \end{aligned}$$

were approximated respectively by

$$\displaystyle \begin{aligned} \max\left\{ |u(\frac{i}{100})-u_n(\frac{i}{100})|,\;i=0,1,\ldots,100\right\} \end{aligned}$$

and

$$\displaystyle \begin{aligned} \max\left\{ |u(\frac{i}{100})-u^R_{2n}(\frac{i}{100})|,\;i=0,1,\ldots,100\right\}. \end{aligned}$$

Using two successive values of n, the values of α and β are computed and are listed in Table 5.1.
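A natural reading of the last sentence is that the observed orders are obtained from the errors \(e_n\simeq\Vert u-u_n\Vert_\infty\) at two successive values of n in the standard way, e.g.

$$\displaystyle \begin{aligned} \alpha\simeq\log_2\frac{e_n}{e_{2n}},\qquad \beta\simeq\log_2\frac{\Vert u-u^R_{2n}\Vert_\infty}{\Vert u-u^R_{4n}\Vert_\infty}. \end{aligned}$$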

Table 5.1 Nyström and extrapolated Nyström methods for example 1

From the above table it can be seen that the computed orders of convergence match well with the expected values.

Example 2

Consider the following Fredholm integral equation quoted from [6]

$$\displaystyle \begin{aligned} u(s)- \int_{0}^{1} (s+1) e^{-st} u(t) dt =f(s) ,\quad s\in [0,1], \end{aligned}$$

where f is chosen so that \(u(s) = \cos {}(s)\). The results are given in Table 5.2.

Table 5.2 Nyström and extrapolated Nyström methods for example 2

We denote by \(u^Q_n\) and \(u^{QB}_n\) the approximate solutions given by the Nyström method based on the integration of a quartic spline quasi-interpolant and by the Nyström method associated with the extrapolated quadrature formula \(I^{QB}\) of [6], respectively. The errors

$$\displaystyle \begin{aligned} \Vert u-u^Q_n\Vert_\infty=\mathscr{O}(h^\gamma)\quad \text{and}\quad \Vert u-u^{QB}_n\Vert_\infty=\mathscr{O}(h^\delta) \end{aligned}$$

which are listed in Table 5.3, are quoted from [6]. Note that the predicted values of γ and δ are 6 and 7, respectively. The numerical algorithm was run on a PC with an Intel Core i5 1.60 GHz CPU and 8 GB of RAM, and the programs were implemented in Wolfram Mathematica.

Table 5.3 Nyström methods based on \(I^Q\) and \(I^{QB}\) for Example 2

It can be seen from Tables 5.2 and 5.3 that the approximation \(u^R_{2n}\) with n = 32 is better than the approximation \(u^{QB}_n\) with n = 128.

6 Conclusion

The results, which are displayed in Table 5.1, show that very high accuracy is obtained even for a kernel which is only continuous with respect to the variable s. On the other hand, we obtained a significant improvement in performance in comparison with the quadrature rules of [6], and this is due to the fact that the order of convergence of the proposed method is higher. Note that the size of the corresponding linear system is n + 2. It can be shown that, to solve the present problem by a piecewise polynomial interpolation scheme, a linear system of size at least 4n would need to be solved to reach a comparable order of accuracy.