1 Introduction

Integral equations form a branch of mathematics dealing with equations in which an unknown function appears under an integral sign. Such equations arise in many areas of science and engineering, such as physics, chemistry, biology, economics, signal processing, potential theory, electrostatics and fluid dynamics [1,2,3,4,5,6,7,8,9,10,11,12]. The theory of integral equations is an important tool for understanding and solving complex problems in various fields of science and engineering. Consider the second kind nonlinear Volterra integral equation

$$\begin{aligned} u(x)=\int _{0}^{x}G\left( x,t,u(t)\right) \textrm{d}t+g(x), ~~~~~ x\in [0,T], \end{aligned}$$
(1.1)

where g and G are given functions. A large variety of numerical methods has been developed to approximate the solution of Volterra integral equations. In [13], the higher-degree fuzzy transform technique is used for the numerical solution of second kind Volterra integral equations with singular and nonsingular kernels. A smoothing transformation technique based on quasi-linearization and product integration methods has been used for the numerical solution of nonlinear weakly singular Volterra integral equations in [14]. A solution has been obtained in [15], using the fixed point technique in the setting of dislocated extended b-metric spaces, for Volterra integral equations. Xiao-yong [16] developed a high-order algorithm, a nontrivial extension of single-step methods, for the solution of nonlinear Volterra integral equations with vanishing variable delays. A Runge–Kutta method based on Chebyshev polynomials of the first kind has been developed in [17] to approximate the solution of nonlinear stiff Volterra integral equations of the second kind. In [18], a collocation method based on Chebyshev polynomial interpolation, which uses the roots of a Chebyshev polynomial as collocation points, is developed for the numerical solution of Volterra integral equations of the second kind. Many other numerical approaches, such as the Chebyshev collocation method [19], the Romberg extrapolation method [20, 21], the iterated Galerkin method [22] and the finite difference method [23], have been used to approximate the solution of Volterra integral equations of the second kind.

The use of spline functions in the numerical solution of integral equations dates back to the 1970s. Hung [24] used spline functions to approximate the solution of Volterra integral equations of the first and second kind. He obtained a convergent scheme based on linear and quadratic splines, and also proved that using cubic splines leads to divergent approximations. Tom [25] developed an approximation based on spline functions for the solution of Volterra integral equations of the second kind. It is proved that splines of degree greater than two with maximum degree of smoothness lead to unstable approximations for the solution of Volterra equations of the second kind. Tom [26] determined conditions for the stability of spline-based methods for the solution of Volterra integral equations. In [27], a \(C^2\)-cubic spline is used to approximate the solution of the Volterra integral equation of the second kind, but the question of stability is not answered. Kauthen [28] developed a polynomial spline collocation method for the solution of integral-algebraic equations of Volterra type. In a series of papers [29,30,31,32], Oja et al. discussed the stability and convergence of spline collocation methods for the solution of Volterra integral equations of the second kind. It is proved that solving Volterra integral equations by step-by-step collocation using m-th degree splines of class \(C^{m-1}[0,T]\) leads to unstable and divergent approximations for \(m\ge 3.\) In recent years, many researchers have attempted to approximate the solution of second kind Volterra integral equations using splines [33,34,35,36,37].

In the current work, we devise a novel approach anchored in the B-spline collocation method, tailored to solve nonlinear Volterra integral equations of the second kind. Our methodology is designed taking into account both odd- and even-degree splines, which are treated separately to harness their respective strengths. For odd-degree splines, we collocate the problem at the grid points of the uniform partition, while the midpoints of the partition are used for even degrees. For \(m\ge 2\), some extra relations are needed to uniquely determine the spline approximation. To construct such extra relations, we collocate the problem at near-boundary midpoints for odd values of m and at near-boundary grid points for even values of m. First, we prove that such a spline exists and we obtain local error bounds for interpolating sufficiently smooth functions. Then, we use the method to approximate the solution of nonlinear Volterra integral equations of the second kind. It is proved that the method is super-convergent for even m.

The rest of the paper is organized as follows: In Sect. 2, we will introduce a B-spline collocation method with new end conditions and we will obtain some local error bounds to be used in the convergence analysis. In Sect. 3, we will construct our new approach based on proposed B-spline for the solution of the nonlinear Volterra integral equations. Section 4 is devoted to the convergence analysis and error bound of the method. In Sect. 5, we will solve some problems to show the applicability and efficiency of the proposed method.

2 B-Spline Collocation Method

B-splines are the splines with smallest compact support, which means that the basis functions are non-zero only within a small interval or segment of the domain. Such a property makes B-splines computationally efficient, as the calculations can be localized to these smaller segments. B-splines form a partition of unity, which means that their sum over the entire domain is equal to one. This property allows for the construction of a global approximation by combining local B-spline basis functions. Each basis function is associated with a specific control point and only affects a small region of the domain. By properly choosing the control points and their corresponding basis functions, it is possible to create a global approximation that accurately represents the behavior of the solution throughout the domain. The partition of unity property also ensures that the approximated solution satisfies important properties, such as conservation laws or boundary conditions. This is crucial in many applications where physical laws or constraints need to be preserved. The partition of unity and the non-negativity of the B-spline basis functions are equivalent to the affine invariance and convex hull properties of B-spline curves; this means that a B-spline curve is contained in the convex hull of its control polyline. Consequently, as the interpolated function u moves from a to b and crosses a knot, a basis function becomes zero and a new non-zero basis function becomes effective. As a result, one control point, whose coefficient becomes zero, leaves the definition of the current convex hull and is replaced with a new control point whose coefficient becomes non-zero. One of the advantages of using B-splines for this purpose is their ability to provide a smooth and continuous representation of the solution. This is particularly important in the context of differential equations, where the solution is often expected to be differentiable and to exhibit certain smoothness properties.

In what follows, we provide a succinct overview of (cardinal) B-spline functions and their properties. Afterward, we introduce a novel numerical scheme that utilizes the Gauss–Legendre quadrature method. This scheme aims to solve integral equations numerically with high precision and a high rate of convergence. Readers who wish to delve deeper into the subject are referred to [38] and [39] for comprehensive information on B-splines and the Gauss–Legendre quadrature method.

Let \(\Delta \equiv \{0=x_{0}< x_{1}< \cdots < x_{n}=T\}\) be a uniform partition of the interval [0, T] with step size \(h=\frac{T}{n}\) and let \(\Omega \equiv \left\{ \tau _{i}=x_{i}-\frac{h}{2}\right\} _{i=1}^{n}\) be the set of midpoints of \(\Delta \). Also, denote by \(I_i=[x_{i-1},x_{i}), i=1,\ldots , n-1\), and \(I_n=[x_{n-1},x_{n}]\) the subintervals of the partition \(\Delta \). Given a positive integer \(m<n\), the space of piecewise polynomials of degree m is defined as

$$\begin{aligned} \mathcal {P}\Pi _{m}(\Delta )\equiv \left\{ q: \exists p_i\in \Pi _{m}, i=1,\ldots ,n, ~ s.t.~ q(x)=p_i(x), \forall x \in I_i \right\} , \end{aligned}$$

where \(\Pi _{m}\) is the space of polynomials of degree at most m. Now, for \(\Delta \) and m defined above, the space of polynomial splines of degree m with simple knots at the points \(x_{0}, x_{1},\ldots ,x_{n}\) can be defined as

$$\begin{aligned} \mathcal{S}\mathcal{P}_{m}(\Delta )\equiv \mathcal {P}\Pi _{m}(\Delta ) \cap C^{m-1}[0,T], \end{aligned}$$

where \(C^{m-1}[0,T]\) is the space of \(m-1\) times continuously differentiable functions defined on [0, T]. It is the smoothest space of piecewise polynomials of degree m. Polynomial spline spaces are finite-dimensional linear spaces with very convenient bases. In what follows, following [38], we will introduce a basis for \(\mathcal{S}\mathcal{P}_{m}(\Delta )\) whose members have compact support and which forms a partition of unity.

Let us define the extended partition \(\tilde{\Delta }=\{\tilde{x}_{i}\}_{i=0}^{n+2m}\), associated with \(\Delta \) with the knots \(\tilde{x}_{i}\) defined as

$$\begin{aligned} \tilde{x}_{0}\le \cdots \le \tilde{x}_{m} = 0, ~~~~ T = \tilde{x}_{n+m}\le \cdots \le \tilde{x}_{n+2m} \end{aligned}$$

and

$$\begin{aligned} \tilde{x}_{m+i}= x_i, ~~~ i=1,\ldots ,n-1.\end{aligned}$$

It should be noted that the interior knots \(\{\tilde{x}_{i}\}_{i=m}^{m+n}\) are uniquely determined while the first and last m points can be chosen arbitrarily. Let \(\tilde{\Delta }\) be the extended partition associated with \(\Delta \) and define

$$\begin{aligned} \mathfrak {B}_i(x)=(-1)^{m+1}(\tilde{x}_{m+i+1}-\tilde{x}_{i})\left[ \tilde{x}_{i},\ldots ,\tilde{x}_{m+i+1}\right] (x-y)_{+}^{m}, ~~~~ 0\le i \le n+m-1\end{aligned}$$

where \((x-y)_{+}^{m}\) is the truncated power function

$$\begin{aligned} (x-y)_{+}^{m}=\left\{ \begin{array}{llllll} (x-y)^{m} &{} x\ge y \\ 0 &{} x<y \end{array}, \right. \end{aligned}$$

and the divided difference operator \(\left[ \tilde{x}_{i},\ldots ,\tilde{x}_{m+i+1}\right] (x-y)_{+}^{m}\) is defined in [38], Definition 2.49. The set of functions \(\{\mathfrak {B}_i(x), ~ 0 \le i \le n+m-1 \}\) forms a basis for the spline space \(\mathcal{S}\mathcal{P}_{m}(\Delta )\) whose members have compact support, that is,

$$\begin{aligned} \mathfrak {B}_i(x)= 0, ~~~~ x\notin [\tilde{x}_{i},\tilde{x}_{i+m+1}] \end{aligned}$$

and

$$\begin{aligned} \mathfrak {B}_i(x)> 0, ~~~~ x\in (\tilde{x}_{i},\tilde{x}_{i+m+1}). \end{aligned}$$

Moreover, the basis functions form a partition of unity, that is,

$$\begin{aligned} \sum _{i=0}^{n+m-1}\mathfrak {B}_i(x)=1,~~~ \forall x \in [0,T]. \end{aligned}$$

Equivalently, in terms of the cardinal B-spline function

$$\begin{aligned} \mathcal {B}_{m+1}(x)=\frac{1}{m!}\sum _{j=0}^{m+1}(-1)^{j}\begin{pmatrix}m+1\\ j\end{pmatrix}(x-j)_{+}^{m}, \end{aligned}$$
(2.1)

the B-spline basis functions can be constructed as follows [40]

$$\begin{aligned} \mathfrak {B}_{i}(x)=\mathcal {B}_{m+1}\left( \frac{x-x_{0}}{h}-i+2\right) , ~~ i=0,\ldots ,n+m-1, \end{aligned}$$

which means that

$$\begin{aligned} \mathcal{S}\mathcal{P}_{m}(\Delta )=span\left\{ \mathcal {B}_{m+1}\left( \frac{x-x_{0}}{h}-i+2\right) , ~~ i=0,\ldots ,n+m-1\right\} . \end{aligned}$$

Therefore, any spline \(s\in \mathcal{S}\mathcal{P}_{m}(\Delta )\) can be written in the following form

$$\begin{aligned} s(x)=\sum _{i=0}^{n+m-1}c_{i}\mathcal {B}_{m+1}\left( \frac{x-x_{0}}{h}-i+2\right) , \end{aligned}$$

where \(c_i\), \(i = 0,\ldots , n+m-1\), are unknown coefficients to be determined. We define the interpolation conditions on the grid points and the midpoints of the partition for odd and even m, respectively. For \(f\in C[0,T]\), let s(x) satisfy the interpolatory conditions

$$\begin{aligned} s(\tau _{i})=f(\tau _{i}), ~~~~ 1\le i\le n, \end{aligned}$$
(2.2)

for even m, and

$$\begin{aligned} s(x_{i})=f(x_{i}), ~~~~ 0\le i\le n, \end{aligned}$$
(2.3)

for odd m.
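Before proceeding, the cardinal B-spline (2.1) underlying the representation above is easy to realize numerically. The following minimal Python sketch (our own illustration; the function name is hypothetical) evaluates \(\mathcal {B}_{m+1}\) directly from the truncated-power sum and checks the partition-of-unity property of its integer translates:

```python
from math import comb, factorial

def cardinal_bspline(m, x):
    # B_{m+1}(x) from (2.1): degree m, support [0, m+1]
    return sum((-1)**j * comb(m + 1, j) * max(x - j, 0.0)**m
               for j in range(m + 2)) / factorial(m)

# partition of unity: the integer translates of B_{m+1} sum to one
m, x = 3, 0.37
total = sum(cardinal_bspline(m, x - i) for i in range(-m, 1))
```

For \(m = 3\) this reproduces the familiar cubic B-spline values \(\mathcal {B}_{4}(1)=1/6\) and \(\mathcal {B}_{4}(2)=2/3\) at the interior integer knots, and `total` equals one up to rounding.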

As we know, the dimension of \(\mathcal{S}\mathcal{P}_{m}(\Delta )\) is \(n+m\); thus, in order to uniquely determine s(x), we need \(m-1\) end conditions for odd m and m end conditions for even m, along with the interpolatory conditions. To avoid using derivatives of the solution at the initial point as additional conditions, we introduce the following alternative end conditions:

$$\begin{aligned} s(x_{i})=f(x_{i}), ~~~~ i\in \{0,1,\ldots ,\delta \}\cup \{n-\delta ,\ldots ,n\}, \end{aligned}$$
(2.4)

for even m and

$$\begin{aligned} s(\tau _{i})=f(\tau _{i}), ~~~~ i\in \{1,\ldots ,\sigma \}\cup \{n-\sigma +1,\ldots ,n\}, \end{aligned}$$
(2.5)

for odd m, where \(\delta =[\frac{m-1}{2}]\) and \(\sigma =[\frac{m}{2}]\) (\([\nu ]\) denotes the greatest integer not exceeding \(\nu \)).
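To make the bookkeeping concrete, the following Python sketch (our own illustration; the helper name is hypothetical) counts the interpolatory conditions (2.2)/(2.3) together with the end conditions (2.4)/(2.5) and confirms that the total always equals \(n+m\), the dimension of \(\mathcal{S}\mathcal{P}_{m}(\Delta )\):

```python
def num_conditions(m, n):
    # total number of interpolatory + end conditions for degree m on n panels
    if m % 2 == 0:
        delta = (m - 1) // 2           # delta = [(m-1)/2]
        return n + 2 * (delta + 1)     # (2.2) midpoints + (2.4) end grid points
    else:
        sigma = m // 2                 # sigma = [m/2]
        return (n + 1) + 2 * sigma     # (2.3) grid points + (2.5) end midpoints

# each case matches dim SP_m(Delta) = n + m
checks = [(m, n, num_conditions(m, n) == n + m)
          for m in range(2, 8) for n in (10, 25)]
```

For example, for \(m=4\), \(n=10\) one obtains \(\delta =1\), hence \(10\) midpoint conditions plus \(4\) end conditions, i.e. \(14 = n+m\) conditions in total.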

To establish that the spline interpolant introduced above is well defined, we refer to the following Theorem from [41], which provides a necessary and sufficient condition for the unique solvability of the resulting system.

Theorem 2.1

The coefficient matrix \(A=\left( \mathcal {B}_i(x_j)\right) \) of the B-spline interpolating a function at the grid points \(x_j\) is nonsingular if and only if all diagonal elements \(\mathcal {B}_i(x_i)\) are nonzero.

The subsequent Theorem presents a local error estimate for the aforementioned spline interpolation at both the grid points and the midpoints of the partition. First, we need to recall the following Lemma from [42].

Lemma 2.1

Let \(A=\{a_{ij}\}\) be an \(n\times n\) matrix with \(|a_{ii}|\ge \sum _{j=1,j\ne i}^{n}|a_{ij}|+\varepsilon ,\) \(1\le i \le n\), where \(\varepsilon >0\), then we have \(\Vert A^{-1}\Vert _{\infty }<\varepsilon ^{-1}\).

Theorem 2.2

Let \(s\in \mathcal{S}\mathcal{P}_{m}(\Delta )\) be the unique spline interpolant of \(f\in C^{m+2}[0,T]\) satisfying the interpolatory conditions (2.2)–(2.4) for even m or (2.3)–(2.5) for odd m; then

$$\begin{aligned} s(x_{i})=f(x_{i})+O(h^{m+2}), ~~~~ 0 \le i \le n, \end{aligned}$$
(2.6)

if m is even, and

$$\begin{aligned} s(\tau _{i})=f(\tau _{i})+O(h^{m+1}), ~~~~ 1\le i \le n, \end{aligned}$$
(2.7)

if m is odd.

Proof

We will prove the first relation; the second one can be proved in a similar manner. Consider the following consistency relation, which connects s(x) at the midpoints and grid points of the partition [40]

$$\begin{aligned} \sum _{j=-\delta -1}^{\delta +1}l_{j}s(x_{i+j})=\sum _{j=-\delta }^{\delta +1}d_{j}s(\tau _{i+j}), \hspace{28.45274pt}\delta< i < n-\delta , \end{aligned}$$
(2.8)

where

$$\begin{aligned} l_{i}=\mathcal {B}_{m+1}\left( m+\frac{1}{2}-i\right) , \hspace{28.45274pt}d_{i}=\mathcal {B}_{m+1}\left( m-i\right) . \end{aligned}$$
(2.9)

Let us define the error function \(e(x)=s(x)-f(x)\). Subtracting \(\sum _{j=-\delta -1}^{\delta +1}l_{j}f(x_{i+j})\) from both sides of (2.8), we have

$$\begin{aligned}{} & {} \sum _{j=-\delta -1}^{\delta +1}l_{j}s(x_{i+j})-\sum _{j=-\delta -1}^{\delta +1}l_{j}f(x_{i+j})\\{} & {} \qquad =\sum _{j=-\delta }^{\delta +1}d_{j}s(\tau _{i+j})-\sum _{j=-\delta -1}^{\delta +1}l_{j}f(x_{i+j}), \delta< i < n-\delta . \end{aligned}$$

Using the definition of e(x), the left-hand side can be written as \(\sum _{j=-\delta -1}^{\delta +1}l_{j}e(x_{i+j})\), and by the interpolatory conditions (2.2), \(s(\tau _{k})\) on the right-hand side can be replaced by \(f(\tau _{k})\) to obtain

$$\begin{aligned} \sum _{j=-\delta -1}^{\delta +1}l_{j}e(x_{i+j})=\sum _{j=-\delta }^{\delta +1}d_{j}f(\tau _{i+j})-\sum _{j=-\delta -1}^{\delta +1}l_{j}f(x_{i+j}),\hspace{17.07182pt}\delta< i < n-\delta . \end{aligned}$$

By assumption, \(f\in C^{m+2}[0,T]\), so \(M_1=\max _{0 \le x\le T}|f^{(m+2)}(x)|<\infty \). Expanding f in both terms on the right-hand side of the relation above in a Taylor series about \(x_{i}\) and simplifying, we obtain

$$\begin{aligned} \sum _{j=-\delta -1}^{\delta +1}l_{j}e(x_{i+j})= O(M_1\hspace{1.42271pt}h^{m+2})=O(h^{m+2}), ~~~~~ \delta< i < n-\delta . \end{aligned}$$
(2.10)

The system above contains \(n-m+1\) equations in the \(n+1\) unknowns \(e(x_{i}),~ 0\le i\le n\). Using (2.4) and (2.10) together, we obtain the following \((n+1)\times (n+1)\) system

$$\begin{aligned} \left\{ \begin{array}{llllll} e(x_{0})=e(x_{1})=\cdots =e(x_{\delta })=0,\\ \sum _{j=-\delta -1}^{\delta +1}l_{j}e(x_{i+j})=O(h^{m+2}), &{}&{} \delta< i < n-\delta , \\ e(x_{n-\delta })=\cdots =e(x_{n-1})=e(x_{n})=0. \end{array} \right. \end{aligned}$$
(2.11)

Obviously, certain elements within the above system are zero. By eliminating the unknowns corresponding to these zero elements, the size of the system can be reduced. Consequently, the system can be reformulated in its matrix form as shown below:

$$\begin{aligned} \mathcal {A} \hspace{1.42271pt}E=O(h^{m+2}) \end{aligned}$$
(2.12)

where

$$\begin{aligned} E=[e(x_{\delta +1}),e(x_{\delta +2}),\ldots ,e(x_{n-\delta -1})]^T \end{aligned}$$

and

$$\begin{aligned} \mathcal {A}=\left( \begin{array}{ccccccccccccc} l_{\delta } &{} l_{\delta +1} &{} \cdots &{} l_{m} &{} &{} \\ l_{\delta -1} &{} l_{\delta } &{} \cdots &{} l_{m-1} &{} l_{m} &{} &{} \\ &{} &{}\ddots &{} \\ l_{0} &{} l_{1} &{} \cdots &{} l_{\delta } &{}\cdots &{} l_{m-1} &{} l_{m} &{} &{} \\ &{}&{} &{} &{} \ddots &{} \\ &{} &{} l_{0}&{} l_{1} &{} \cdots &{} l_{\delta } &{} \cdots &{}l_{m-1} &{}l_{m}\\ &{} &{} &{}&{} &{} &{} \ddots &{} &{} \\ &{} &{} &{} &{} l_{0} &{} l_{1} &{} \cdots &{} l_{\delta }&{} l_{\delta +1}\\ &{} &{} &{} &{} &{} l_{0} &{} l_{1} &{} \cdots &{} l_{\delta } \\ \end{array} \right) .\qquad \end{aligned}$$
(2.13)

According to Theorem 2.1, the coefficient matrix \(\mathcal {A}\) is nonsingular if all its diagonal elements are nonzero. Specifically, all the diagonal elements of \(\mathcal {A}\) are equal to \(l_{\delta }\), which has the value

$$\begin{aligned} l_{\delta }=\mathcal {B}_{m+1}\left( m+\frac{1}{2}-\delta \right) . \end{aligned}$$

Since \(m+\frac{1}{2}-\delta \) lies in the support of \(\mathcal {B}_{m+1}\), we have \(l_{\delta }\ne 0\) and the matrix \(\mathcal {A}\) is nonsingular. Therefore, from (2.12), we obtain

$$\begin{aligned} \Vert E\Vert =\left\| \mathcal {A}^{-1}\right\| O(h^{m+2}). \end{aligned}$$
(2.14)

For \(m\le 6\), the system is strictly diagonally dominant, and using Lemma 2.1 we have \(\left\| \mathcal {A}^{-1}\right\| _{\infty }<46\); in this case the proof is complete. Now let \(m>6.\) It can be readily verified that for each B-spline basis function all \(l_i\) values are positive and \(\sum _{i=0}^{m}l_i=1\), so \(\Vert \mathcal {A}\Vert _{_1}=1\) and \(\Vert \mathcal {A}\Vert _{_\infty }=1\). By leveraging the properties of the B-spline basis function \(\mathcal {B}_{m+1}\), it results that

$$\begin{aligned} l_{i}=l_{m-i}, ~~~~~ i=0,\ldots ,\frac{m}{2}, \end{aligned}$$

thus \(\mathcal {A}\) is a symmetric matrix. On the other hand, the nonsingularity of \(\mathcal {A}\) implies that all of its eigenvalues are nonzero; in particular, \(\mu (\mathcal {A}) \ne 0\), where \(\mu (\mathcal {A})\) denotes the smallest of the absolute values of the eigenvalues of \(\mathcal {A}\). Drawing upon concepts from linear algebra, we have

$$\begin{aligned} \Vert \mathcal {A}\Vert _{_2}=\sqrt{\rho (\mathcal {A}^T\mathcal {A})}=\rho (\mathcal {A}) \end{aligned}$$
(2.15)

where \(\rho (\mathcal {A})\) denotes the largest of the absolute values of the eigenvalues of \(\mathcal {A}\), and the last equality holds because \(\mathcal {A}\) is symmetric. On the other hand, by Gershgorin's theorem, every eigenvalue \(\lambda \) of \(\mathcal {A}\) lies in one of the Gershgorin discs

$$\begin{aligned}|\lambda -l_{\delta }|\le \sum _{i=0, i\ne \delta }^{m}|l_{i}|\end{aligned}$$

which yields

$$\begin{aligned} -1<l_{\delta }-\sum _{i=0, i\ne \delta }^{m}|l_{i}| \le \lambda \le l_{\delta }+\sum _{i=0, i\ne \delta }^{m}|l_{i}|=1 ~\Rightarrow ~ \rho (\mathcal {A}) \le 1. \end{aligned}$$

It is noteworthy that numerical experiments show that, for fixed m and varying n, \(\rho (\mathcal {A})\ne 1\), which means that \(\Vert \mathcal {A}\Vert _{_2}<1\). Consequently, based on the principles of linear algebra for symmetric matrices, we have

$$\begin{aligned} \Vert \mathcal {A }\Vert _{_2}\Vert \mathcal {A}^{-1} \Vert _{_2}=\frac{\rho (A)}{\mu (A)} \end{aligned}$$
(2.16)

which means that \(\Vert \mathcal {A}^{-1} \Vert _{_2}\) is finite. Denoting by \(M_{2}\) an upper bound of \(\Vert \mathcal {A}^{-1} \Vert _{_2}\) and substituting into (2.14), we obtain

$$\begin{aligned} \Vert E\Vert _{_\infty } \le \Vert E\Vert _{_2}=\left\| \mathcal {A}^{-1}\right\| _{_2}O(h^{m+2})=M_{2}O\left( h^{m+2}\right) =O\left( h^{m+2}\right) . \end{aligned}$$
(2.17)

This, together with (2.4), completes the proof of (2.6). \(\square \)
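As a numerical aside, the properties of the weights (2.9) invoked in the proof above, namely positivity, summation to one and the symmetry \(l_i = l_{m-i}\), are easy to verify directly; a short Python check (our own illustration), reusing formula (2.1) for \(\mathcal {B}_{m+1}\):

```python
from math import comb, factorial

def B(m, x):
    # cardinal B-spline B_{m+1}(x) of (2.1), support [0, m+1]
    return sum((-1)**j * comb(m + 1, j) * max(x - j, 0.0)**m
               for j in range(m + 2)) / factorial(m)

m = 4                                             # an even degree
l = [B(m, m + 0.5 - i) for i in range(m + 1)]     # l_i of (2.9)
d = [B(m, m - i)       for i in range(m)]         # nonzero d_i of (2.9)
```

Both families sum to one because they sample \(\mathcal {B}_{m+1}\) at a full set of unit-spaced translates inside its support; the symmetry of `l` reflects the symmetry of \(\mathcal {B}_{m+1}\) about the midpoint of its support.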

3 Numerical Method

In this section, we present a numerical methodology based on B-spline collocation for solving integral equations. The proposed approach offers a robust and computationally efficient framework for tackling such equations. By employing B-splines to represent the unknown function in the integral equation, we can construct a concise and highly accurate representation, leading to a significant reduction in computational effort while preserving solution accuracy. Throughout this section, we detail the theoretical foundation, algorithmic implementation, and numerical considerations of our B-spline-based method. We further substantiate the effectiveness of our approach through comprehensive examples, including nonlinear integral equations, in Sect. 5.

The operator representation of the integral Eq. (1.1) can be written as follows

$$\begin{aligned} u(x)=\mathcal {K}u(x)+g(x), \end{aligned}$$
(3.1)

where \(\mathcal {K}:C^{m+1}\rightarrow C^{m+1}\) is a nonlinear integral operator in the following form:

$$\begin{aligned} \mathcal {K}u(x)=\int _{0}^{x}G\left( x,t,u(t)\right) \textrm{d}t, ~~~ x\in [0,T]. \end{aligned}$$
(3.2)

Assuming the existence and uniqueness of the solution to Eq. (3.1), our objective is to employ an m-th degree B-spline of the following form

$$\begin{aligned}s(x)=\sum _{j=0}^{n+m-1}c_{j}\mathcal {B}_{m+1}\left( \frac{x-x_0}{h}-j+2\right) ,\end{aligned}$$

to approximate the solution.

3.1 Even Degree B-Spline

Let m be even. Replacing u(x) with s(x) in Eq. (3.1) and collocating the resulting equation at the midpoints of the partition, we have

$$\begin{aligned} s(\tau _{i})=\int _{0}^{\tau _{i}}G\left( \tau _{i},t,s(t)\right) \textrm{d}t+g(\tau _{i}), \hspace{19.91684pt}1 \le i \le n. \end{aligned}$$
(3.3)

Then, employing the B-spline basis functions at the points \(\tau _{i}\), Eq. (3.3) can be expressed as

$$\begin{aligned}{} & {} \sum _{j=0}^{n+m-1}c_{j}\mathcal {B}_{m+1}\left( \frac{\tau _i-x_0}{h}-j+2\right) \nonumber \\{} & {} \quad =\int _{0}^{\tau _{i}}G\left( \tau _{i},t,\sum _{j=0}^{n+m-1}c_{j}\mathcal {B}_{m+1}\left( \frac{t-x_0}{h}-j+2\right) \right) \textrm{d}t+g(\tau _{i}),\qquad \end{aligned}$$
(3.4)

for \(1 \le i \le n\). The system described above consists of n equations in \(m+n\) unknowns. In order to ensure the unique determination of the spline approximation, we require m additional equations. To this end, we impose the collocation equation at the m boundary and near-boundary grid points in the following manner

$$\begin{aligned} s(x_{i})=\int _{0}^{x_{i}}G\left( x_{i},t,s(t)\right) \textrm{d}t+g(x_{i}), ~~~~ i\in \{0,1,\ldots ,\delta \}\cup \{n-\delta ,\ldots ,n\}. \end{aligned}$$
(3.5)

Utilizing the B-spline basis functions, Eq. (3.5) can be expressed as

$$\begin{aligned}{} & {} \sum _{j=0}^{n+m-1}c_{j}\mathcal {B}_{m+1}\left( \frac{x_i-x_0}{h}-j+2\right) \nonumber \\{} & {} \quad =\int _{0}^{x_{i}}G\left( x_{i},t,\sum _{j=0}^{n+m-1}c_{j}\mathcal {B}_{m+1}\left( \frac{t-x_0}{h}-j+2\right) \right) \textrm{d}t+g(x_{i}),\qquad \end{aligned}$$
(3.6)

for \(i\in \{0,1,\ldots ,\delta \}\cup \{n-\delta ,\ldots ,n\}\). Relation (3.4) together with (3.6) yields a system of \(n+m\) equations in the \(n+m\) unknowns \(c_{j},~ j=0,\ldots ,n+m-1\), which must be solved to determine s(x). The solution of this system leads to the desired approximation of s(x) in terms of the proposed B-spline basis functions.

To handle the integrals in (3.4) and (3.6) effectively, we employ the q-point Gauss–Legendre quadrature method. This numerical integration technique is well suited to approximating integrals accurately over specific intervals by distributing q quadrature points in a way that optimally captures the behavior of the underlying function [39]. Introducing the notation \(z(t)=G\left( \tau _{i},t,\sum _{j=0}^{n+m-1}c_{j}\mathcal {B}_{m+1}\left( \frac{t-x_0}{h}-j+2\right) \right) \) and applying a change of variables, the integrals in (3.4) can be reformulated as follows

$$\begin{aligned} I_{i}= & {} \int _{0}^{\tau _{i}}z(t)\textrm{d}t=\int _{0}^{\tau _{1}}z(t)\textrm{d}t+\int _{\tau _{1}}^{\tau _{i}}z(t)\textrm{d}t \nonumber \\= & {} \frac{\tau _{1}}{2}\int _{-1}^{1}z\left( \frac{\tau _{1}}{2}(\eta +1)\right) d\eta +\sum _{r=1}^{i-1}\frac{\tau _{r+1}-\tau _{r}}{2}\int _{-1}^{1}z\left( \frac{\tau _{r+1}- \tau _{r}}{2}\eta +\frac{\tau _{r+1}+\tau _{r}}{2}\right) d\eta . \end{aligned}$$

Now, by incorporating the Gauss–Legendre quadrature into our system of equations, we can efficiently compute the unknown coefficients \(c_j\), ultimately leading to a highly accurate approximation of s(x). We end up with

$$\begin{aligned} I_{i}= & {} \frac{\tau _{1}}{2}\sum _{k=1}^{q}\vartheta _{k} \hspace{2.84544pt}z\left( \frac{\tau _{1}}{2}(\eta _{k}+1)\right) \nonumber \\{} & {} +\sum _{r=1}^{i-1}\frac{\tau _{r+1}-\tau _{r}}{2}\sum _{k=1}^{q}\vartheta _{k}\hspace{2.84544pt}z\left( \frac{\tau _{r+1}- \tau _{r}}{2}\eta _k+\frac{\tau _{r+1}+\tau _{r}}{2}\right) , \end{aligned}$$
(3.7)

where \(\vartheta _{k}\) and \(\eta _{k}\) represent the weights and abscissas of the q-point Gauss–Legendre quadrature formula. In the same manner, introducing the notation \(\tilde{z}(t)=G\left( x_{i},t,\sum _{j=0}^{n+m-1}c_{j}\mathcal {B}_{m+1}\left( \frac{t-x_0}{h}-j+2\right) \right) \), the integrals in (3.6) can be approximated as follows

$$\begin{aligned} \tilde{I}_{i}=\int _{0}^{x_{i}}\tilde{z}(t)\textrm{d}t=\sum _{r=0}^{i-1}\frac{x_{r+1}-x_{r}}{2}\sum _{k=1}^{q}\vartheta _{k}\hspace{2.84544pt}\tilde{z}\left( \frac{x_{r+1}- x_{r}}{2}\eta _k+\frac{x_{r+1}+x_{r}}{2}\right) . \end{aligned}$$
(3.8)
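To make the composite rule (3.7) concrete, the following Python sketch (our own illustration; \(q=3\) and the cosine integrand are chosen purely for exposition) evaluates \(\int _{0}^{\tau _i} z(t)\,\textrm{d}t\) by applying the Gauss–Legendre rule on the panel \([0,\tau _1]\) and on each panel \([\tau _r,\tau _{r+1}]\):

```python
from math import sin, cos, sqrt

T, n = 1.0, 8
h = T / n
tau = [(i - 0.5) * h for i in range(n + 1)]   # tau[i] = x_i - h/2 (tau[0] unused)

# 3-point Gauss-Legendre nodes (eta_k) and weights (vartheta_k) on [-1, 1]
eta = [-sqrt(0.6), 0.0, sqrt(0.6)]
vartheta = [5 / 9, 8 / 9, 5 / 9]

def gauss(z, a, b):
    # q-point Gauss-Legendre approximation of the integral of z over [a, b]
    return 0.5 * (b - a) * sum(w * z(0.5 * (b - a) * e + 0.5 * (b + a))
                               for e, w in zip(eta, vartheta))

def I(z, i):
    # composite rule (3.7): panel [0, tau_1] plus the panels [tau_r, tau_{r+1}]
    return gauss(z, 0.0, tau[1]) + sum(gauss(z, tau[r], tau[r + 1])
                                       for r in range(1, i))

approx = I(cos, 5)   # approximates sin(tau_5) to quadrature accuracy
```

With \(z(t)=\cos t\), the exact value \(\sin (\tau _5)\) is reproduced to near machine precision, since the 3-point rule is already of order six on each short panel.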

3.2 Odd Degree B-Spline

For odd m, we collocate the problem at the grid points of the partition as follows

$$\begin{aligned}{} & {} \sum _{j=0}^{n+m-1}c_{j}\mathcal {B}_{m+1}\left( \frac{x_i-x_0}{h}-j+2\right) \nonumber \\{} & {} \quad =\int _{0}^{x_{i}}G\left( x_{i},t,\sum _{j=0}^{n+m-1}c_{j}\mathcal {B}_{m+1}\left( \frac{t-x_0}{h}-j+2\right) \right) \textrm{d}t+g(x_{i}), \qquad \end{aligned}$$
(3.9)

for \( 0 \le i \le n\). Obviously, the above system of equations requires \(m-1\) extra relations to be uniquely solvable. In this case, the collocated problem

$$\begin{aligned}{} & {} \sum _{j=0}^{n+m-1}c_{j}\mathcal {B}_{m+1}\left( \frac{\tau _i-x_0}{h}-j+2\right) \nonumber \\{} & {} \quad =\int _{0}^{\tau _{i}}G\left( \tau _{i},t,\sum _{j=0}^{n+m-1}c_{j}\mathcal {B}_{m+1}\left( \frac{t-x_0}{h}-j+2\right) \right) \textrm{d}t+g(\tau _{i}),\qquad \quad \end{aligned}$$
(3.10)

can be utilized at the near-boundary midpoints of the partition for \(i \in \{1,\ldots ,\sigma \}\cup \{n-\sigma +1,\ldots ,n\}\). It is worth noting that the integrals in (3.9) and (3.10) can be approximated in the same manner as (3.7) and (3.8).
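To illustrate the whole pipeline of this section end to end, the following self-contained Python sketch applies the even-degree scheme with \(m=2\) (so \(\delta =0\): end conditions at \(x_0\) and \(x_n\)) to the linear test problem \(u(x)=\int _0^x u(t)\,\textrm{d}t+1\) on \([0,1]\), whose exact solution is \(e^x\). The test problem, the choice \(n=16\) and the direct linear solve (possible here because G is linear in u; a nonlinear G would require an iterative solver) are our own choices for exposition, not taken from the paper:

```python
from math import exp, floor, sqrt

T, n = 1.0, 16
h = T / n
N = n + 2                                  # n + m unknowns for m = 2

def B3(x):
    # cardinal quadratic B-spline (2.1) with m = 2, support [0, 3]
    p2 = lambda y: max(y, 0.0) ** 2
    return 0.5 * (p2(x) - 3 * p2(x - 1) + 3 * p2(x - 2) - p2(x - 3))

def phi(j, x):
    # translated basis function B_3(x/h - j + 2)
    return B3(x / h - j + 2)

# 3-point Gauss-Legendre rule, exact for the piecewise-quadratic basis
GL = [(-sqrt(0.6), 5 / 9), (0.0, 8 / 9), (sqrt(0.6), 5 / 9)]

def int_phi(j, p):
    # integral of phi_j over [0, p], split at the knots so each panel is polynomial
    edges = [r * h for r in range(int(floor(p / h)) + 1)] + [p]
    return sum(0.5 * (b - a) * sum(w * phi(j, 0.5 * (b - a) * e + 0.5 * (b + a))
                                   for e, w in GL)
               for a, b in zip(edges[:-1], edges[1:]))

# collocation points: the n midpoints (3.3) plus end conditions (3.5) at x_0, x_n
pts = [0.0] + [(i - 0.5) * h for i in range(1, n + 1)] + [T]

# linear collocation system: s(p) - integral term = g(p) = 1, with G(x,t,u) = u
A = [[phi(j, p) - int_phi(j, p) for j in range(N)] for p in pts]
rhs = [1.0] * N

# Gaussian elimination with partial pivoting
for k in range(N):
    piv = max(range(k, N), key=lambda r: abs(A[r][k]))
    A[k], A[piv], rhs[k], rhs[piv] = A[piv], A[k], rhs[piv], rhs[k]
    for r in range(k + 1, N):
        f = A[r][k] / A[k][k]
        rhs[r] -= f * rhs[k]
        for col in range(k, N):
            A[r][col] -= f * A[k][col]
c = [0.0] * N
for k in range(N - 1, -1, -1):
    c[k] = (rhs[k] - sum(A[k][j] * c[j] for j in range(k + 1, N))) / A[k][k]

def s(x):
    return sum(c[j] * phi(j, x) for j in range(N))

err = max(abs(s(x) - exp(x)) for x in (0.0, 0.25, 0.5, 0.75, 1.0))
```

Refining the partition (doubling n) should reduce `err` roughly by the factor predicted by the convergence analysis of the next section.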

4 Convergence Analysis and Error Estimation

4.1 Spline Approximation Error

In this section, we embark on an in-depth exploration of the error estimation and convergence rates pertaining to the scheme proposed within this paper. Our objective is to rigorously assess the accuracy and convergence properties of the introduced methodology. We constructed a collocation method based on B-splines for the solution of

$$\begin{aligned} u=\mathcal {K}u+g \end{aligned}$$
(4.1)

where \(\mathcal {K}\) is the Volterra integral operator (3.2). We will prove convergence for even-degree splines; the proof is similar for odd degrees. Therefore, let m be even. In order to obtain an error bound and analyze the convergence of the proposed method, we define the operator \(\mathcal {P}_n:C^{m+1}\rightarrow {\mathcal{S}\mathcal{P}_{m}(\Delta )}\) such that, for any function \(\alpha \in C^{m+1}\), we have

$$\begin{aligned} \left\{ \begin{array}{llllllll} (\mathcal {P}_{n}\alpha )(x_i)=\alpha (x_{i}), &{}&{}&{} i=0,1,...,\delta , \\ (\mathcal {P}_{n}\alpha )(\tau _i)=\alpha (\tau _i), &{}&{}&{} 1 \le i \le n, \\ (\mathcal {P}_{n}\alpha )(x_{i})=\alpha (x_{i}), &{}&{}&{} i=n-\delta ,...,n. \end{array} \right. \end{aligned}$$
(4.2)

Rewriting (3.3) and (3.5), we have

$$\begin{aligned} s(x_{i})= & {} \int _{0}^{x_{i}}G\left( x_{i},t,s(t)\right) \textrm{d}t+g(x_{i}), \quad i\in \{0,1,\ldots ,\delta \} \\ s(\tau _{i})= & {} \int _{0}^{\tau _{i}}G\left( \tau _{i},t,s(t)\right) \textrm{d}t+g(\tau _{i}), \quad 1 \le i \le n, \\ s(x_{i})= & {} \int _{0}^{x_{i}}G\left( x_{i},t,s(t)\right) \textrm{d}t+g(x_{i}), \quad i\in \{n-\delta ,\ldots ,n\} \end{aligned}$$

which, by means of the operator \(\mathcal {P}_n\) and (4.1), can be written in operator form as

$$\begin{aligned} \mathcal {P}_{n}s=\mathcal {P}_{n}\mathcal {K}s+\mathcal {P}_{n}g. \end{aligned}$$

Also, \(s\in \mathcal{S}\mathcal{P}_{m}(\Delta )\); thus \(\mathcal {P}_{n}s=s\) and we have

$$\begin{aligned} s=\mathcal {P}_{n}\mathcal {K}s+\mathcal {P}_{n}g. \end{aligned}$$
(4.3)

Lemma 4.1

The operator \(\mathcal {P}_{n}\) is uniformly bounded in \(C^{k}[0,T]\).

Proof

Using Theorem 6.22 of [38], for every \(\alpha \in C[0,T]\), we have

$$\begin{aligned}\Vert \mathcal {P}_{n}\alpha \Vert _{_{C[0,T]}} \le (2m)^m \Vert \alpha \Vert _{_{C[0,T]}},\end{aligned}$$

so \(\mathcal {P}_{n}\) is uniformly bounded. \(\square \)

Lemma 4.2

The sequence of operators \(\mathcal {P}_{n}\) uniformly converges to identity operator.

Proof

Let \(\alpha \in C[0,T]\); then, according to the Lebesgue Lemma, we have

$$\begin{aligned}\Vert \mathcal {P}_{n}\alpha -\alpha \Vert \le (1+\Vert \mathcal {P}_{n}\Vert )\inf _{_{u \in \Pi _{m}}}\Vert \alpha -u\Vert .\end{aligned}$$

But the right-hand side of the above inequality involves the error of the best approximation of \(\alpha \) in \(\Pi _{m}.\) Suppose that \(u^{*}\) is the best approximation of \(\alpha \); then, using Jackson's Theorem on each interval \([\tau _{i},\tau _{i+1}]\), we have

$$\begin{aligned}\sup _{\tau _{i}\le x \le \tau _{i+1}}|\alpha (x)-u^{*}(x)|\le 6\omega \left( \alpha ,\frac{h}{2m}\right) .\end{aligned}$$

Thus we have

$$\begin{aligned}\Vert \mathcal {P}_{n}\alpha -\alpha \Vert \le 6(1+\Vert \mathcal {P}_{n}\Vert )\omega \left( \alpha ,\frac{h}{2m}\right) \rightarrow 0,\end{aligned}$$

and the proof is complete. \(\square \)

In fact, from Theorem 2.2, for \(\alpha \in C^{m+1}[0,T]\) we have

$$\begin{aligned}||\mathcal {P}_{n}\alpha -\alpha ||=O(h^{m+1})\end{aligned}$$

and if \(\alpha \in C^{m+2}[0,T]\), for even m we have the following local bound

$$\begin{aligned}|\mathcal {P}_{n}\alpha -\alpha |_{_{x_{i}}}=O(h^{m+2}).\end{aligned}$$

We will show that (4.3) has a unique solution that converges to the solution of (4.1), and finally we will obtain the rate of convergence. Let us restate the following Theorem from [43].

Theorem 4.1

Suppose that operators T and \(T_{n}\) on the Banach space B can be represented as

$$\begin{aligned} T=PK,\hspace{8.5359pt}T_{n}=P_{n}K \end{aligned}$$

where K is a nonlinear, completely continuous operator mapping B into another Banach space \(B'\), and P and \(P_{n}\) are continuous linear operators taking \(B'\) into B. Suppose that the operator equation

$$\begin{aligned} w=PKw \end{aligned}$$
(4.4)

has a solution \(\upsilon \). A sufficient condition that \(\upsilon \) be an isolated solution of (4.4) in some sphere \(\Vert \omega -\upsilon \Vert \le \mu \hspace{2.84544pt}( \mu >0)\) is that K be differentiable at the point \(\upsilon \) and that the homogeneous equation

$$\begin{aligned} w-PK'(\upsilon )w=0, \end{aligned}$$
(4.5)

has only the trivial solution \(w=0.\) Suppose further that the sequence of operators \(P_{n}\) converges strongly to the operator P, then the equation

$$\begin{aligned} w=T_{n}w \end{aligned}$$
(4.6)

has a solution \(\upsilon _{n}\) satisfying \(\Vert \upsilon _{n}-\upsilon \Vert \le \mu \) for all sufficiently large n, \(\upsilon _{n}\rightarrow \upsilon \) as \(n\rightarrow \infty ,\) and the rate of convergence is bounded by

$$\begin{aligned} \Vert \upsilon _{n}-\upsilon \Vert \le M'\Vert (P_{n}-P)K\upsilon \Vert \hspace{28.45274pt}\hbox {(} M'=\hbox {constant)}. \end{aligned}$$
(4.7)

Proof

See [43]. \(\square \)

Theorem 4.2

Let m be even, and let \(u\in C^{m+1}[a,b]\) and \(s\in \mathcal{S}\mathcal{P}_{m}(\Delta )\) be the exact and spline solutions of problem (1.1), respectively. If s is obtained by (3.4)–(3.6), then it converges to u as n tends to infinity and, for some constant C, we have

$$\begin{aligned} \Vert s-u\Vert \le C\hspace{1.42271pt}h^{m+1}, ~~~~ h\rightarrow 0.\end{aligned}$$

Also, if \(u\in C^{m+2}[a,b]\), the following local error bound holds

$$\begin{aligned} |s-u|_{x_i}= O\left( h^{m+2}\right) , ~~ x_i\in \Delta .\end{aligned}$$

Proof

Since \(\mathcal {P}_{n}\rightarrow I\), we have \(\mathcal {P}_{n}\mathcal {K}\rightarrow \mathcal {K}\), and the assumptions of Theorem 4.1 are satisfied. Hence, the collocation equation

$$\begin{aligned}s=\mathcal {P}_{n}\mathcal {K}s+\mathcal {P}_{n}g\end{aligned}$$

has a solution s for which \(\Vert s- u\Vert \rightarrow 0\). Also, using Theorem 4.1 we have the following bound

$$\begin{aligned} \Vert s-u\Vert \le M'\left\| (\mathcal {P}_{n}-I)\mathcal {K}u \right\| , \end{aligned}$$
(4.8)

where \(M'\) is some finite constant. On the other hand, we have

$$\begin{aligned}\Vert \mathcal {P}_{n}g-g \Vert =O(h^{m+1})\end{aligned}$$

which, combined with the complete continuity of \(\mathcal {K}\), gives

$$\begin{aligned} \Vert s-u\Vert =O(h^{m+1}). \end{aligned}$$

Also since \(|\mathcal {P}_{n}g-g|_{_{x_{i}}}=O(h^{m+2}),\) it results that

$$\begin{aligned}|s-u|_{x_i}=O(h^{m+2}), ~~~ x_i\in \Delta .\end{aligned}$$

\(\square \)

4.2 Effect of Quadrature Rule on Error

We used a q-point Gauss–Legendre quadrature to approximate the integrals arising in the collocation system. It is well known that the error of q-point Gauss–Legendre quadrature for \(f\in C^{2q}[a,b]\) is bounded by [44]

$$\begin{aligned}E(f)=\frac{(b-a)^{2q+1}(q!)^4}{(2q+1)\left[ (2q)!\right] ^3}f^{(2q)}(\theta ), ~~~ a<\theta <b.\end{aligned}$$

Since the quadrature is applied on each subinterval, we have \(b-a\le h\). Moreover, for moderate values of q, the factor \(\frac{(q!)^4}{(2q+1)\left[ (2q)!\right] ^3}\) remains bounded, thus

$$\begin{aligned}E(f)=O(h^{2q+1}).\end{aligned}$$

On the other hand, we proved that the error of the collocation method is \(O(h^{m+1})\) for odd m and \(O(h^{m+2})\) for even m. Thus, q should be chosen such that it satisfies

$$\begin{aligned} \left\{ \begin{array}{ll} 2q+1 \ge m+1, &{} \text {odd } m, \\ 2q+1 \ge m+2, &{} \text {even } m. \\ \end{array} \right. \end{aligned}$$

While Newton–Cotes quadrature can also be employed to approximate the integrals in the collocation system, its accuracy is inferior to that of Gauss quadrature; consequently, a greater number of quadrature nodes would be required for satisfactory results.
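The quadrature orders above can be seen concretely. The following sketch (an illustration with an assumed integrand and node counts, not the paper's Mathematica code) applies a composite q-point Gauss–Legendre rule; the per-subinterval error \(O(h^{2q+1})\) accumulates over \(O(1/h)\) subintervals to a global rate of \(2q\), which is what the experiment measures:

```python
# Sketch: composite q-point Gauss-Legendre quadrature. Per-subinterval error
# is O(h^{2q+1}); summed over ~1/h subintervals, the total error is O(h^{2q}).
import numpy as np

def composite_gauss(f, a, b, n_sub, q):
    """Integrate f over [a, b] with q-point Gauss-Legendre on n_sub equal subintervals."""
    nodes, weights = np.polynomial.legendre.leggauss(q)  # nodes/weights on [-1, 1]
    edges = np.linspace(a, b, n_sub + 1)
    total = 0.0
    for left, right in zip(edges[:-1], edges[1:]):
        mid, half = 0.5 * (left + right), 0.5 * (right - left)
        total += half * np.dot(weights, f(mid + half * nodes))
    return total

exact = np.sin(1.0)                      # integral of cos over [0, 1]
q = 2                                    # 2-point rule: global error O(h^4)
errors = [abs(composite_gauss(np.cos, 0.0, 1.0, n, q) - exact) for n in (4, 8, 16)]
rates = [np.log2(errors[i] / errors[i + 1]) for i in range(2)]
print(errors, rates)  # rates should be close to 2q = 4
```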

5 Some Test Problems

In this section, we solve some test problems of second kind Volterra integral equations to show the applicability of the proposed method. Two examples with smooth kernels and one with a non-smooth kernel are solved. The acquired results are compared with various existing methods to demonstrate the efficacy of our approach. The maximum absolute errors and the experimental orders of convergence are presented in Tables 1, 2, 3, 4, 5, 6, 7. The experimental orders are calculated by

$$\begin{aligned}Order= \frac{\log \left( \frac{E_{n_1}}{E_{n_2}}\right) }{\log \left( \frac{n_2}{n_1}\right) }\end{aligned}$$

where \(E_n\) denotes the maximum absolute error obtained with \(n+1\) collocation points. All the programs are written in Mathematica 12 and run on a system with an Intel Core i7-2670 2.20 GHz CPU and 8 GB of RAM.
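The order formula can be coded directly. This tiny helper (an illustrative snippet, not from the paper) returns the experimental order of convergence given errors at two resolutions:

```python
# Experimental order of convergence: Order = log(E_{n1}/E_{n2}) / log(n2/n1).
import math

def eoc(e1, n1, e2, n2):
    """Experimental order from errors e1, e2 obtained with n1, n2 points."""
    return math.log(e1 / e2) / math.log(n2 / n1)

# For a method of exact order p, doubling n divides the error by 2^p:
p = 4
print(eoc(1e-3, 8, 1e-3 / 2**p, 16))  # -> 4.0
```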

Example 5.1

As a first test problem, consider the linear second kind Volterra integral equation

$$\begin{aligned} u(x)=\bar{\lambda }\int _{0}^{x}u(t)\textrm{d}t+f(x), ~~~ x\in [0,1], \end{aligned}$$

with the exact solution \(u(x)=\frac{1}{2}\left( \sin (x)+\cos (x)+e^{x}\right) \), where \(\bar{\lambda }\) is a constant parameter. In [30] and [31], it is shown that step-by-step spline collocation procedures based on quadratic and cubic splines may diverge for some choices of collocation parameters, and that a nonlocal spline approach can perform better than the step-by-step algorithm. Here, we solve the test problem for various values of \(\bar{\lambda }\) using the quadratic and cubic spline approaches and compare the results with the splines in [30] and [31]. The results are tabulated in Tables 1 and 2. It is obvious that more accurate results can be achieved using our approach. Also, the method is stable and the solution remains bounded. We also run the problem on the larger domain [0, 5] and tabulate the results in Table 3. The results show that the method is still accurate and remains stable on larger domains.

Table 1 Maximum absolute errors using quadratic spline, \(m=2\) for example 5.1 in [0,1]
Table 2 Maximum absolute errors using cubic spline, \(m=3\) for example 5.1 in [0,1]
Table 3 Maximum absolute errors using cubic spline, \(m=3\) for example 5.1 in [0,5]
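As an independent sanity check of this test problem, the following minimal sketch uses a plain second-order trapezoidal scheme, not the paper's B-spline collocation; the choice \(\bar{\lambda }=1\), for which the forcing becomes \(f(x)=\cos x\), is our assumption:

```python
# Sketch: trapezoidal solver for u(x) = lam * Int_0^x u(t) dt + f(x).
# With lam = 1 and f(x) = cos(x), the exact solution is
# u(x) = (sin x + cos x + e^x) / 2  (assumed setup, verified numerically).
import numpy as np

def volterra_trapezoid(f, lam, T, n):
    """Second-order trapezoidal scheme on a uniform grid of n+1 points."""
    x = np.linspace(0.0, T, n + 1)
    h = T / n
    u = np.empty(n + 1)
    u[0] = f(x[0])                                   # integral term vanishes at x = 0
    for i in range(1, n + 1):
        partial = 0.5 * u[0] + u[1:i].sum()          # trapezoid weights for known values
        u[i] = (f(x[i]) + lam * h * partial) / (1.0 - 0.5 * lam * h)
    return x, u

x, u = volterra_trapezoid(np.cos, 1.0, 1.0, 200)
exact = 0.5 * (np.sin(x) + np.cos(x) + np.exp(x))
print(np.max(np.abs(u - exact)))  # O(h^2) error, far coarser than the spline method
```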

Example 5.2

Consider the following nonlinear second kind Volterra integral equation

$$\begin{aligned} u(x)=\phi (x,u(x))\int _{0}^{x}\Phi (x,t,u(t))\textrm{d}t+f(x), ~~~ x\in [0,1], \end{aligned}$$

where \(\Phi (x,t,u(t))=xt+u(t)\) and \(\phi (x,u(x))=\frac{x}{1+x}\ln (1+u(x))\). The exact solution is \(u(x)=\frac{1}{10}x^{10}\). The problem has already been solved using a cubic spline [45] and a method based on the mean-value theorem [46]. We solved the problem using the cubic spline approach. The numbers of collocation points required to reach the tolerance are tabulated in Table 4. Obviously, our method may require fewer collocation points compared with the existing methods.

Table 4 Number of collocation points to obtain the tolerance with \(m=3\) for example 5.2

Example 5.3

Consider the following second kind singular cordial Volterra integral equation

$$\begin{aligned} u(x)=\int _{0}^{x}x^{-1}\varphi \Bigl (\frac{t}{x}\Bigr )u(t)\textrm{d}t+f(x), ~~~ x\in [0,1], \end{aligned}$$

with \(\varphi (t/x)=(t/x)^{\xi -1}\) and \(\xi >0\). In [43], for \(\varphi \in L^{1}[0,1]\), it is proved that \(V_{\varphi }\in \mathcal {L}(C^{m})\) where \(V_{\varphi }\) is the cordial integral operator

$$\begin{aligned}(V_{\varphi }u)(x)=\int _{0}^{x}x^{-1}\varphi \Bigl (\frac{t}{x}\Bigr )u(t)\textrm{d}t\end{aligned}$$

and \(\mathcal {L}(C^{m})\) is the space of bounded linear operators from \(C^{m}\) to \(C^{m}\). So the new approach can be used successfully without any modification.
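The action of this cordial operator on monomials is available in closed form: substituting \(t=xs\) gives \((V_{\varphi }t^{k})(x)=x^{k}/(k+\xi )\), which makes the boundedness on \(C^{m}\) plausible. The sketch below (illustrative; the node count and test values are arbitrary choices, not from the paper) verifies this identity numerically:

```python
# Check (V_phi u)(x) = x^k / (k + xi) for u(t) = t^k and phi(t) = t^{xi-1}.
import numpy as np

def cordial_apply(u, x, xi, q=60):
    """Evaluate Int_0^x x^{-1} (t/x)^{xi-1} u(t) dt via the substitution t = x*s,
    which turns it into Int_0^1 s^{xi-1} u(x*s) ds (no x-dependence in the core)."""
    s, w = np.polynomial.legendre.leggauss(q)
    s = 0.5 * (s + 1.0)   # map Gauss nodes from [-1, 1] to [0, 1]
    w = 0.5 * w
    return np.dot(w, s ** (xi - 1.0) * u(x * s))

xi, k, x = 1.5, 3, 0.7
approx = cordial_apply(lambda t: t ** k, x, xi)
print(approx, x ** k / (k + xi))  # the two values should agree closely
```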

We solved this problem for various choices of the exact solution. The problem is solved for \(\xi =\frac{3}{2}\) using various degrees of splines, and the maximum absolute errors and orders of convergence are tabulated in Tables 5, 6, 7, 8. Based on Theorem 4.2, the order of convergence depends on the regularity of the exact solution. The results in Tables 5, 6, 7, 8 verify the theoretical results as well. Also, it is obvious that using even degree splines we obtain super-convergent approximations.

Table 5 Maximum absolute errors for example 5.3 with \(u(x)=x^3+x^{3.5}\)
Table 6 Maximum absolute errors for example 5.3 with \(u(x)=x^4+x^{4.5}\)
Table 7 Maximum absolute errors for example 5.3 with \(u(x)=x^5+x^{5.5}\)
Table 8 Maximum absolute errors for example 5.3 with \(u(x)=x^6+x^{6.5}\)

Example 5.4

Consider the following nonlinear second kind Volterra integral equation

$$\begin{aligned} u(x)=\int _{0}^{x}3\sin (t-x)u^2(t)\textrm{d}t+1+\sin ^{2}(x), ~~~ x\in [0,T], \end{aligned}$$

with the exact solution \(u(x)=\cos (x)\). The problem has already been solved by a cubic spline method with standard end conditions in [47]. In Table 9, we compare the results obtained by our cubic spline method (\(m=3\)) with those reported in [47]. It is obvious that our results are more accurate than those of the standard cubic spline. Also, we solved the problem on the larger domains \([0,T], T=2,3,4,5,\) and tabulated the maximum absolute errors in Table 10. The results show that the method is applicable to larger domains as well.

Table 9 Absolute errors using cubic spline, \(m=3\) for example 5.4 in [0,1]
Table 10 Maximum absolute errors for example 5.4 in different intervals [0, T]

6 Conclusion

The paper has been concerned with the application of B-splines to the solution of second kind Volterra integral equations. It is already known from the literature that using splines with the maximum order of regularity for the solution of second kind Volterra integral equations may lead to unstable approximations, depending on the choice of collocation parameters [30, 31]. We developed a method based on B-splines for second kind nonlinear Volterra integral equations which uses the grid points and midpoints of a uniform partition as collocation points. The method is stable and convergent. Also, we observed super-convergence using even degree splines. We applied the method to some test problems for which the step-by-step and nonlocal spline approaches are unstable. It is obvious from the obtained results that the method is efficient, and the theory is supported by the numerical experiments as well. It is noteworthy that the method presented in this paper can be readily applied to approximate solutions of Fredholm integral equations, as well as integro-differential equations.