1 Introduction

Recently, boundary value methods for solving the second-kind Volterra integral equation (VIE) have been investigated by several authors. The idea originates in the numerical analysis of ordinary differential equations (ODEs) [1]. In contrast to its linear multistep counterpart, the boundary value method for solving ODEs recasts the initial value problem as a boundary value problem. By developing the reducible quadrature rule with this methodology, Chen and Zhang devised boundary value methods for solving Volterra integral and related functional equations [2]. Afterwards, researchers gave comprehensive studies of this class of algorithms [3,4,5]. On the other hand, with the help of multistep collocation methods, Ma and Xiang constructed a class of collocation boundary value methods for second-kind Volterra integral equations in [6]. Numerical experiments showed that this class of algorithms has a wide stability region and is able to solve highly oscillatory integral equations efficiently. With the help of fractional Lagrange interpolation, its extension to weakly singular problems was studied in [7].

In this paper, we consider the numerical solution of Volterra integral equation of the first kind,

$$ {{\int}_{0}^{t}}K(t-s)u(s)\text{ds}=g(t),~t\in [0,T]. $$
(1)

Here, K(t) and g(t) are sufficiently smooth on [0,T], g(0) = 0, and |K(0)| > 0. These conditions guarantee a unique solution of (1) (see [8, pp.64]). Applications of VIE (1) frequently arise in geological prospecting, fault movement, scattering problems, and the analysis of causal processes [9,10,11]. Although VIE (1) can be solved by Laplace transform techniques in some special cases (see [12,13,14]), we have to resort to numerical methods in the general case.

Among existing algorithms, the collocation method is well-known for its low computational cost and attracts much attention [8]. Let

$$ I_{h}:=\{t_{n}: 0=t_{0}<t_{1}<\cdots<t_{N}=T\} $$

denote a uniform mesh, where \(t_{n}=t_{0}+nh\), \(h=T/N\), and define the set of collocation parameters \(\{c_{j}\}_{j=1}^{m}\) with 0 ≤ cj ≤ 1. Then, the collocation grid is determined by Ih and \(\{c_{j}\}_{j=1}^{m},\) that is,

$$ X_{h}:=\{t_{n}+c_{j}h,j=1,\cdots,m,n=0,1,\cdots,N-1\}. $$

Then, the collocation solution uh(t) is determined by the collocation equation

$$ {\int}_{0}^{t_{n}+c_{j}h}K({t_{n}+c_{j}h}-s)u_{h}(s)ds=g({t_{n}+c_{j}h}), $$
(2)

where

$$ u_{h}(s)=u_{h}(t_{n}+vh)=\sum\limits_{j=1}^{m}u_{h}(t_{n}+c_{j}h)\prod\limits_{k=1,k\neq j}^{m}\frac{v-c_{k}}{c_{j}-c_{k}},v\in (0,1]. $$
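As a concrete illustration (our own sketch, not code from the paper; the function name and signature are ours), the Lagrange form above can be evaluated directly from the values of uh at the collocation points:

```python
def lagrange_eval(c, U, v):
    """Evaluate u_h(t_n + v*h) = sum_j U[j] * prod_{k != j} (v - c[k]) / (c[j] - c[k]).

    c : collocation parameters c_1, ..., c_m
    U : values u_h(t_n + c_j h) on the current subinterval
    v : local coordinate in (0, 1]
    """
    m = len(c)
    total = 0.0
    for j in range(m):
        basis = 1.0
        for k in range(m):
            if k != j:
                basis *= (v - c[k]) / (c[j] - c[k])
        total += U[j] * basis
    return total
```

Since the basis is exact for polynomials of degree at most m − 1, interpolating such a polynomial at the parameters reproduces it at every local coordinate.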

By choosing 0 < c1 < c2 < ⋯ < cm ≤ 1 with

$$ \displaystyle (-1)^{m}\prod\limits_{j=1}^{m}\frac{1-c_{j}}{c_{j}}\in [-1,1), $$

the classical collocation method results in an approximation with a global order of \(O(h^{m})\) as \(h\rightarrow 0\) in the piecewise polynomial space [8, pp.123]. In particular, in the case of \(c_{m}=1\) and

$$ \displaystyle (-1)^{m}\prod\limits_{j=1}^{m-1}\frac{1-c_{j}}{c_{j}}\in [-1,1), $$

the convergence rate of the collocation method attains \(O(h^{m+1})\) [8, pp.130].

On the other hand, to develop high-order algorithms without increasing the number of collocation points, researchers usually employ multistep algorithms (see [15,16,17]). In [18], Zhang and Liang considered a class of multistep collocation methods for the first-kind Volterra integral equation by using approximate values of the solution in previous steps. To compute the solution u(t) in \([t_{n},t_{n+1}], n = r,\ldots,N-1,\) they rewrote the collocation polynomial uh(t) as follows:

$$ u_{h}(t_{n}+vh)=\sum\limits_{i=0}^{r-1}B_{i}(v)u_{h}(t_{n-i}) +\sum\limits_{j=1}^{m}\hat{B}_{j}(v)u_{h}(t_{n}+c_{j}h),~v\in [0,1], $$

where the basis functions \(B_{i}(v),\hat {B}_{j}(v)\) were constructed by satisfying the following:

$$ \left\{\begin{array}{ll} B_{l}(-p)=\delta_{lp},~B_{l}(c_{j})=0,~l,p=0,...,r-1,\\ \hat{B}_{i}(-p)=0,~\hat{B}_{i}(c_{j})=\delta_{ij},~i,j=1,...,m. \end{array}\right. $$
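The interpolation conditions above characterize \(B_{i}, \hat{B}_{j}\) as the Lagrange cardinal basis over the combined node set \(\{0,-1,\ldots,-(r-1)\}\cup \{c_{1},\ldots,c_{m}\}\); a small sketch of this construction (our own illustration, with names of our choosing):

```python
import numpy as np

def multistep_basis(r, c):
    """Cardinal Lagrange basis over the nodes {0, -1, ..., -(r-1)} U {c_1, ..., c_m}.

    Returns a function basis(v) giving [B_0(v), ..., B_{r-1}(v), Bhat_1(v), ..., Bhat_m(v)],
    which satisfy B_l(-p) = delta_lp, B_l(c_j) = 0, Bhat_i(-p) = 0, Bhat_i(c_j) = delta_ij.
    """
    nodes = np.array([-p for p in range(r)] + list(c), dtype=float)

    def basis(v):
        vals = []
        for j, xj in enumerate(nodes):
            others = np.delete(nodes, j)           # all nodes except the j-th
            vals.append(np.prod((v - others) / (xj - others)))
        return np.array(vals)

    return basis
```

Each returned function equals one at its own node and vanishes at all the others, which is exactly the condition system displayed above.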

Then, the multistep collocation solution was obtained by imposing the collocation polynomial uh(t) at collocation points {tn + cjh, n = 0,1,⋯ ,N − 1, j = 1,...,m},

$$ \begin{array}{@{}rcl@{}} g(t_{n}+c_{j}h)=&h&\sum\limits_{j=1}^{m}u_{h}(t_{n}+c_{j}h){\int}_{0}^{c_{j}}K(h(c_{j}-v))\hat{B}_{j}(v)\text{dv} \\ &+&h\sum\limits_{p=0}^{r-1}u_{h}(t_{n-p}){\int}_{0}^{c_{j}}K(h(c_{j}-v))B_{p}(v)\text{dv} \\ &+&h\sum\limits_{l=0}^{r-2}{{\int}_{0}^{1}}K(h(n-l+c_{j}-v))u_{h}(t_{l}+\text{vh})\text{dv} \\ &+&h\sum\limits_{l=r-1}^{n-1}\sum\limits_{p=0}^{r-1}u_{h}(t_{l-p}){\int}_{0}^{1}K(h(n-l+c_{j}-v))B_{p}(v)\text{dv} \\ &+&h\sum\limits_{l=r-1}^{n-1}\sum\limits_{j=1}^{m}u_{h}(t_{l}+c_{j}h){\int}_{0}^{1}K(h(n-l+c_{j}-v))\hat{B}_{j}(v)\text{dv}. \end{array} $$

Zhang and Liang discussed the existence and uniqueness of the collocation solution for VIE (1). Furthermore, they developed the convergence condition for 2- and 3-step collocation methods.

The main purpose of this paper is to study the block collocation boundary value method for VIE (1). The remaining parts are organized as follows. In Section 2, we construct the k-step collocation boundary value method with \(\hat {N}\) blocks (\(B_{\hat {N}}\text {CBVM}_{k}\)). In Section 3, the solvability and convergence analysis are studied with the help of the theory of Toeplitz matrices. Experiments contained in Section 4 give a numerical illustration and verify theoretical results derived in the previous section. Finally, we present some concluding remarks.

2 Block collocation boundary value method

In this section, we focus on the construction of \(B_{\hat {N}}\text {CBVM}_{k}\) for VIE (1), which can be considered an extension of the classical collocation method obtained by relaxing the restriction cj ≤ 1. To make the presentation more readable, we first discuss B1CBVM1, which corresponds to \(c_{1}=0, c_{2}=1,\) and \(c_{3}=2\) in the collocation parameter set.

By defining the local Lagrange basis

$$ {\phi}_{i}^{1}(v)=\prod\limits_{j=0,j\neq i}^{2}\frac{v-j}{i-j}, ~v\in [0,1], ~i = 0, 1, 2, $$

we rewrite the collocation solution uh(t) in [tn,tn+ 1] as follows:

$$ u_{h}(t_{n}+vh)=y_{n}{\phi}_{0}^{1}(v)+y_{n+1}{\phi}_{1}^{1}(v)+y_{n+2}{\phi}_{2}^{1}(v), ~v\in [0,1], n=0,1,\cdots,N-3, $$

where yn := uh(tn),h = T/N. In the last subinterval [tN− 2,tN], we rewrite the collocation polynomial uh(t) as follows:

$$ u_{h}(t_{N-2}+vh)=y_{N-1}\hat{\phi}_{0}^{1}(v)+y_{N}\hat{\phi}_{1}^{1}(v),~v\in [0,2], $$

where

$$ \hat{\phi}_{i}^{1}(v)=\prod\limits_{j=0,j\neq i}^{1}\frac{v-(j+1)}{i-j}, ~v\in [0,2],~ i=0,1. $$

The above approximations guarantee that the previously computed values \(y_{1},y_{2},\cdots ,y_{N-2}\) do not appear in the collocation polynomial uh(t) in \([t_{N-2},t_{N}],\) which helps to prove the solvability and to analyze the convergence property of \(B_{\hat {N}}\text {CBVM}_{k}\).

Now, we arrive at the collocation equation,

$$ {\int}_{0}^{t_{n}}K(t_{n}-s)u_{h}(s)ds=g(t_{n}),~n=1,2,\cdots,N, $$
(3)

or equivalently,

$$ \left\{ \begin{array}{ll} \displaystyle g(t_{n}) =h\sum\limits_{j=0}^{n-1}{{\int}_{0}^{1}}K(h(n-j-v))\left( \sum\limits_{i=0}^{2}y_{j+i}{\phi}_{i}^{1}(v)\right)\text{dv},n=1,\cdots,N-2,\\ \displaystyle g(t_{n})=h\sum\limits_{j=0}^{N-3}{{\int}_{0}^{1}}K(h(n-j-v))\left( \sum\limits_{i=0}^{2}y_{j+i}{\phi}_{i}^{1}(v)\right)\text{dv}\\ \displaystyle +h{\int}_{0}^{n-N+2}K(h(n-N+2-v))\left( \sum\limits_{i=0}^{1}y_{N-1+i}\hat{\phi}_{i}^{1}(v)\right)\text{dv},n=N-1,N. \end{array} \right. $$
(4)

Before illustrating \(B_{\hat {N}}\text {CBVM}_{k},\) we first introduce some notation. Let \(\displaystyle \mathfrak {a}_{m}(t)=\sum \limits _{i=-m+1}^{m-1}a_{i}t^{i}\); then the m × m Toeplitz matrix \(T[\mathfrak {a}_{m}]\) is defined by the following:

$$ T[\mathfrak{a}_{m}]=\left( \begin{array}{cccc} a_{0} & a_{-1} & {\cdots} & a_{-m+1} \\ a_{1} & a_{0} & {\cdots} & a_{-m+2} \\ {\cdots} & {\cdots} & {\cdots} & {\cdots} \\ a_{m-1} & a_{m-2} & {\cdots} & a_{0} \end{array} \right). $$
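In code, the map from Laurent coefficients to the Toeplitz matrix is a one-liner per entry: position (i,j) holds the coefficient of index i − j. A minimal sketch (ours, with an assumed dict-based coefficient storage):

```python
import numpy as np

def toeplitz_from_laurent(a, m):
    """m x m Toeplitz matrix T[a_m]: entry (i, j) = a_{i-j}, with the
    Laurent coefficients a_{-m+1}, ..., a_{m-1} passed as a dict {index: value}.
    Missing indices are treated as zero coefficients."""
    T = np.empty((m, m))
    for i in range(m):
        for j in range(m):
            T[i, j] = a.get(i - j, 0.0)
    return T
```

The first column thus reads a_0, a_1, ..., a_{m−1} and the first row a_0, a_{−1}, ..., a_{−m+1}, matching the display above.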

Furthermore, denote the following:

$$ \displaystyle d_{-j,i}:={{\int}_{0}^{1}}K(h(j-v)){\phi^{1}_{i}}(v)\text{dv}, $$
(5)
$$ \hat{d}_{j,i}:={\int}_{0}^{j-N+2}K(h(j-N+2-v))\hat{\phi}_{i}^{1}(v)\text{dv}, $$
(6)

with j = 1,2,⋯ ,N, and define the Laurent polynomial \(\mathfrak {d}_{m}^{i}(t)\) as follows:

$$ \mathfrak{d}_{m}^{i}(t):=\sum\limits_{j=-m+1}^{0}d_{j-1,i}t^{j},~i=0,1,2. $$
(7)

In the remaining part, we let \(\hat {M}:=M(a:b,c:d)\) denote the (b − a + 1) × (d − c + 1) submatrix formed by taking a block of entries from the original matrix M, where its (i,j) element is determined by \( \hat {M}(i,j):=M(a+i-1,c+j-1), \) and let Oa×b denote the zero matrix of size a × b.

Letting

$$ \begin{array}{@{}rcl@{}} C_{0}&=&\left( \begin{array}{ll} T[\mathfrak{d}_{N}^{0}](1:N,1:N-2) & O_{N\times 3} \end{array} \right), \\ C_{1}&=&\left( \begin{array}{llll} O_{N\times 1} &T[\mathfrak{d}_{N}^{1}](1:N,1:N-2) & O_{N\times 2} \end{array} \right), \\ C_{2}&=&\left( \begin{array}{lll} O_{N\times 2} & T[\mathfrak{d}_{N}^{2}](1:N,1:N-2) &O_{N\times 1} \end{array} \right), \end{array} $$

we can get the compact form of (4)

$$ hA(1:N, 2:N+1)\mathbf{y}=\mathbf{b}-hy_{0}A(1:N, 1:1), $$
(8)

where

$$ \mathbf{y}=\left( \begin{array}{cc} y_{1} \\ y_{2} \\ {\vdots} \\ y_{N} \end{array} \right),~~ \mathbf{b}=\left( \begin{array}{cc} g(t_{1}) \\ g(t_{2}) \\ {\vdots} \\ g(t_{N}) \end{array} \right), $$
$$ A = C_{0}+C_{1}+C_{2}+S,~~ y_{0}=g^{\prime}(0), $$

and S is an N × (N + 1) sparse matrix with the only nonzero elements

$$ S(N-1:N, N:N+1)=\left( \begin{array}{cc} \hat{d}_{N-1,0} & \hat{d}_{N-1,1} \\ \hat{d}_{N,0} & \hat{d}_{N,1} \end{array} \right). $$

The coefficient matrix in (8) is a Toeplitz matrix apart from the last 2 × 2 submatrix; equivalently, A(1 : N,2 : N + 1) can be represented as the sum of a Toeplitz matrix and a sparse matrix. The Toeplitz part can be embedded into a circulant matrix, which allows fast matrix-vector multiplication [19]. Hence, it is expected that (8) can be solved efficiently with Krylov subspace methods such as GMRES.
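A minimal sketch of the fast matrix-vector product just mentioned (our own illustration, not the authors' code): the N × N Toeplitz matrix is embedded into a 2N × 2N circulant matrix, whose action is diagonalized by the FFT, so the product costs O(N log N) instead of O(N²). This is the operation a Krylov solver such as GMRES would call repeatedly.

```python
import numpy as np

def toeplitz_matvec(col, row, x):
    """Multiply the Toeplitz matrix with first column `col` and first row `row`
    (col[0] == row[0]) by the vector x, via circulant embedding.

    The 2N x 2N circulant has first column [col, 0, reversed(row[1:])]; its
    eigenvalues are the FFT of that column, so the product reduces to two
    forward FFTs and one inverse FFT applied to the zero-padded input."""
    n = len(x)
    c = np.concatenate([col, [0.0], row[1:][::-1]])   # first column of the circulant
    xp = np.concatenate([x, np.zeros(n)])             # zero-padded input vector
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(xp))
    return y[:n].real                                  # first N entries = Toeplitz product
```

The first N components of the circulant product coincide with the Toeplitz product, which is why the padding and truncation are exact rather than approximate.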

Let us turn to the general \(B_{\hat {N}}\text {CBVM}_{k}\). First, we divide [0,T] into \(\hat {N}\) parts, that is, \([T_{0},T_{1}], [T_{1},T_{2}],\cdots ,[T_{\hat {N}-1},T_{\hat {N}}]\) with \(T_{0}=0,T_{\hat {N}}=T\). Second, applying the k-step collocation boundary value method to VIE (1) on each interval \([T_{\hat {n}},T_{\hat {n}+1}]\) with stepsize \(\displaystyle h_{\hat {n}}=(T_{\hat {n}+1}-T_{\hat {n}})/N,\) we obtain, for \(n=1,\cdots ,N,\hat {n}=0,\cdots ,\hat {N}-1,\)

$$ \sum\limits_{\hat{j}=0}^{\hat{n}-1}{\int}_{T_{\hat{j}}}^{T_{\hat{j}+1}}K(t^{\hat{n}}_{n}-s)u_{h}(s)\text{ds} +{\int}_{T_{\hat{n}}}^{t^{\hat{n}}_{n}}K(t^{\hat{n}}_{n}-s)u_{h}(s)\text{ds}=g(t^{\hat{n}}_{n}). $$
(9)

Here, for \(j=0,1,\cdots ,N,\hat {m}=0,1,\cdots ,\hat {N}-1,\) we let \(t_{j}^{\hat {m}}=T_{\hat {m}}+jh_{\hat {m}},\) and uh(t) denotes the collocation solution. In \([t_{j}^{\hat {m}},t_{j+1}^{\hat {m}}],j=0,1,\cdots ,N-k-2,\) we express uh(t) as follows:

$$ u_{h}(t^{\hat{m}}_{j}+vh_{\hat{m}})=\sum\limits_{i=0}^{k+1}u_{h}(t^{\hat{m}}_{j+i}){\phi}_{i}^{k}(v), $$
(10)

and in \([{t}_{N-k-1}^{\hat {m}},{t}_{N}^{\hat {m}}],\) we rewrite uh(t) as follows:

$$ u_{h}({t}^{\hat{m}}_{N-k-1}+vh_{\hat{m}})=\sum\limits_{i=0}^{k}u_{h}({t}^{\hat{m}}_{N-k+i})\hat{\phi}_{i}^{k}(v). $$
(11)

Here,

$$ {\phi}_{i}^{k}(v)=\prod\limits_{j=0,j\neq i}^{k+1}\frac{v-j}{i-j},i=0,1,\cdots,k+1, $$

and

$$ \hat{\phi}_{i}^{k}(v)=\prod\limits_{j=0,j\neq i}^{k}\frac{v-(j+1)}{i-j},i=0,1,\cdots,k. $$
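The bases for general k can be generated programmatically; the following sketch (our code, not the paper's) builds the coefficient vectors of \({\phi}_{i}^{k}\) and \(\hat{\phi}_{i}^{k}\) and can be used, for instance, to check that the \({\phi}_{i}^{k}\) form a partition of unity.

```python
import numpy as np

def phi(k, i):
    """Coefficients (highest degree first) of phi_i^k(v) = prod_{j != i} (v - j)/(i - j),
    the degree-(k+1) Lagrange cardinal polynomial on the nodes 0, 1, ..., k+1."""
    nodes = [j for j in range(k + 2) if j != i]
    denom = np.prod([i - j for j in nodes], dtype=float)
    return np.poly(nodes) / denom   # np.poly builds the monic polynomial with given roots

def phi_hat(k, i):
    """Coefficients of phi_hat_i^k(v) = prod_{j != i} (v - (j+1))/(i - j),
    the degree-k cardinal polynomial on the shifted nodes 1, 2, ..., k+1."""
    roots = [j + 1 for j in range(k + 1) if j != i]
    denom = np.prod([i - j for j in range(k + 1) if j != i], dtype=float)
    return np.poly(roots) / denom
```

Evaluation is then a call to `np.polyval`, and the cardinal property \({\phi}_{i}^{k}(j)=\delta_{ij}\) on the respective node sets can be verified directly.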

For \(n = 1,\cdots ,N-k-1,\) substituting (10) and (11) into (9) results in the following:

$$ \begin{array}{@{}rcl@{}} g({t}_{n}^{\hat{n}})&=&\sum\limits_{\hat{j}=0}^{\hat{n}-1}h_{\hat{j}} \sum\limits_{j=0}^{N-k-2}{{\int}_{0}^{1}}K(t_{n}^{\hat{n}}-t_{j}^{\hat{j}}-\text{vh}_{\hat{j}}) \sum\limits_{i=0}^{k+1}u_{h}({t}_{j+i}^{\hat{j}}){\phi}_{i}^{k}(v)\text{dv}\\ &+&\sum\limits_{\hat{j}=0}^{\hat{n}-1}h_{\hat{j}} {\int}_{0}^{k+1}K({t}_{n}^{\hat{n}}-{t}_{N-k-1}^{\hat{j}}-\text{vh}_{\hat{j}}) \sum\limits_{i=0}^{k}u_{h}({t}_{N-k+i}^{\hat{j}})\hat{\phi}_{i}^{k}(v)\text{dv}\\ &+&h_{\hat{n}} \sum\limits_{j=0}^{n-1}{{\int}_{0}^{1}}K(h_{\hat{n}}(n-j-v)) \sum\limits_{i=0}^{k+1}u_{h}(t_{j+i}^{\hat{n}}){\phi}_{i}^{k}(v)dv, \end{array} $$

and for \(n = N-k,\cdots ,N,\) substituting (10) and (11) into (9) results in the following:

$$ \begin{array}{@{}rcl@{}} g(t_{n}^{\hat{n}})&=&\sum\limits_{\hat{j}=0}^{\hat{n}-1}h_{\hat{j}} \sum\limits_{j=0}^{N-k-2}{{\int}_{0}^{1}}K(t_{n}^{\hat{n}}-t_{j}^{\hat{j}}-\text{vh}_{\hat{j}}) \sum\limits_{i=0}^{k+1}u_{h}({t}_{j+i}^{\hat{j}}){\phi}_{i}^{k}(v)\text{dv}\\ &+&\sum\limits_{\hat{j}=0}^{\hat{n}-1}h_{\hat{j}} {\int}_{0}^{k+1}K({t}_{n}^{\hat{n}}-{t}_{N-k-1}^{\hat{j}}-\text{vh}_{\hat{j}}) \sum\limits_{i=0}^{k}u_{h}({t}_{N-k+i}^{\hat{j}})\hat{\phi}_{i}^{k}(v)\text{dv}\\ &+&h_{\hat{n}} \sum\limits_{j=0}^{N-k-2}{{\int}_{0}^{1}}K(h_{\hat{n}}(n-j-v)) \sum\limits_{i=0}^{k+1}u_{h}({t}_{j+i}^{\hat{n}}){\phi}_{i}^{k}(v)\text{dv}\\ &+&h_{\hat{n}} {\int}_{0}^{n-N+k+1}K(h_{\hat{n}}(n-N+k+1-v)) \sum\limits_{i=0}^{k}u_{h}(t_{N-k+i}^{\hat{n}})\hat{\phi}_{i}^{k}(v)\text{dv}, \end{array} $$

Furthermore, denoting

$$ \begin{array}{@{}rcl@{}} {d}_{-j,i}^{k,\hat{n}}:&=&{{\int}_{0}^{1}}K(h_{\hat{n}}(j-v)){\phi}_{i}^{k}(v)\text{dv}, \end{array} $$
(12)
$$ \begin{array}{@{}rcl@{}} \hat{d}_{j,i}^{k,\hat{n}}:&=&{\int}_{0}^{j-N+k+1}K(h_{\hat{n}}(j-N+k+1-v))\hat{\phi}_{i}^{k}(v)\text{dv}, \end{array} $$
(13)

we define the following:

$$ \mathfrak{d}_{m}^{i,k,\hat{n}}(t) =\sum\limits_{j=-m+1}^{0}{d}_{j-1,i}^{k,\hat{n}}t^{j},i=0,1,\cdots,k+1. $$
(14)

Then, we derive k + 2 Toeplitz-like matrices,

$$ C_{i}^{k,\hat{n}}=\left( \begin{array}{ccc} O_{N\times i} & T[\mathfrak{d}_{N}^{i,k,\hat{n}}](1:N,1:N-k-1) &O_{N\times (k-i+2)} \end{array} \right),i=0,1,\cdots,k+1, $$

and a sparse N × (N + 1) matrix \(S^{k,\hat {n}}\) with the only nonzero elements

$$ S^{k,\hat{n}}(N-k:N,N-k+1:N+1)=\left( \begin{array}{llll} {\hat{d}}_{N-k,0}^{k,\hat{n}} & {\hat{d}}_{N-k,1}^{k,\hat{n}} & {\cdots} & {\hat{d}}_{N-k,k}^{k,\hat{n}} \\ {\hat{d}}_{N-k+1,0}^{k,\hat{n}} & {\hat{d}}_{N-k+1,1}^{k,\hat{n}} & {\cdots} & {\hat{d}}_{N-k+1,k}^{k,\hat{n}} \\ {\vdots} & {\vdots} & {\vdots} & {\vdots} \\ {\hat{d}}_{N,0}^{k,\hat{n}} & {\hat{d}}_{N,1}^{k,\hat{n}} & {\cdots} & {\hat{d}}_{N,k}^{k,\hat{n}} \end{array} \right). $$

Letting

$$ \begin{array}{@{}rcl@{}} {d}_{i,j,l}^{k,\alpha,\upbeta}:&=&{{\int}_{0}^{1}}K({t}_{i}^{\alpha}-{t}_{j}^{\upbeta}-\text{vh}_{\upbeta}){\phi}_{l}^{k}(v)\text{dv}, \end{array} $$
(15)
$$ \begin{array}{@{}rcl@{}} {\hat{d}}_{i,j,l}^{k,\alpha,\upbeta}:&=&{\int}_{0}^{j-N+k+2}K({t}_{i}^{\alpha}-{t}_{N-k-1}^{\upbeta}-\text{vh}_{\upbeta}){\hat{\phi}}_{l}^{k}(v)\text{dv}, \end{array} $$
(16)

we can construct, for l = 0,1,⋯ ,k + 1,

$$ \hat{C}^{k,\alpha,\upbeta}_{l}=\left( \begin{array}{lll} O_{N\times l} & \tilde{C}^{k,\alpha,\upbeta}_{l}(1:N,1:N-k-1) &O_{N\times(k-l+2)} \end{array} \right) $$

with an N × N matrix

$$ {\tilde{C}}^{k,\alpha,\upbeta}_{l}= \left( \begin{array}{cccc} {d}_{1,0,l}^{k,\alpha,\upbeta} & {d}_{1,1,l}^{k,\alpha,\upbeta} &{\cdots} & {d}_{1,N-1,l}^{k,\alpha,\upbeta} \\ {d}_{2,0,l}^{k,\alpha,\upbeta} & {d}_{2,1,l}^{k,\alpha,\upbeta} & {\cdots} & {d}_{2,N-1,l}^{k,\alpha,\upbeta} \\ {\vdots} & {\vdots} & {\vdots} & {\vdots} \\ {d}_{N,0,l}^{k,\alpha,\upbeta} & {d}_{N,1,l}^{k,\alpha,\upbeta} & {\cdots} & {d}_{N,N-1,l}^{k,\alpha,\upbeta} \end{array} \right), $$

and a sparse N × (N + 1) matrix \(\hat {S}^{k,\alpha ,\upbeta }\) with the only nonzero elements

$$ \hat{S}^{k,\alpha,\upbeta}(1:N,N-k+1:N+1)=\left( \begin{array}{cccc} {\hat{d}}_{1,N-1,0}^{k,\alpha,\upbeta} & {\hat{d}}_{1,N-1,1}^{k,\alpha,\upbeta} & {\cdots} & {\hat{d}}_{1,N-1,k}^{k,\alpha,\upbeta} \\ {\hat{d}}_{2,N-1,0}^{k,\alpha,\upbeta} & {\hat{d}}_{2,N-1,1}^{k,\alpha,\upbeta} & {\cdots} & {\hat{d}}_{2,N-1,k}^{k,\alpha,\upbeta} \\ {\vdots} & {\vdots} & {\vdots} & {\vdots} \\ {\hat{d}}_{N,N-1,0}^{k,\alpha,\upbeta} & {\hat{d}}_{N,N-1,1}^{k,\alpha,\upbeta} & {\cdots} & {\hat{d}}_{N,N-1,k}^{k,\alpha,\upbeta} \end{array} \right). $$

Letting \(\mathbf {y}_{\hat {n}}=\left (\begin {array}{cccc} u_{h}(t^{\hat {n}}_{0}), & u_{h}(t^{\hat {n}}_{1}), & \cdots , & u_{h}(t^{\hat {n}}_{N}) \end {array} \right )^{T} \) denote the approximate values in the interval \([T_{\hat {n}},T_{\hat {n}+1}],\) we arrive at the compact form of (9),

$$ A_{\hat{n}}^{k}(1:N,2:N+1)\mathbf{y}_{\hat{n}}(2:N+1)=\mathbf{g}_{\hat{n}}-\mathbf{y}_{\hat{n}}(1)A_{\hat{n}}^{k}(1:N,1:1) -\sum\limits_{\upbeta=0}^{\hat{n}-1}\hat{A}_{\hat{n},\upbeta}^{k}\mathbf{y}_{\upbeta}, $$
(17)

where

$$ \mathbf{g}_{\hat{n}}=\left( \begin{array}{l} g(t^{\hat{n}}_{1}) \\ g(t^{\hat{n}}_{2}) \\ {\vdots} \\ g(t^{\hat{n}}_{N}) \end{array} \right),~~ A_{\hat{n}}^{k}=S^{k,\hat{n}}+\sum\limits_{j=0}^{k+1}C_{j}^{k,\hat{n}},~~ \hat{A}_{\hat{n},\upbeta}^{k}=\hat{S}^{k,\hat{n},\upbeta}+\sum\limits_{j=0}^{k+1}\hat{C}_{j}^{k,\hat{n},\upbeta}.\\ $$

According to the definition of the local Lagrange interpolation polynomial, we know that \(\mathbf {y}_{\hat {n}}(1)=\mathbf {y}_{\hat {n}-1}(N+1)\) for \(\hat {n}=1,\cdots ,\hat {N}-1\).

3 Solvability and convergence analysis

In this section, we study the existence and convergence of the collocation approximation computed by (17). In contrast to classical collocation methods, the boundary value solution cannot be obtained step by step through a recurrence relation; all values are computed simultaneously by solving a linear system. Therefore, the existence, uniqueness, and convergence of the collocation solution uh(t) in (9) must be reconsidered in detail.

To begin with, we study the special case in which the kernel in VIE (1) is K(t) = 1. The more general theory of the existence of the block collocation boundary value solution can then be established with the help of this special case.

3.1 The case of K(t) = 1

Let us consider the \(\hat {n}\)th block. It follows that

$$ g({t}_{n}^{\hat{n}})=\sum\limits_{\hat{j}=0}^{\hat{n}-1}h_{\hat{j}}\sum\limits_{j=0}^{N-1}{{\int}_{0}^{1}}u_{h}({t}_{j}^{\hat{j}}+\text{vh}_{\hat{j}})\text{dv} +h_{\hat{n}}\sum\limits_{j=0}^{n-1}{{\int}_{0}^{1}}u_{h}({t}_{j}^{\hat{n}}+\text{vh}_{\hat{n}})\text{dv}. $$
(18)

For n = N − 1,N − 2,⋯ ,0, computing \(g(t_{n+1}^{\hat {n}})-g(t_{n}^{\hat {n}})\) successively results in the following:

$$ {h_{\hat{n}}{{\int}_{0}^{1}}u_{h}(t^{\hat{n}}_{n}+\text{vh}_{\hat{n}})\text{dv}=g(t_{n+1}^{\hat{n}})-g(t_{n}^{\hat{n}}),n=0,1,\cdots,N-1.} $$
(19)

Rewriting the coefficient matrix of the above linear system with notations in Section 2 leads to the following:

$$ {B}_{N}^{k}=\left( \begin{array}{cc} {T_{N}^{k}} & \mathbf{r}\\ 0 & {R^{k}_{N}} \end{array} \right). $$
(20)

Here, r is an (N − k − 1) × (k + 1) matrix, and \({T_{N}^{k}}\) is an (N − k − 1) × (N − k − 1) Toeplitz matrix generated by the Laurent polynomial

$$ \mathfrak{c}_{N-k-1}^{k,\hat{n}}(t)=\sum\limits_{j=-k}^{1}d_{1,1-j}^{k,\hat{n}}t^{j}, $$

and

$$ {R^{k}_{N}}=\left( \begin{array}{llll} \hat{d}_{N-k,0}^{k,\hat{n}} &\hat{d}_{N-k,1}^{k,\hat{n}} & {\cdots} & \hat{d}_{N-k,k}^{k,\hat{n}}\\ \hat{d}_{N-k+1,0}^{k,\hat{n}} &\hat{d}_{N-k+1,1}^{k,\hat{n}} & {\cdots} & \hat{d}_{N-k+1,k}^{k,\hat{n}} \\ {\vdots} & {\vdots} & {\vdots} & {\vdots} \\ \hat{d}_{N,0}^{k,\hat{n}} &\hat{d}_{N,1}^{k,\hat{n}} & {\cdots} & \hat{d}_{N,k}^{k,\hat{n}} \end{array} \right). $$

Now, (19) is transformed into the following:

$$ {B_{N}^{k}} \left( \begin{array}{c} y^{\hat{n}}_{1} \\ y^{\hat{n}}_{2} \\ {\vdots} \\ y^{\hat{n}}_{N} \end{array} \right)= \left( \begin{array}{c} g(t_{1}^{\hat{n}})-g(t_{N}^{\hat{n}-1})-y^{\hat{n}}_{0} d_{1,0}^{k,\hat{n}}\\ g(t_{2}^{\hat{n}})-g(t_{1}^{\hat{n}}) \\ {\vdots} \\ g(t_{N}^{\hat{n}})-g(t_{N-1}^{\hat{n}}) \end{array} \right). $$
(21)

To examine the solvability of (21), it suffices to study the inverses of \({T}_{N}^{k}\) and \({R}^{k}_{N}\). Let us first review some auxiliary results. Suppose that there is a Laurent series \(\displaystyle \mathfrak {b}(t)=\sum \limits _{n=-\infty }^{\infty }b_{n}t^{n}\). Then, we can define an infinite Toeplitz matrix by the following:

$$T[\mathfrak{b}]=\left( \begin{array}{cccc} b_{0} & b_{-1} & b_{-2} & {\cdots} \\ b_{1} & b_{0} & b_{-1} & {\cdots} \\ b_{2} & b_{1} & b_{0} & {\cdots} \\ {\vdots} & {\vdots} & {\vdots} & \ddots\ \end{array} \right). $$

Let T denote the complex unit circle. Then, as t moves once around the counterclockwise-oriented \(\mathbf {T}, \mathfrak {b}(t)\) traces out a continuous and closed curve. The winding number of \(\mathfrak {b}(t),\) denoted by \(\mathbf {wind}_{\mathfrak {b}},\) is the number of times this curve surrounds the origin counterclockwise. In particular, assume that \(\mathfrak {b}(t) \neq 0\) for all \(t\in \mathbf {T}\) and that \(\mathfrak {b}(t)\) has only finitely many nonzero coefficients, that is,

$$ \mathfrak{b}(t)=\sum\limits_{j=-r}^{s}b_{j}t^{j}, $$

then, we can obtain by a direct calculation as follows:

$$ \mathfrak{b}(t)=t^{-r}b_{s}\prod\limits_{j=1}^{J}(t-\delta_{j})\prod\limits_{i=1}^{I}(t-\mu_{i})=t^{-r}\bar{\mathfrak{b}}(t), $$

where |δj| < 1 for all j and |μi| > 1 for all i, and \(\bar {\mathfrak {b}}(t)\) is called the condition polynomial for \(\mathfrak {b}(t)\). Let \(\mathfrak {p}_{j}(t)=t-\delta _{j}\) and \(\mathfrak {q}_{i}(t)=t-\mu _{i}\). Then, we have \(\mathbf {wind}_{\mathfrak {p}_{j}}=1,\mathbf {wind}_{\mathfrak {q}_{i}}=0\) for j = 1,⋯ ,J, i = 1,⋯ ,I. Therefore, the winding number of \(\mathfrak {b}(t)\) can be computed as \(\mathbf {wind}_{\mathfrak {b}}=J-r\). Moreover, since \(\mathfrak {b}(t) \neq 0\) for all \(t\in \mathbf {T},\) we know the operator \(T[\mathfrak {b}]\) is invertible modulo compact operators, that is, there exists an operator B such that both \(BT[\mathfrak {b}]-I\) and \(T[\mathfrak {b}]B-I\) are compact (see [20, Theorem 1.9]). Besides, its index is \(-\mathbf {wind}_{\mathfrak {b}}\). Hence, we arrive at the following fact.

Lemma 1

[20, pp.10] The operator \(T[\mathfrak {b}]\) is invertible on \(\mathfrak {l}^{p} (1\leq p \leq \infty )\) if and only if \(\mathfrak {b}(t)\neq 0\) for all \(t\in \mathbf {T}\) and \(\mathbf {wind}_{\mathfrak {b}}=0\).
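In practice, the winding number in Lemma 1 can be computed from the factorization above: \(\mathbf {wind}_{\mathfrak {b}}=J-r,\) where J counts the zeros of the condition polynomial inside the open unit disk. A small sketch (our own, assuming no zeros lie on the unit circle):

```python
import numpy as np

def winding_number(coeffs, r):
    """Winding number of b(t) = sum_{j=-r}^{s} b_j t^j around the origin.

    coeffs lists b_{-r}, ..., b_s (lowest power first).  Writing
    b(t) = t^{-r} * bbar(t) with bbar a polynomial of degree r + s,
    wind(b) = (# zeros of bbar inside the open unit disk) - r,
    provided b(t) != 0 on the unit circle."""
    roots = np.roots(coeffs[::-1])          # np.roots expects highest power first
    J = int(np.sum(np.abs(roots) < 1.0))    # zeros strictly inside the unit disk
    return J - r
```

For instance, b(t) = t winds once around the origin, while b(t) = 2 − t never encircles it.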

Denoting by \(T_{n}[\mathfrak {b}]\) the finite section \(T[\mathfrak {b}](1:n,1:n),\) we have \(\displaystyle \underset {n\rightarrow \infty }{\lim }T_{n}[\mathfrak {b}]=T[\mathfrak {b}]\). It is noted that the invertibility of \(T_{n}[\mathfrak {b}]\) is determined by \(T[\mathfrak {b}],\) that is,

Lemma 2

[20, pp.63] Let \(\mathfrak {b}(t)\) belong to the Wiener algebra. Then,

$$ \begin{array}{@{}rcl@{}} \underset{n\rightarrow \infty}{\limsup}\|(T_{n}[\mathfrak{b}])^{-1}\|<\infty ~\text{if} ~T[\mathfrak{b}] ~\text{is invertible,}\\ \underset{n\rightarrow \infty}{\lim}\|(T_{n}[\mathfrak{b}])^{-1}\|=\infty ~\text{if} ~T[\mathfrak{b}] ~\text{is not invertible.} \end{array} $$

On the other hand, \({R^{k}_{N}}\) is nonsingular due to the linear independence of the local basis functions. Therefore, the coefficient matrix of the linear system (19) is invertible when \(\mathfrak {c}_{N-k-1}^{k,\hat {n}}(t)\) belongs to the Wiener algebra and \(\mathbf {wind}_{\mathfrak {c}_{N-k-1}^{k,\hat {n}}}=0\).

Next, to study the convergence property of the collocation error, we introduce the remainder representation from the approximation theory.

Lemma 3

[8, pp.43] Assume

  • For the given abscissas \(a\leq \xi _{1} < {\cdots} < \xi _{m}\leq b,\) let

    $$ \varepsilon_{m}(f;t)=f(t)-\sum\limits_{j=1}^{m}L_{j}(t)f(\xi_{j}),~t\in[a,b] $$

    denote the error between f(t) and the Lagrange interpolation polynomial of degree m − 1 with respect to the given points {ξj}.

  • f(t) ∈ Cd[a,b] with 1 ≤ dm.

Then, εm(f;t) possesses the integral representation as follows:

$$ \varepsilon_{m}(f;t)={{\int}_{a}^{b}}\kappa_{d}(t,s)f^{(d)}(s)\text{ds},~~t\in [a,b], $$
(22)

where the Peano kernel κd(t,s) is given by the following:

$$ \kappa_{d}(t,s):=\frac{1}{(d-1)!}\left\{(t-s)^{d-1}_{+}-\sum\limits_{j=1}^{m}L_{j}(t)(\xi_{j}-s)_{+}^{d-1}\right\}. $$

Here,

$$ {(t-s)}^{p}_{+}:=\left\{\begin{array}{l} 0,~~t<s, \\ (t-s)^{p},~~t\geq s. \end{array}\right. $$
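Lemma 3 can be checked numerically in a small case; the sketch below (our own, fixing d = 2 and m = 2 nodes, with hypothetical names) compares the interpolation error directly with the Peano integral.

```python
import numpy as np

def peano_check(f, d2f, xi, t, a=0.0, b=1.0, n=20000):
    """Compare eps_m(f;t) = f(t) - sum_j L_j(t) f(xi_j) with the Peano
    representation int_a^b kappa_2(t,s) f''(s) ds (Lemma 3 with d = 2)."""
    xi = np.asarray(xi, dtype=float)

    def L(j, x):
        others = np.delete(xi, j)
        return np.prod((x - others) / (xi[j] - others))

    # direct interpolation error at t
    eps = f(t) - sum(L(j, t) * f(xi[j]) for j in range(len(xi)))

    # Peano kernel kappa_2(t,s) = (t-s)_+ - sum_j L_j(t) (xi_j - s)_+
    pos = lambda x: np.maximum(x, 0.0)
    s = a + (np.arange(n) + 0.5) * (b - a) / n     # midpoint quadrature nodes
    kappa = pos(t - s) - sum(L(j, t) * pos(xi[j] - s) for j in range(len(xi)))
    integral = np.sum(kappa * d2f(s)) * (b - a) / n
    return eps, integral
```

For f(t) = t³ with nodes 0.2 and 0.8, both quantities at t = 0.5 equal −0.135, in agreement with (22).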

With the winding number and remainder theory in hand, we give the condition for the solvability of \(B_{\hat {N}}\text {CBVM}_{k}\) and its corresponding convergence rate in the following theorem.

Theorem 1

Assume the given functions K(t) and g(t) in VIE (1) satisfy K(t) = 1 and \(g(t) \in C^{k+2}([0,T])\). Furthermore, suppose that the winding number of \(\displaystyle \mathfrak {c}_{N-k-1}^{k,\hat {n}}(t)\) is zero. Then, \(B_{\hat {N}}\text {CBVM}_{k}\) leads to a unique solution as \(h\rightarrow 0,\) and the collocation error satisfies the following:

$$ \|\mathbf{e}_{\hat{n}}\|_{\infty} = O(h^{k+1}), \hat{n}=0,1,\cdots,\hat{N}-1, $$
(23)

where \(\displaystyle h=\max \limits _{\hat {j}=0,\cdots ,\hat {N}-1}\{h_{\hat {j}}\}, \mathbf {e}_{\hat {n}}=(e_{h}({t}^{\hat {n}}_{1}) , e_{h}({t}^{\hat {n}}_{2}) , \cdots ,e_{h}({t}^{\hat {n}}_{N}))^{T}\).

Proof

Since the winding number of \(\displaystyle \mathfrak {c}_{N-k-1}^{k,\hat {n}}(t)\) is zero, the matrix \({B_{N}^{k}}\) in (20) is invertible due to Lemmas 1 and 2. Therefore, (19) can be solved uniquely, or equivalently, (18) has a unique solution.

The remaining work is to study the convergence property of \(B_{\hat {N}}\text {CBVM}_{k}\). Let eh(t) := u(t) − uh(t), then we get the error equation

$$ 0=\sum\limits_{\hat{j}=0}^{\hat{n}-1}h_{\hat{j}}\sum\limits_{j=0}^{N-1}{{\int}_{0}^{1}}e_{h}({t}^{\hat{j}}_{j}+\text{vh}_{\hat{j}})\text{dv} +h_{\hat{n}}\sum\limits_{j=0}^{n-1}{{\int}_{0}^{1}}e_{h}({t}^{\hat{n}}_{j}+\text{vh}_{\hat{n}})\text{dv}. $$
(24)

A similar differentiation as that employed in (19) leads to the following:

$$ h_{\hat{n}}{{\int}_{0}^{1}}e_{h}(t^{\hat{n}}_{j}+\text{vh}_{\hat{n}})\text{dv}=0,~j=0,1,\cdots,N-1. $$
(25)

According to Lemma 3, we have the following:

$$ \begin{array}{@{}rcl@{}} h_{\hat{n}}\left( {{\int}_{0}^{1}} \sum\limits_{i=0}^{k+1}e_{h}({t}^{\hat{n}}_{j+i}){\phi}_{i}^{k}(v)\text{dv}+O({h}_{\hat{n}}^{k+2})\right)&=&0,~j=0,\cdots,N-k-2, \\ h_{\hat{n}}\left( {\int}_{0}^{j-N+k+2} \sum\limits_{i=0}^{k}e_{h}({t}^{\hat{n}}_{N-k+i}){\hat{\phi}}_{i}^{k}(v)\text{dv}+O({h}_{\hat{n}}^{k+1})\right)&=&0,j=N-k-1,\cdots,N-1. \end{array} $$

Since \(e_{h}({t}^{\hat {j}}_{i})=O(h^{k+1})\) for \(\hat {j}=0,1,\cdots ,\hat {n}-1,i=0,1,\cdots ,N-1,\) we get the following:

$$ {B}_{N}^{k}\mathbf{e}_{\hat{n}}=\mathbf{r}_{\hat{n}}. $$
(26)

Here, \({B}_{N}^{k}\) is defined by (20), \(\mathbf {e}_{\hat {n}}\) is defined by \(\left (\begin {array}{llll} e_{h}({t}^{\hat {n}}_{1}), & e_{h}({t}^{\hat {n}}_{2}), & \cdots , & e_{h}({t}^{\hat {n}}_{N}) \end {array} \right )^{T}, \) and the elements of \(\displaystyle \mathbf {r}_{\hat {n}}\) are O(hk+ 1). Since the norm of the inverse of \({B}_{N}^{k}\) is bounded as \(N\rightarrow \infty \) by Lemma 2, we have the following:

$$ \|\mathbf{e}_{\hat{n}}\|_{\infty}=O(h^{k+1}), ~h\rightarrow 0. $$

This completes the proof. □

By taking k = 1,2,3,4 in \(B_{\hat {N}}\text {CBVM}_{k},\) we find that the corresponding condition polynomials \(\bar {\mathfrak {c}}_{1}(t),\bar {\mathfrak {c}}_{2}(t),\bar {\mathfrak {c}}_{3}(t),\) and \(\bar {\mathfrak {c}}_{4}(t)\) for \(\mathfrak {c}_{N-2}^{1,\hat {n}}(t),\mathfrak {c}_{N-3}^{2,\hat {n}}(t),\mathfrak {c}_{N-4}^{3,\hat {n}}(t),\) and \(\mathfrak {c}_{N-5}^{4,\hat {n}}(t)\) are as follows:

$$ \begin{array}{@{}rcl@{}} \bar{\mathfrak{c}}_{1}(t)&=&\frac{5}{12}t^{2}+\frac{2}{3}t-\frac{1}{12}, \\ \bar{\mathfrak{c}}_{2}(t)&=&\frac{3}{8}t^{3}+\frac{19}{24}t^{2}-\frac{5}{24}t+\frac{1}{24}, \\ \bar{\mathfrak{c}}_{3}(t)&=&\frac{251}{720}t^{4}+\frac{323}{360}t^{3}-\frac{11}{30}t^{2}+\frac{53}{360}t-\frac{19}{720}, \\ \bar{\mathfrak{c}}_{4}(t)&=&\frac{95}{288}t^{5}+\frac{1427}{1440}t^{4}-\frac{133}{240}t^{3}+\frac{241}{720}t^{2}-\frac{173}{1440}t+\frac{3}{160}. \end{array} $$

In Fig. 1, we show the distribution of the roots of the above polynomials. It can be seen that the winding numbers of \({\mathfrak {c}}_{N-2}^{1,\hat {n}}(t),{\mathfrak {c}}_{N-3}^{2,\hat {n}}(t),{\mathfrak {c}}_{N-4}^{3,\hat {n}}(t),\) and \({\mathfrak {c}}_{N-5}^{4,\hat {n}}(t)\) are zero, which coincides with the condition in Theorem 1.

Fig. 1

Distribution of roots of condition polynomials for various k
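The coefficients above can be reproduced numerically: for K(t) = 1, the coefficient of \(t^{k+1-i}\) in \(\bar{\mathfrak{c}}_{k}(t)\) is simply \(\int_{0}^{1}{\phi}_{i}^{k}(v)\,\text{dv},\) and the zero-winding condition amounts to \(\bar{\mathfrak{c}}_{k}\) having exactly k zeros in the open unit disk. A verification sketch (our own, not from the paper):

```python
import numpy as np

def condition_poly(k):
    """Coefficients (highest power first) of the condition polynomial
    bar{c}_k(t) = sum_i (int_0^1 phi_i^k(v) dv) t^{k+1-i}, for the kernel K(t) = 1."""
    coeffs = []
    for i in range(k + 2):
        roots = [j for j in range(k + 2) if j != i]
        denom = np.prod([i - j for j in roots], dtype=float)
        p = np.poly(roots) / denom              # phi_i^k as a polynomial in v
        P = np.polyint(p)                        # antiderivative, constant term 0
        coeffs.append(np.polyval(P, 1.0) - np.polyval(P, 0.0))
    return np.array(coeffs)

def roots_inside(k):
    """Number of zeros of bar{c}_k strictly inside the unit disk; the winding
    number of c_{N-k-1}^{k} is zero exactly when this equals k."""
    r = np.roots(condition_poly(k))
    return int(np.sum(np.abs(r) < 1.0))
```

The coefficients also sum to one, since the \({\phi}_{i}^{k}\) form a partition of unity on [0,1]; this gives a quick consistency check on each row of the display above.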

3.2 The case of general K(t)

In this subsection, we consider a general kernel function, which can be written as follows:

$$ K(t)=K(0)+t\bar{K}(t) $$
(27)

with K(0)≠ 0. We summarize the main theoretical result in the following theorem.

Theorem 2

Assume the given functions K(t) and g(t) in VIE (1) satisfy K(0)≠ 0, \(K(t) \in C^{k+2}([0,T]),\) and \(g(t) \in C^{k+2}([0,T])\). Furthermore, suppose that the winding number of \(\mathfrak {c}_{N-k-1}^{k,\hat {n}}(t)\) is zero. Then, \(B_{\hat {N}}\text {CBVM}_{k}\) leads to a unique solution as \(h\rightarrow 0,\) and the collocation error satisfies the following:

$$ \|\mathbf{e}_{\hat{n}}\|_{\infty} = O(h^{k+1}), ~\hat{n}=0,1,\cdots,\hat{N}-1, $$
(28)

where \(\displaystyle h=\max \limits _{\hat {j}=0,\cdots ,\hat {N}-1}\{h_{\hat {j}}\}, \mathbf {e}_{\hat {n}}=(e_{h}(t^{\hat {n}}_{1}) , e_{h}(t^{\hat {n}}_{2}) , \cdots ,e_{h}(t^{\hat {n}}_{N}))^{T}\).

Proof

Noting the local representation of the collocation solution uh(t), we have, for fixed \(\hat {n}\) and n = 0,1,⋯ ,N − 1, the following:

$$ \begin{array}{@{}rcl@{}} g(t_{n}^{\hat{n}})&=&\sum\limits_{\hat{j}=0}^{\hat{n}-1}h_{\hat{j}}\sum\limits_{j=0}^{N-1}{{\int}_{0}^{1}}K({t}_{n}^{\hat{n}}-({t}_{j}^{\hat{j}}+\text{vh}_{\hat{j}})) u_{h}({t}_{j}^{\hat{j}}+vh_{\hat{j}})\text{dv}\\ &&+h_{\hat{n}}\sum\limits_{j=0}^{n-1}{{\int}_{0}^{1}} K({t}_{n}^{\hat{n}}-({t}_{j}^{\hat{n}}+\text{vh}_{\hat{n}})) u_{h}({t}_{j}^{\hat{n}}+\text{vh}_{\hat{n}})\text{dv}. \end{array} $$
(29)

For n = N − 1,N − 2,⋯ ,0, computing \(g({t}_{n+1}^{\hat {n}})-g({t}_{n}^{\hat {n}})\) successively results in the following:

$$ \begin{array}{@{}rcl@{}} &g&(t_{n+1}^{\hat{n}})-g(t_{n}^{\hat{n}})\\ &=&\sum\limits_{\hat{j}=0}^{\hat{n}-1}h_{\hat{j}}\sum\limits_{j=0}^{N-1}{{\int}_{0}^{1}} K({t}_{n+1}^{\hat{n}}-({t}_{j}^{\hat{j}}+\text{vh}_{\hat{j}})) u_{h}({t}_{j}^{\hat{j}}+\text{vh}_{\hat{j}})\text{dv}\\ & &-\sum\limits_{\hat{j}=0}^{\hat{n}-1}h_{\hat{j}}\sum\limits_{j=0}^{N-1}{{\int}_{0}^{1}} K({t}_{n}^{\hat{n}}-({t}_{j}^{\hat{j}}+\text{vh}_{\hat{j}})) u_{h}({t}_{j}^{\hat{j}}+\text{vh}_{\hat{j}})\text{dv}\\ &&+h_{\hat{n}}\sum\limits_{j=0}^{n}{{\int}_{0}^{1}}K({t}_{n+1}^{\hat{n}}-({t}_{j}^{\hat{n}}+\text{vh}_{\hat{n}}))u_{h}({t}_{j}^{\hat{n}}+vh_{\hat{n}})\text{dv}\\ &&-h_{\hat{n}}\sum\limits_{j=0}^{n-1}{{\int}_{0}^{1}}K({t}_{n}^{\hat{n}}-({t}_{j}^{\hat{n}}+\text{vh}_{\hat{n}}))u_{h}({t}_{j}^{\hat{n}}+vh_{\hat{n}})\text{dv} \end{array} $$
(30)

By denoting

$$ \mathbf{b}^{\hat{n}}=\left( \begin{array}{c} g(t_{1}^{\hat{n}})- g(t_{0}^{\hat{n}})-\textsc{Lag}_{1}^{\hat{n}}+\textsc{Lag}_{N}^{\hat{n}-1}\\ g(t_{2}^{\hat{n}})- g(t_{1}^{\hat{n}}) -\textsc{Lag}_{2}^{\hat{n}}+\textsc{Lag}_{1}^{\hat{n}} \\ {\vdots} \\ g(t_{N}^{\hat{n}})- g(t_{N-1}^{\hat{n}}) -\textsc{Lag}_{N}^{\hat{n}}+\textsc{Lag}_{N-1}^{\hat{n}} \end{array} \right) ~\text{and}~ \mathbf{1}=\left( \begin{array}{c} 1 \\ 1 \\ {\vdots} \\ 1 \end{array} \right), $$

where

$$ \textsc{Lag}_{m}^{\hat{m}}=\sum\limits_{\hat{j}=0}^{\hat{n}-1}h_{\hat{j}}\sum\limits_{j=0}^{N-1}{{\int}_{0}^{1}} K({t}_{m}^{\hat{m}}-({t}_{j}^{\hat{j}}+\text{vh}_{\hat{j}})) u_{h}({t}_{j}^{\hat{j}}+\text{vh}_{\hat{j}})\text{dv}, $$

we obtain the matrix form of (30)

$$ \mathbf{M}^{\hat{n}}\cdot\mathbf{1}=\mathbf{b}^{\hat{n}}, $$
(31)

or equivalently,

$$ \left( \begin{array}{cccc} {M}_{1,0}^{\hat{n}} & 0 & {\cdots} & 0 \\ {M}_{2,0}^{\hat{n}}-{M}_{1,0}^{\hat{n}} & {M}_{2,1}^{\hat{n}} & {\cdots} & 0 \\ {\vdots} & {\vdots} & {\vdots} & {\vdots} \\ {M}_{N,0}^{\hat{n}}-{M}_{N-1,0}^{\hat{n}} & {M}_{N,1}^{\hat{n}}-{M}_{N-1,1}^{\hat{n}} & {\cdots} & {M}_{N,N-1}^{\hat{n}} \end{array} \right) \left( \begin{array}{c} 1 \\ 1 \\ {\vdots} \\ 1 \end{array} \right)= \left( \begin{array}{c} b_{1}^{\hat{n}} \\ b_{2}^{\hat{n}} \\ {\vdots} \\ b_{N}^{\hat{n}} \end{array} \right), $$
(32)

with

$$ M_{i,j}^{\hat{n}}=h_{\hat{n}}{{\int}_{0}^{1}} K(t_{i}^{\hat{n}}-(t_{j}^{\hat{n}}+\text{vh}_{\hat{n}}))u_{h}(t_{j}^{\hat{n}}+\text{vh}_{\hat{n}})\text{dv}. $$

Noting that \(M_{i,j}^{\hat {n}}=O(h_{\hat {n}})\) and utilizing the mean value theorem, we have \(M_{i,j-1}^{\hat {n}}-M_{i-1,j-1}^{\hat {n}}=O(h_{\hat {n}}^{2})\) for j < i as \(h_{\hat {n}}\rightarrow 0\). Employing Gaussian elimination leads to the following:
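This O(h²) estimate can be verified directly from the definition of \(M_{i,j}^{\hat{n}}\): for j < i,

$$ \begin{array}{@{}rcl@{}} M_{i,j-1}^{\hat{n}}-M_{i-1,j-1}^{\hat{n}} &=&h_{\hat{n}}{{\int}_{0}^{1}}\left[K(t_{i}^{\hat{n}}-(t_{j-1}^{\hat{n}}+\text{vh}_{\hat{n}}))-K(t_{i-1}^{\hat{n}}-(t_{j-1}^{\hat{n}}+\text{vh}_{\hat{n}}))\right] u_{h}(t_{j-1}^{\hat{n}}+\text{vh}_{\hat{n}})\text{dv}\\ &=&h_{\hat{n}}^{2}{{\int}_{0}^{1}}K^{\prime}(\eta_{v}) u_{h}(t_{j-1}^{\hat{n}}+\text{vh}_{\hat{n}})\text{dv}=O(h_{\hat{n}}^{2}), \end{array} $$

since \(t_{i}^{\hat{n}}-t_{i-1}^{\hat{n}}=h_{\hat{n}}\) and both \(K^{\prime}\) and \(u_{h}\) remain bounded; here \(\eta_{v}\) denotes the intermediate point supplied by the mean value theorem.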

$$ \left( \begin{array}{cccc} M_{1,0}^{\hat{n}} & 0 & {\cdots} & 0 \\ 0 & M_{2,1}^{\hat{n}} & {\cdots} & 0 \\ {\vdots} & {\vdots} & {\vdots} & {\vdots} \\ 0 & 0 & {\cdots} & M_{N,N-1}^{\hat{n}} \end{array} \right) \left( \begin{array}{c} 1 \\ 1 \\ {\vdots} \\ 1 \end{array} \right)= \left( \begin{array}{c} b_{1}^{\hat{n}} \\ b_{2}^{\hat{n}}+O(h_{\hat{n}}) \\ {\vdots} \\ b_{N}^{\hat{n}}+O(h_{\hat{n}}) \end{array} \right). $$
(33)

According to (27), \(M_{j,j-1}^{\hat{n}}\) can be decomposed into the following:

$$ M_{j,j-1}^{\hat{n}}=h_{\hat{n}}K(0){{\int}_{0}^{1}}u_{h}(t_{j-1}^{\hat{n}}+\text{vh}_{\hat{n}})\text{dv} +h_{\hat{n}}^{2}{{\int}_{0}^{1}}(1-v)K^{\prime}(\xi_{v})u_{h}(t_{j-1}^{\hat{n}}+\text{vh}_{\hat{n}})\text{dv}, $$

where, by the mean value theorem, \(\xi_{v}\) lies between 0 and \(h_{\hat{n}}(1-v)\).

Hence, for sufficiently small \(h_{\hat {n}},\) the linear system (29) has a unique solution when \({B}_{N}^{k}\) defined in (20) is invertible and the norm of its inverse is bounded as \(N\rightarrow \infty \).

Now, let us consider the collocation error eh(t) := u(t) − uh(t). By a similar deduction, we obtain the following:

$$ \begin{array}{@{}rcl@{}} 0&=&\sum\limits_{\hat{j}=0}^{\hat{n}-1}h_{\hat{j}}\sum\limits_{j=0}^{N-1}{{\int}_{0}^{1}} K({t}_{n+1}^{\hat{n}}-({t}_{j}^{\hat{j}}+\text{vh}_{\hat{j}})) e_{h}({t}_{j}^{\hat{j}}+\text{vh}_{\hat{j}})\text{dv}\\ &&-\sum\limits_{\hat{j}=0}^{\hat{n}-1}h_{\hat{j}}\sum\limits_{j=0}^{N-1}{{\int}_{0}^{1}} K({t}_{n}^{\hat{n}}-({t}_{j}^{\hat{j}}+\text{vh}_{\hat{j}})) e_{h}({t}_{j}^{\hat{j}}+\text{vh}_{\hat{j}})\text{dv}\\ &&+h_{\hat{n}}\sum\limits_{j=0}^{n}{{\int}_{0}^{1}} K({t}_{n+1}^{\hat{n}}-({t}_{j}^{\hat{n}}+\text{vh}_{\hat{n}}))e_{h}({t}_{j}^{\hat{n}}+vh_{\hat{n}})\text{dv}\\ &&-h_{\hat{n}}\sum\limits_{j=0}^{n-1}{{\int}_{0}^{1}} K({t}_{n}^{\hat{n}}-({t}_{j}^{\hat{n}}+\text{vh}_{\hat{n}}))e_{h}({t}_{j}^{\hat{n}}+vh_{\hat{n}})\text{dv} \end{array} $$
(34)

For j = 0,1,⋯ ,N − k − 2, with Lemma 3 in mind, interpolating \(e_{h}({t}_{j}^{\hat {j}}+\text {vh}_{\hat {j}})\) at \({t}_{j}^{\hat {j}},{t}_{j+1}^{\hat {j}},\cdots ,{t}_{j+k+1}^{\hat {j}}\) leads to the following:

$$ e_{h}(t_{j}^{\hat{j}}+\text{vh}_{\hat{j}})=\sum\limits_{i=0}^{k+1}e_{h}({t}_{j+i}^{\hat{j}}){\phi}_{i}^{k}(v)+O({h}^{k+2}_{\hat{j}}). $$
(35)

On the other hand, we have, for j = N − k − 1,⋯ ,N − 1,

$$ e_{h}(t_{j}^{\hat{j}}+\text{vh}_{\hat{j}})=\sum\limits_{i=0}^{k}e_{h}({t}_{N-k+i}^{\hat{j}})\hat{\phi}_{i}^{k}(v)+O({h}^{k+1}_{\hat{j}}). $$
(36)

By the mean value theorem, we have the following:

$$ K(t_{n+1}^{\hat{n}}-(t_{j}^{\hat{j}}+\text{vh}_{\hat{j}})) -K(t_{n}^{\hat{n}}-(t_{j}^{\hat{j}}+\text{vh}_{\hat{j}})) = h_{\hat{n}}K^{\prime}(\xi_{n}^{\hat{j}}),~\xi_{n}^{\hat{j}}\!\in\! (t_{n}^{\hat{n}}-t_{j+1}^{\hat{j}},t_{n+1}^{\hat{n}}-t_{j}^{\hat{j}}). $$

For \(\hat {j}=0,\cdots ,\hat {n}-1,\) and i = 1,⋯ ,N, the fact that \(e_{h}(t^{\hat {j}}_{i})=O(h^{k+1})\) implies

$$ \textsc{Err}_{n+1}^{\hat{n}}-\textsc{Err}_{n}^{\hat{n}}=O(h^{k+2}), $$

where

$$ {\textsc{Err}}_{m}^{\hat{n}}=\sum\limits_{\hat{j}=0}^{\hat{n}-1}h_{\hat{j}}\sum\limits_{j=0}^{N-1}{{\int}_{0}^{1}} K({t}_{m}^{\hat{n}}-({t}_{j}^{\hat{j}}+\text{vh}_{\hat{j}})) e_{h}({t}_{j}^{\hat{j}}+\text{vh}_{\hat{j}})\text{dv}. $$
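The claimed order follows by writing the difference out: by the definition above,

$$ {\textsc{Err}}_{n+1}^{\hat{n}}-{\textsc{Err}}_{n}^{\hat{n}} =\sum\limits_{\hat{j}=0}^{\hat{n}-1}h_{\hat{j}}\sum\limits_{j=0}^{N-1}{{\int}_{0}^{1}} \left[K({t}_{n+1}^{\hat{n}}-({t}_{j}^{\hat{j}}+\text{vh}_{\hat{j}}))-K({t}_{n}^{\hat{n}}-({t}_{j}^{\hat{j}}+\text{vh}_{\hat{j}}))\right] e_{h}({t}_{j}^{\hat{j}}+\text{vh}_{\hat{j}})\text{dv}, $$

in which each bracketed difference is O(h) by the mean value theorem, while \(e_{h}({t}_{j}^{\hat{j}}+\text{vh}_{\hat{j}})=O(h^{k+1})\) by (35) and (36); since the blocks together cover an interval of length at most T, the double sum contributes only a bounded factor, which gives O(h^{k+2}).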

Let \(\mathbf{d}^{\hat{n}}\) denote an N × 1 vector whose nth element is \({\textsc {Err}}_{n-1}^{\hat {n}}-{\textsc {Err}}_{n}^{\hat {n}}\). Then, (34) can be rewritten in the compact form,

$$ \mathbf{E}^{\hat{n}}\cdot\mathbf{1}=\mathbf{d}^{\hat{n}}, $$
(37)

or equivalently,

$$ \left( \begin{array}{cccc} {E}_{1,0}^{\hat{n}} & 0 & {\cdots} & 0 \\ {E}_{2,0}^{\hat{n}}-{E}_{1,0}^{\hat{n}} & E_{2,1}^{\hat{n}} & {\cdots} & 0\\ {\vdots} & {\vdots} & {\vdots} & {\vdots} \\ {E}_{N,0}^{\hat{n}}-{E}_{N-1,0}^{\hat{n}} & {E}_{N,1}^{\hat{n}}-{E}_{N-1,1}^{\hat{n}} & {\cdots} & {E}_{N,N-1}^{\hat{n}} \end{array} \right) \left( \begin{array}{c} 1 \\ 1 \\ {\vdots} \\ 1 \end{array} \right)= \left( \begin{array}{c} {d}_{1}^{\hat{n}} \\ {d}_{2}^{\hat{n}} \\ {\vdots} \\ {d}_{N}^{\hat{n}} \end{array} \right), $$
(38)

with

$$ E_{i,j}^{\hat{n}}=h_{\hat{n}}{{\int}_{0}^{1}} K(t_{i}^{\hat{n}}-(t_{j}^{\hat{n}}+\text{vh}_{\hat{n}}))e_{h}(t_{j}^{\hat{n}}+\text{vh}_{\hat{n}})\text{dv}. $$

Noting that \(E_{i,j}^{\hat{n}}\) goes to zero for i > j as \(h\rightarrow 0\), and that \(E_{i,j-1}^{\hat{n}}-E_{i-1,j-1}^{\hat{n}}\) is one order higher than \(E_{j,j-1}^{\hat{n}}\) for j < i, we obtain by Gaussian elimination the following:

$$ \left( \begin{array}{cccc} E_{1,0}^{\hat{n}} & 0 & {\cdots} & 0 \\ 0 & E_{2,1}^{\hat{n}} & {\cdots} & 0 \\ {\vdots} & {\vdots} & {\vdots} & {\vdots} \\ 0 & 0 & {\cdots} & E_{N,N-1}^{\hat{n}} \end{array} \right) \left( \begin{array}{c} 1 \\ 1 \\ {\vdots} \\ 1 \end{array} \right)= \left( \begin{array}{c} d_{1}^{\hat{n}} \\ d_{2}^{\hat{n}}+O(h^{k+3}) \\ {\vdots} \\ d_{N}^{\hat{n}}+O(h^{k+3}) \end{array} \right). $$
(39)

Hence, for sufficiently small h, utilizing Lemmas 1 and 2, we arrive at the following:

$$ \|\mathbf{e}_{\hat{n}}\|_{\infty}=O(h^{k+1}), ~h\rightarrow 0. $$

This completes the proof. □

4 Numerical experiments

In this section, we first carry out some tests to illustrate the numerical performance of \(B_{\hat {N}}\text {CBVM}_{k}\). Then, the first-kind Volterra integral equation arising in the scattering problem is solved by collocation boundary value methods.

Example 1

We utilize \(B_{\hat {N}}\text {CBVM}_{k}\) to solve the following VIE of the first kind,

$$ {{\int}_{0}^{t}}\cos(t-s)u(s)\text{ds}=t\cos t, t\in [0,2]. $$
(40)

The solution of this equation is as follows:

$$ u(t)=2\cos t-1. $$
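This closed-form solution can be checked independently of any collocation scheme. The sketch below (an illustration, not part of \(B_{\hat{N}}\text{CBVM}_{k}\); the function name `lhs` is our own) evaluates the left-hand side of (40) at u(s) = 2cos s − 1 with a composite Simpson rule and compares it with t cos t:

```python
import math

def lhs(t, n=2000):
    # Composite Simpson approximation of \int_0^t cos(t-s) u(s) ds,
    # inserting the claimed exact solution u(s) = 2 cos s - 1.
    f = lambda s: math.cos(t - s) * (2.0 * math.cos(s) - 1.0)
    h = t / n
    acc = f(0.0) + f(t)
    for i in range(1, n):
        acc += (4.0 if i % 2 else 2.0) * f(i * h)
    return acc * h / 3.0

# The result should match the right-hand side g(t) = t cos t of (40)
for t in (0.5, 1.0, 2.0):
    assert abs(lhs(t) - t * math.cos(t)) < 1e-10
```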

We first consider B1CBVMk with k = 1 and k = 2, respectively. Computed results are shown in Table 1, where “error” denotes \(\|\mathbf {e}_{\hat {n}}\|_{\infty }\) and “order” is computed by \(\displaystyle \log _{2}\frac {\text {error}_{\text {previous}}}{\text {error}_{\text {current}}}\). Then, we divide the interval [0,2] uniformly into 4 parts and apply CBVMk on each subinterval. The convergence rates are given in Table 2. In Table 3, we present the results computed by B3CBVMk with T1 = T/5 and T2 = 3T/4 as the coarse grid.
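For readers who want a runnable baseline, the sketch below solves (40) with a classical first-order right-endpoint product rectangle rule — not the \(B_{\hat{N}}\text{CBVM}_{k}\) of this paper, and with illustrative helper names — and computes “order” exactly as in the tables, i.e., log2(error_previous/error_current):

```python
import math

def solve_first_kind(K, g, T, N):
    # Right-endpoint product rectangle rule for \int_0^t K(t-s)u(s)ds = g(t):
    #   h * sum_{j=1..n} K(t_n - t_j) u_j = g(t_n),  n = 1..N,
    # solved by forward substitution (requires K(0) != 0).
    h = T / N
    u = [0.0] * (N + 1)  # u[j] approximates u(t_j); u[0] is not used
    for n in range(1, N + 1):
        s = sum(K((n - j) * h) * u[j] for j in range(1, n))
        u[n] = (g(n * h) / h - s) / K(0.0)
    return u, h

def max_error(N):
    # Solve (40): K = cos, g(t) = t cos t, exact solution u(t) = 2 cos t - 1.
    u, h = solve_first_kind(math.cos, lambda t: t * math.cos(t), 2.0, N)
    return max(abs(u[n] - (2.0 * math.cos(n * h) - 1.0)) for n in range(1, N + 1))

# "order" computed exactly as in the tables: log2(error_previous / error_current)
errs = [max_error(N) for N in (100, 200, 400)]
orders = [math.log2(errs[i] / errs[i + 1]) for i in range(2)]
```

This simple scheme only attains first order; it is included to make the error/order bookkeeping of Tables 1, 2, and 3 concrete, not to reproduce their accuracy.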

Table 1 Absolute errors and convergence rates of B1CBVMk for example 1
Table 2 Absolute errors and convergence rates of B4CBVMk for example 1
Table 3 Absolute errors and convergence rates of B3CBVMk for example 1

It can be found in Tables 1, 2, and 3 that the convergence orders for \(B_{\hat {N}}\text {CBVM}_{1}\) and \(B_{\hat {N}}\text {CBVM}_{2}\) are 2 and 3, respectively, which are in accordance with the theoretical estimates given in Theorem 2.

Example 2

In the study of acoustic scattering, a class of single-layer potential equations can be changed into the following:

$$ {{\int}_{0}^{t}}J_{0}(\omega(t-s))u(s)\text{ds}=g(t), t\in [0,T], $$
(41)

by polar coordinate and spatial Fourier transforms [9]. Here, J0(t) denotes the Bessel function of the first kind of order zero. Furthermore, with the help of the Laplace transform [21], the exact solution can be represented as follows:

$$ u(t)=g^{\prime}(t) +\omega{{\int}_{0}^{t}}\frac{J_{1}(\omega (t-s))}{t-s}g(s)\text{ds}. $$
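This representation can be sanity-checked numerically. Assuming SciPy is available (and using a test pair of our own choice, not from the paper), the classical Laplace-transform identity \({\int }_{0}^{t}J_{0}(\omega (t-s))\cos (\omega s)\text {ds}=tJ_{0}(\omega t)\) says that g(t) = tJ0(ωt) corresponds to u(t) = cos(ωt), so the formula above should return cos(ωt) for this g:

```python
import math
from scipy.special import j0, j1  # assumes SciPy is installed

def u_from_formula(t, w, n=4000):
    # Evaluate u(t) = g'(t) + w * \int_0^t J1(w(t-s))/(t-s) g(s) ds by Simpson,
    # for the test data g(t) = t*J0(w*t), whose exact preimage is u(t) = cos(w*t).
    g = lambda s: s * j0(w * s)
    gp = j0(w * t) - w * t * j1(w * t)  # g'(t), using J0'(x) = -J1(x)
    def f(s):
        x = t - s
        # J1(w*x)/x -> w/2 as x -> 0: the singularity is removable
        kern = 0.5 * w if x < 1e-12 else j1(w * x) / x
        return kern * g(s)
    h = t / n
    acc = f(0.0) + f(t)
    for i in range(1, n):
        acc += (4.0 if i % 2 else 2.0) * f(i * h)
    return gp + w * acc * h / 3.0

w, t = 10.0, 0.7
assert abs(u_from_formula(t, w) - math.cos(w * t)) < 1e-6
```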

Letting T = 2 and ω = 10, we illustrate the performance of \(B_{\hat {N}}\text {CBVM}_{k}\) in Tables 4 and 5. Moreover, we show the convergence property of B1CBVMk with respect to ω in Fig. 2, where the moment integrals are computed via Lommel functions [22].

Table 4 Absolute errors and convergence rates of B1CBVMk for example 2
Table 5 Absolute errors and convergence rates of B3CBVMk for example 2
Fig. 2 The asymptotic order of B1CBVMk with respect to the frequency ω

Numerical results given in Tables 4 and 5 verify the estimates in Theorem 2 again. In Fig. 2, we plot \(\|\mathbf {e}_{\hat {n}}\|_{\infty }\) scaled by the asymptotic order, that is, \(\displaystyle \|\mathbf {e}_{\hat {n}}\|_{\infty }\cdot \omega ^{1/2}\). The moderately varying circles in this figure imply that \(\|\mathbf {e}_{\hat {n}}\|_{\infty }\cdot \omega ^{1/2}\) behaves as O(1) as ω goes to infinity, or equivalently, that \(\|\mathbf {e}_{\hat {n}}\|_{\infty }\) behaves as O(ω− 1/2). Therefore, \(B_{\hat {N}}\text {CBVM}_{k}\) shares the property that the higher the oscillation of the kernel function, the better the approximation. By employing the methodology in [23], the convergence rate of \(B_{\hat {N}}\text {CBVM}_{k}\) in terms of the frequency ω is expected to be derivable. The details are omitted for brevity.

5 Final remark

The main results of this paper are contained in Sections 2 and 3. By employing multistep interpolation, we have derived the block collocation boundary value method. For the first-kind VIE of convolution type, a sufficient condition for the unique solvability and the convergence property of the collocation boundary value solution are established.

To construct the collocation solution, we rewrite uh(t) in \([t_{N-k-1},t_{N}]\) as follows:

$$ u_{h}(t_{N-k-1}+\text{vh})=\sum\limits_{i=0}^{k}u_{h}(t_{N-k+i})\hat{\phi}_{i}(v), $$

which leads to relatively poor approximations in the final several steps (see Fig. 3). In fact, the numerical performance of the collocation boundary value method can be improved by employing the following local representation in \([t_{N-k-1},t_{N}]\):

$$ \bar{u}_{h}(t_{N-k-1}+\text{vh})=\sum\limits_{i=0}^{k+1}u_{h}(t_{N-k-1+i})\phi_{i}(v), $$

which leads to a modified block collocation boundary value method. According to the remainder theory of Lagrange interpolation, we have the following:

$$ \bar{u}_{h}(t_{N-k-1}+vh)=\sum\limits_{i=0}^{k}\bar{u}_{h}(t_{N-k+i})\hat{\phi}_{i}(v)+O(h^{k+1}). $$

Therefore, \(\bar {u}_{h}(t)\) can be considered as a perturbation of uh(t). The existence and convergence rate of the modified collocation solution \(\bar {u}_{h}(t)\) can be deduced with the help of \(B_{\hat {N}}\text {CBVM}_{k}\). We employ the modified collocation method with k = 1 to solve the VIEs in examples 1 and 2 again. Computed results are presented in Table 6, where “E1” and “E2” indicate that the tested problems are the same as those in example 1 and example 2, respectively. It can be found that the convergence rate increases to k + 2 in these numerical experiments. Furthermore, we show pointwise errors of the modified collocation method in Fig. 4. We find that the solution of (40) is properly approximated in the last several steps (compared with Fig. 3).
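The one-order gap between the two end-interval representations mirrors a basic Lagrange-remainder experiment. The sketch below (illustrative code with hypothetical helper names) interpolates a smooth function at the last k + 2 versus k + 1 equispaced nodes for k = 1 and reports the observed orders:

```python
import math

def lagrange_eval(nodes, vals, x):
    # Direct Lagrange-form evaluation of the interpolating polynomial
    total = 0.0
    for i, (xi, fi) in enumerate(zip(nodes, vals)):
        w = 1.0
        for j, xj in enumerate(nodes):
            if j != i:
                w *= (x - xj) / (xi - xj)
        total += fi * w
    return total

def end_interp_error(h, npts):
    # Interpolate exp on the last `npts` equispaced nodes ending at t = 1
    # and measure the error at the midpoint of the final subinterval.
    nodes = [1.0 - (npts - 1 - i) * h for i in range(npts)]
    x = 1.0 - 0.5 * h
    vals = [math.exp(t) for t in nodes]
    return abs(lagrange_eval(nodes, vals, x) - math.exp(x))

# For k = 1: k + 2 = 3 nodes give observed order ~3, k + 1 = 2 nodes give ~2
for npts, expect in ((3, 3.0), (2, 2.0)):
    order = math.log2(end_interp_error(0.1, npts) / end_interp_error(0.05, npts))
    assert abs(order - expect) < 0.4
```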

Fig. 3 Pointwise errors of B1CBVM1 for example 1

Table 6 Absolute errors and convergence rates of the modified B2CBVM1
Fig. 4 Pointwise errors of modified B1CBVM1 for example 1