1 Introduction

Integral equations with highly oscillatory kernels occur in a number of applications in electromagnetics, acoustic scattering, and engineering. For example, many problems of scattering of time-harmonic acoustic or electromagnetic waves can be formulated as the Helmholtz equation

$$ \Delta u + \omega^2u = 0, \quad\mbox{in $\mathbb{R}^{d}\setminus \varOmega$, $d=2$, $3$}, $$
(1.1)

subject to appropriate boundary conditions [3, 9, 19, 24]. Here, Ω is the scattering object and the wave number ω>0 is proportional to the frequency of the incident wave. Standard schemes for solving this problem become prohibitively expensive as ω→∞. Langdon and Chandler-Wilde [19] reformulated it as an integral equation,

$$u(\mathbf{x})=\int_{T_H}\frac{\partial H_0^{(1)}(\omega|\mathbf{x-y}|)}{\partial y_2}\phi(\mathbf{y})ds( \mathbf{y}),\quad \mathbf{x}\in U_H, $$

for some density $\phi\in L^{\infty}(T_H)$, where $H^{(1)}_{0}$ is the Hankel function of the first kind of order zero, $U_H=\{(x_1,x_2): x_2>H>0\}$ and $T_H=\{(x_1,H): x_1\in\mathbb{R}\}$. Moreover, as mentioned in [3], for the two-dimensional Helmholtz equation (1.1) in the exterior domain the solution can be written as the sum of the incoming wave $u^i$ and a scattered wave $u^s$: $u(\mathbf{x})=u^i(\mathbf{x})+u^s(\mathbf{x})$. Due to the linearity of the problem, the function $u^s$ itself satisfies the Helmholtz equation with the boundary condition

$$u^s(\mathbf{x}) =-u^i(\mathbf{x}), \quad\mathbf{x}\in \varGamma. $$

where $\varGamma$ denotes the boundary of the scatterer $\varOmega$. The unknown scattered wave can then be represented by the single-layer potential,

$$u^s(\mathbf{x}) = (Sq) (\mathbf{x}) =\frac{i}{4}\int _{\varGamma}H_0^{(1)}\bigl(\omega|\mathbf{x-y}|\bigr)q( \mathbf {y})ds(\mathbf{y}), $$

where q is the single-layer potential density function found from an integral equation of the first kind [3, 24],

$$ \frac{i}{4}\int_{\varGamma}H_0^{(1)}\bigl( \omega|\mathbf{x}-\mathbf {y}|\bigr)q(\mathbf{y})ds(\mathbf{y}) =u^i( \mathbf{x}),\quad\mathbf{x}\in\varGamma, $$
(1.2)

or an integral equation of the second kind,

$$ \frac{q(\mathbf{x})}{2}+\frac{i}{4}\int_{\varGamma} \biggl( \frac{\partial H_0^{(1)}(\omega|\mathbf{x-y}|)}{\partial n_{\mathbf{x}}}+i\eta H_0^{(1)}(\omega|\mathbf{x-y}|) \biggr)q(\mathbf{y})ds(\mathbf{y})= \frac{\partial u^i}{\partial n}(\mathbf{x})+i\eta u^i(\mathbf{x}), $$
(1.3)

with $\eta\in\mathbb{R}$ denoting a coupling parameter.

For the study of the numerical solution of a scalar retarded potential integral equation posed on an infinite flat surface,

$$\int_{\varGamma}\frac{u(x',t-|x'-x|)}{|x'-x|}dx'=a(x,t)\quad \mathrm{on}\ \varGamma\times(0,T), $$

Davies and Duncan showed in their 2004 paper [10] that by taking the continuous Fourier transform the problem can be transformed into a Volterra integral equation of the first kind,

$$ 2\pi\int_{0}^{x}\hat{u}( \omega,x-t)J_0(\omega t)dt=\hat{a}(\omega ,x),\quad \omega>0, $$
(1.4)

with a highly oscillatory Bessel kernel.

In 1985 Beezley and Krueger [5] considered direct and inverse scattering problems in dispersive media which can be reformulated, using Green’s function and invariant embedding techniques for the physical region [0,L] as L→∞, as a Volterra integral equation of the second kind,

$$ 4R(t)+G(t)+\bigl(G*(2R+R*R)\bigr) (t)=0,\quad t>0, $$
(1.5)

where $f*g$ denotes the convolution \((f*g)(x)=\int_{0}^{x}f(t)g(x-t)dt\). In some cases the equation can be solved explicitly, for example when

$$G(t)=\gamma e^{\beta t}\quad\Longleftrightarrow\quad R(t)=-e^{(\beta-\gamma/2)t} \frac{I_1(\gamma t/2)}{t} $$

or

$$G(t)=\gamma t e^{\beta t}\quad\Longleftrightarrow\quad R(t)=-2e^{\beta t} \frac{J_2(\sqrt{\gamma} t)}{t}. $$

Here, $I_1$ is the modified Bessel function of the first kind of order one, and $J_2$ is the Bessel function of the first kind of order two. For more details, see [5, 18]. However, in the nonhomogeneous case,

$$ 4R(t)+G(t)+\bigl(G*(2R+R*R)\bigr) (t)=f(t),\quad t>0, $$
(1.6)

the solution of direct and inverse scattering problems is much more complicated, in particular for large values of γ when \(R(t)=-2e^{\beta t}\frac{J_{2}(\sqrt{\gamma} t)}{t} \) or \(R(t)=-e^{(\beta-\gamma/2)t}\frac{I_{1}(\gamma t/2)}{t}\).
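As a quick sanity check of the first closed-form pair above ($G(t)=\gamma e^{\beta t}$), the following Python sketch (ours, not from [5, 18]; the values of $\beta$, $\gamma$ and the grid are arbitrary) discretizes the convolutions in (1.5) by the trapezoidal rule and evaluates the residual.

```python
# Numerical check (ours) of 4R + G + G*(2R + R*R) = 0 for
# G(t) = gamma*exp(beta*t), R(t) = -exp((beta-gamma/2)*t) * I_1(gamma*t/2) / t.
import numpy as np
from scipy.special import i1  # modified Bessel function I_1

beta, gamma = -1.0, 2.0
N, T = 4000, 2.0
t = np.linspace(0.0, T, N + 1)
dt = t[1] - t[0]

G = gamma * np.exp(beta * t)
R = np.empty_like(t)
R[0] = -gamma / 4.0                    # limit of -I_1(gamma*t/2)/t as t -> 0
R[1:] = -np.exp((beta - gamma / 2.0) * t[1:]) * i1(gamma * t[1:] / 2.0) / t[1:]

def conv(f, g):
    """Trapezoidal approximation of (f*g)(t_k) = int_0^{t_k} f(s) g(t_k - s) ds."""
    out = np.zeros_like(f)
    for k in range(1, len(f)):
        vals = f[:k + 1] * g[k::-1]
        out[k] = dt * (vals.sum() - 0.5 * (vals[0] + vals[-1]))
    return out

residual = 4.0 * R + G + conv(G, 2.0 * R + conv(R, R))
print(np.abs(residual).max())          # small (discretization error only)
```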

One feature of the integral equations (1.2)–(1.4) and (1.6) is of particular relevance: when ω≫1, the kernel function is highly oscillatory, and then the computation of integrals by standard quadrature methods is exceedingly difficult and the cost steeply increases with ω (see for example [15, 20, 27]). This means that the numerical methods based on standard numerical quadrature formulas [4, 6, 8, 14, 21, 23] are not feasible for solving these equations.
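To make the cost issue concrete, here is a small Python illustration (ours, not from the cited references): a fixed 20-point Gauss–Legendre rule applied to $\int_0^1 J_0(\omega t)\,dt$ is compared with a brute-force composite rule that uses roughly $\omega$ panels to resolve the oscillations.

```python
# Illustration (ours): a fixed 20-point Gauss-Legendre rule on [0,1] loses all
# accuracy for int_0^1 J_0(w t) dt once w is large, while a brute-force rule
# must spend O(w) nodes to resolve the oscillations.
import numpy as np
from scipy.special import j0

def gauss_legendre(f, a, b, n):
    x, w = np.polynomial.legendre.leggauss(n)
    x = 0.5 * (b - a) * x + 0.5 * (a + b)
    return 0.5 * (b - a) * np.dot(w, f(x))

def resolved_reference(omega):
    # composite 20-point Gauss rule on ~omega panels (many nodes per oscillation)
    edges = np.linspace(0.0, 1.0, max(1, int(omega)) + 1)
    return sum(gauss_legendre(lambda t: j0(omega * t), a, b, 20)
               for a, b in zip(edges[:-1], edges[1:]))

for omega in [10, 100, 1000, 10000]:
    fixed = gauss_legendre(lambda t: j0(omega * t), 0.0, 1.0, 20)
    print(omega, abs(fixed - resolved_reference(omega)))  # fixed rule gives no digits
```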

In order to obtain high-order accurate time-stepping methods for the single-layer potential equation (1.4), Brunner, Davies and Duncan [7] employed the discontinuous Galerkin (DG) method for first-kind integral equations

$$ \int_0^xK(x-t)y(t)dt=a(x),\quad K(0)=1,\quad x\in [0,1], $$
(1.7)

and analyzed its application to (1.4). However, the computational use of this method for very large values of ω (e.g. the appropriate approximation of the inner products and the discretization of the Volterra integral operator) has not yet been studied.

Volterra integral equations with highly oscillatory kernels that also contain weak singularities arise in solving various problems of mathematical physics; see for example [2, 17].

In this paper we are concerned with the numerical solution of Volterra integral equations of the second kind with highly oscillatory Bessel kernels,

$$ y(x)+\int_{0}^{x}\frac{J_m(\omega (x-t))}{(x-t)^{\alpha}}y(t)dt=f(x), \quad x\in[0,1],\quad0\le \alpha<1,\quad\omega\gg1, $$
(1.8)

Here, $y$ is the unknown function, $f$ a given smooth function, and $J_m$ the Bessel function of the first kind of order $m\ge0$.

The purpose of this paper is to present efficient methods for (1.8). In Sect. 2 we show that the solution of (1.8) is uniformly bounded for ω≥0; based on this result, we derive its asymptotics and corresponding approximations. In Sect. 3 we introduce efficient algorithms: a Filon method and collocation methods based on piecewise constant and piecewise linear polynomials. We show that these methods achieve higher accuracy as the frequency increases. Their efficiency is illustrated in Sect. 4 by a broad range of numerical examples.

2 Asymptotics of the solution of (1.8)

The theoretical aspects of the solutions of the general Volterra integral equation of the second kind,

$$y(x)+\int_{0}^{x}\frac{K(x,t)}{(x-t)^{\alpha}}y(t)dt=f(x), \quad x\in[0,1],\quad0\le\alpha<1, $$

have been investigated extensively. For further reference we cite the following regularity result.

Lemma 2.1

[6, 13, 14, 23, 25]

Assume that the functions $f=f(x)$ and $K=K(x,t)$ are continuous on their respective domains $[0,1]$ and $D=\{(x,t): 0\le t\le x\le1\}$. Then the above equation possesses a unique continuous solution $y=y(x)$. Furthermore, if $f\in C^q[0,1]$ and $K\in C^q(D)$, then $y\in C^q(0,1]\cap C[0,1]$, with $|y'(x)|\le C_{\alpha}x^{-\alpha}$ for $x\in(0,1]$ and some constant $C_{\alpha}$.

The existence of a continuous solution $y=y_{\omega}$ of the integral equation (1.8) now follows immediately from Lemma 2.1.

Theorem 2.1

(i) For every $f\in C[0,1]$ and $0\le\alpha<1$, the solution $y_{\omega}(x)$ of (1.8) is uniformly bounded for $\omega\ge0$; that is,

$$ \sup_{\omega\in[0,+\infty)}\big\|y_{\omega}(x)\big\|_{\infty}<\infty. $$
(2.1)

(ii) If $f\in C^1[0,1]$ and $0\le\alpha<m$, then $y_{\omega}\in C^1[0,1]$ and $y'_{\omega}(x)$ is uniformly bounded for $\omega\ge0$.

(iii) Let $f\in C^q[0,1]$ ($q\ge1$) and $0\le m<\alpha<1$. Then $y_{\omega}(x)$ satisfies $y_{\omega}\in C[0,1]\cap C^q(0,1]$ with $|y_{\omega}'(x)|\le C_{\alpha}x^{-\alpha}$ for $x\in(0,1]$, where $C_{\alpha}$ is a constant not depending on $\omega$.

Proof

(i) Define the operator $F: C[0,1]\rightarrow C[0,1]$ by

$$F(z) (x)=f(x)-\int_{0}^{x}\frac{J_m(\omega t)}{t^{\alpha}}z(x-t)dt. $$

Then for all $z_1,z_2\in C[0,1]$ and $x\in[0,1]$,

$$ \big|F(z_1) (x)-F(z_2) (x)\big|\le\|z_1-z_2\|_{\infty}\int_0^x\frac{|J_m(\omega t)|}{t^{\alpha}}dt \le\frac{\|z_1-z_2\|_{\infty}}{\omega^{1-\alpha}}\int_0^{\omega}\frac{|J_m(u)|}{u^{\alpha}}du. $$
(2.2)

Set $\beta(\omega)=\frac{1}{\omega^{1-\alpha}}\int_{0}^{\omega}\frac{|J_{m}(u)|}{u^{\alpha}}du$. Since $|J_{m}(s)|\le As^{-1/3}$ uniformly in $m$ for $s\ge1$, for some constant $A$ [26, p. 357], and $|J_{m}(s)|\le1$ for $m\ge0$ [1, Eq. (9.1.60)], it follows that

$$\beta(\omega)\le\frac{1}{\omega^{1-\alpha}} \biggl(\int_0^1\frac{du}{u^{\alpha}}+A\int_1^{\omega}\frac{du}{u^{\alpha+1/3}} \biggr)\longrightarrow0,\quad\omega\rightarrow\infty. $$

This shows that there exists a constant $\omega_0\ge1$ such that $\beta(\omega)\le\frac{1}{2}$ for $\omega\ge\omega_0$. Defining

$$\bar{\beta}={\max_{\omega\in[\omega_0,+\infty)}\beta (\omega)}, $$

Equation (2.2) implies that

$${ \big\|F(z_1)-F(z_2)\big\|_{\infty}\le\bar{\beta} \|z_1-z_2\|_{\infty}}, $$

and hence $F: C[0,1]\rightarrow C[0,1]$ is a contraction mapping for $\omega\ge\omega_0$. Thus, the sequence $\{z_n\}$ defined by the iteration $z_{n+1}=F(z_n)$, with $z_0(x)\equiv0$, converges to the solution $y_{\omega}(x)$ of (1.8), and the standard contraction-mapping estimates show that $y_{\omega}(x)$ is uniformly bounded by ${\frac{2-\bar{\beta}}{1-\bar{\beta}}\|f\|_{\infty}}$ for $\omega\ge\omega_0$.
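As a numerical aside (not part of the original argument), the contraction factor $\beta(\omega)$ can be evaluated directly; the sketch below, with the illustrative choices $m=0$ and $\alpha=1/2$, shows it decreasing below $\tfrac12$ once $\omega$ is moderately large.

```python
# Numerical look (ours) at beta(w) = w^(alpha-1) * int_0^w |J_m(u)| u^(-alpha) du
# from the contraction argument above, for m = 0 and alpha = 1/2.
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

def beta(w, m=0.0, alpha=0.5):
    g = lambda u: abs(jv(m, u)) * u ** (-alpha)
    val, _ = quad(g, 0.0, min(1.0, w))            # integrable singularity at u = 0
    if w > 1.0:
        tail, _ = quad(g, 1.0, w, limit=2000)     # oscillatory but smooth part
        val += tail
    return val / w ** (1.0 - alpha)

for w in [1.0, 10.0, 50.0, 200.0, 400.0]:
    print(w, beta(w))    # decreases towards 0; below 1/2 for moderately large w
```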

For $\omega\in[0,\omega_0]$, the solution $y_{\omega}(x)$ can be represented by

$$ y_{\omega}(x)=f(x)-\int_0^xR_{\alpha}(x,t, \omega)f(t)dt,\quad x\in [0,1], $$
(2.3)

(cf. [6, pp. 343–344]), where

$$ R_{\alpha}(x,t,\omega)=(x-t)^{-\alpha}Q(x,t,\omega;\alpha),\qquad Q(x,t,\omega;\alpha)={\sum_{n=1}^{\infty}(x-t)^{(n-1)(1-\alpha)}\varPhi_n(x,t,\omega;\alpha)}. $$
(2.4)

Here, $\varPhi_n(x,t,\omega;\alpha)\in C([0,1]\times[0,1]\times[0,+\infty))$ is defined recursively by

$$ \varPhi_{n+1}(x,t,\omega;\alpha)=\int_0^1(1-s)^{-\alpha}s^{n(1-\alpha)-1}\varPhi_1\bigl(x,t+(x-t)s,\omega;\alpha\bigr)\varPhi_n\bigl(t+(x-t)s,t,\omega;\alpha\bigr)ds, $$
(2.5)

with $\varPhi_1(x,t,\omega;\alpha)=J_m(\omega(x-t))$. Since the series

$${ \sum_{n=1}^{\infty}(x-t)^{(n-1)(1-\alpha)} \varPhi_n(x,t,\omega;\alpha)} $$

converges uniformly, it follows that $Q(x,t,\omega;\alpha)$ is continuous in $x$, $t$ and $\omega$, and hence $R_{\alpha}(x,t,\omega)$ possesses the same property. Therefore, by (2.3), $y_{\omega}(x)$ is uniformly bounded on $[0,1]$ for $\omega\in[0,\omega_0]$; together with the bound for $\omega\ge\omega_0$, this establishes (2.1).

(ii) From the definition of $J_m(x)$ (Abramowitz and Stegun [1, Eq. (9.1.10)]),

$$ J_m(x)= \biggl(\frac{x}{2} \biggr)^m{\sum _{n=0}^{\infty}\frac { (-\frac{1}{4}x^2 )^n}{n!\varGamma(m+n+1)}}, $$
(2.6)

we see that $f'(x)- \frac{J_{m}(\omega x)y_{\omega}(0)}{x^{\alpha}}\in C[0,1]$ and that it is uniformly bounded in $\omega$ and $x$. Here we have used that

$$\big|J_m(\omega x)\big|\le1\quad\mbox{for all $x\in[0,1]$ and $\omega\in [0,+\infty)$}\quad\mbox{[1, Eq.~(9.1.60)]} $$

and the fact that $\frac{J_{m}(\omega x)y_{\omega}(0)}{x^{\alpha}}\rightarrow0$ as $x\rightarrow0$ (since $\alpha<m$). Thus, by the proof of (i), the integral equation

$$ z(x)+\int_0^x\frac{J_m(\omega t)}{t^{\alpha}}z(x-t)dt=f'(x)- \frac{J_m(\omega x)y_{\omega}(0)}{x^{\alpha}} $$
(2.7)

has a unique solution $z\in C[0,1]$, and $z(x)$ is uniformly bounded for $\omega\ge0$. Integrating (2.7) over $[0,x]$ and interchanging the order of integration in the double integral gives

$$\int_0^xz(s)ds+\int_0^x\frac{J_m(\omega t)}{t^{\alpha}}\int_0^{x-t}z(s)ds\,dt=f(x)-f(0)-y_{\omega}(0)\int_0^x\frac{J_m(\omega t)}{t^{\alpha}}dt, $$

which by $f(0)=y_{\omega}(0)$ yields

$$y_{\omega}(0)+\int_0^xz(s)ds+\int_0^x\frac{J_m(\omega t)}{t^{\alpha}} \biggl(y_{\omega}(0)+\int_0^{x-t}z(s)ds \biggr)dt=f(x). $$

Thus, by uniqueness, $y_{\omega}(x)=y_{\omega}(0)+\int_{0}^{x}z(t)dt\in C^{1}[0,1]$, and $y_{\omega}'(x)=z(x)$ is uniformly bounded for $\omega\ge0$.

(iii) By (2.6), the kernel of (1.8) can be rewritten as

$$\frac{J_m(\omega (x-t))}{(x-t)^{\alpha}}= \biggl(\frac{\omega}{2} \biggr)^m(x-t)^{m-\alpha }{ \sum_{n=0}^{\infty}\frac{ (-\frac{\omega^2}{4}(x-t)^2 )^n}{ n!\varGamma(m+n+1)}}=:(x-t)^{m-\alpha}F(x-t). $$

Thus $F\in C^{\infty}[0,1]$, and Theorem 6.1.6 in [6, p. 346] leads to $y_{\omega}\in C[0,1]\cap C^q(0,1]$.

Moreover, since $\varPhi_1(x,t,\omega;\alpha)=J_m(\omega(x-t))$ depends only on the difference $x-t$, we see from (2.3)–(2.5) that each $\varPhi_n(x,t,\omega;\alpha)$ also depends only on $x-t$, and thus we obtain

$$R_{\alpha}(x,t,\omega)=R_{\alpha}(x-t,\omega)=(x-t)^{-\alpha }Q(x-t, \omega;\alpha) $$
$$Q(x,t,\omega;\alpha)=Q(x-t,\omega;\alpha)={ \sum_{n=1}^{\infty}(x-t)^{(n-1)(1-\alpha)} \varPhi_n(x-t,\omega;\alpha)}. $$

In particular, setting \({\bar{K}=\max\{|J_{m}(\omega (x-t))|: (x,t)\in D\}\le1}\) and using Lemma 6.1.3 in [6, p. 344] we arrive at the estimate

$${\big|(x-t)^{(n-1)(1-\alpha)}\varPhi_n(x-t,\omega;\alpha)\big|\le \bar{K}^n\frac{(\varGamma(1-\alpha))^n}{\varGamma(n(1-\alpha))}}. $$

This is independent of ω and yields

$$\big|Q(x-t,\omega;\alpha)\big|\le {\sum_{n=1}^{\infty} \frac{(\varGamma(1-\alpha))^n}{\varGamma(n(1-\alpha))}<\infty}. $$

Thus, differentiating both sides of (2.3),

$$y_{\omega}(x)=f(x)-\int_0^xR_{\alpha}(x-t, \omega)f(t)dt=f(x)-\int_0^xR_{\alpha}(t, \omega)f(x-t)dt, $$

we obtain

$$y_{\omega}'(x)=f'(x)- \frac{Q(x,\omega;\alpha)}{x^{\alpha}}f(0)- \int_0^xR_{\alpha}(t,\omega )f'(x-t)dt,\quad x\in(0,1]. $$

This leads to \(|y_{\omega}'(x)|\le C_{\alpha}x^{-\alpha}\) for some constant C α not depending on ω. □

For ease of notation, we will in the following write y(x) for the solution y ω (x) of (1.8).

Theorem 2.2

Suppose that $f\in C^1[0,1]$ and $0\le\alpha<1$. Then

$$ y(x)-f(x)+\int_0^x\frac{J_m(\omega t)}{t^{\alpha}}f(x-t)dt= \left \{ \begin{array}{l@{\quad}l}O(\omega^{-1}),&{\alpha\not=\frac{1}{2}},\\[4pt] O (\frac{\ln^2\omega}{\omega} ),&{\alpha=\frac{1}{2}}, \end{array} \right . \quad \omega\gg1. $$
(2.8)

Proof

By (1.8),

$$y(x)-f(x)+\int_0^x\frac{J_m(\omega t)}{t^{\alpha}}f(x-t)dt=\int_0^x\frac{J_m(\omega t)}{t^{\alpha}}\int_0^{x-t}\frac{J_m(\omega s)}{s^{\alpha}}y(x-t-s)\,ds\,dt. $$

The estimate (2.8) now follows from Theorem 2.1 on the uniform boundedness of $y(x)$, the estimates $|J_m(t)|\le1$ and $|J_{m}(x)|\le A_{m} x^{-\frac{1}{2}}$ for $x\gg1$ [26, p. 357], and

$$\int_0^{\omega}\frac{|J_m(t)|}{t^{\alpha}}dt\le\int _0^1 \frac{1}{t^{\alpha}}dt+A_m \int _1^{\omega}\frac{dt}{t^{\frac{1}{2}+\alpha}}=\left \{ \begin{array}{l@{\quad}l}O(\omega^{\frac{1}{2}-\alpha}),&{\alpha\not=\frac{1}{2}},\\[4pt] O (\ln\omega ),&{\alpha=\frac{1}{2}}, \end{array} \right .\quad \omega\gg1. $$

 □

Based on the asymptotic estimate (2.8) of the solution, we obtain the following simple approximation of $y(x)$.

Corollary 2.1

Suppose that $f\in C^1[0,1]$. Then

$$ y(x)=f(x)+O\bigl(\omega^{-1+\alpha}\bigr),\quad\omega\gg1. $$
(2.9)

Proof

The following lemma forms the basis for the proof of Corollary 2.1.

Lemma 2.2

For every function $h\in C^1[0,1]$, $m\ge0$ and $\omega\gg1$,

$$ \bigg|\int_0^1h(t)t^{\kappa}J_m( \omega t)dt\bigg|\le \left \{ \begin{array}{l@{\quad}l} C\omega^{-1-\kappa} (|h(1)|+\int_0^1|h'(t)|dt ),&-1<\kappa <\frac{1}{2},\\ [4pt] C\omega^{-3/2} (|h(1)|+\int_0^1|h'(t)|dt ),&\kappa\ge\frac{1}{2}, \end{array} \right . $$
(2.10)

where the constant C does not depend on h(t) and ω.

Proof

Since

$$\int_0^xt^{\kappa}J_m( \omega t)dt=\omega^{-1-\kappa}\int_0^{\omega x}u^{\kappa}J_m(u)du $$

for every $x\in[0,1]$, we recall from [1, Eq. (11.4.16)] that

$$\int_0^{+\infty}u^{\mu}J_{\nu}(u)du={ \frac{ 2^\mu\varGamma (\frac{\mu+\nu+1}{2} )}{\varGamma (\frac{\nu -\mu+1}{2} )}<+\infty,\quad \Re(\mu+\nu)>-1\quad \mbox{and}\quad \Re(\mu)< \frac{1}{2}}. $$

This shows that \(\int_{0}^{\omega x}u^{\kappa}J_{m}(u)du\) is bounded for x∈[0,1] and thus there is a constant \(\widetilde{C}\) not depending on h(t) and ω such that

$$ \bigg|\int_0^xt^{\kappa}J_m( \omega t)dt\bigg|=\bigg|\omega^{-1-\kappa}\int_0^{\omega x}u^{\kappa}J_m(u)du\bigg| \le \widetilde{C} \omega^{-1-\kappa},\quad-1<\kappa<\frac{1}{2}. $$
(2.11)

For $\kappa=\frac{1}{2}$ we find, using ${\frac{d}{dt} [t^{\nu+1}J_{\nu+1}(t) ]=t^{\nu+1}J_{\nu}(t)}$ and [1, Eq. (9.1.30)], that

$$\int_0^xt^{\kappa}J_m(\omega t)dt=\frac{1}{\omega}\int_0^xt^{\kappa-m-1}\,d \bigl[t^{m+1}J_{m+1}(\omega t) \bigr]=\frac{x^{\kappa}J_{m+1}(\omega x)}{\omega}-\frac{\kappa-m-1}{\omega}\int_0^xt^{\kappa-1}J_{m+1}(\omega t)dt. $$

Thus, we obtain

$$\frac{\kappa-m-1}{\omega}\int_0^xt^{\kappa-1}J_{m+1}( \omega t)dt=O\bigl(\omega^{-\frac{3}{2}}\bigr), $$

which, together with $J_{m}(z)=O(z^{-\frac{1}{2}})$ for $z\gg1$ [26, p. 357] (for $\omega x\le1$ one uses instead $|J_{m+1}(\omega x)|\le1$ and $x^{\kappa}\le\omega^{-\kappa}$), yields

$$\frac{x^{\kappa}J_{m+1}(\omega x)}{\omega}=O\bigl(\omega^{-\frac{3}{2}}\bigr) $$

and hence

$$\int_0^xt^{\kappa}J_m( \omega t)dt=O\bigl(\omega^{-\frac{3}{2}}\bigr). $$

If $\kappa>\frac{1}{2}$, we resort to the second mean value theorem for integration: since $t^{\kappa-\frac{1}{2}}$ is nonnegative and increasing on $[0,x]$, it follows for some $\xi\in[0,x]$ that

$$\int_0^xt^{\kappa}J_m(\omega t)dt=x^{\kappa-\frac{1}{2}}\int_{\xi}^{x}t^{\frac{1}{2}}J_m(\omega t)dt=O\bigl(\omega^{-\frac{3}{2}}\bigr). $$
Combining the above results we are led to

$$ \bigg|\int_0^xt^{\kappa}J_m( \omega t)dt\bigg|\le \left \{ \begin{array}{l@{\quad}l} C\omega^{-1-\kappa},&-1<\kappa<\frac{1}{2},\\[4pt] C\omega^{-3/2},&\kappa\ge\frac{1}{2}, \end{array} \right . $$
(2.12)

where the constant C does not depend on h(t) and ω.

The expression (2.10) now follows by an argument similar to the one used for the Corollary in [26, p. 334]: letting $F(t)=\int_{0}^{t}u^{\kappa}J_{m}(\omega u)du$ and integrating by parts,

$$\int_0^1h(t)t^{\kappa}J_m(\omega t)dt=h(1)F(1)-\int_0^1h'(t)F(t)dt, $$

and recalling (2.12). □

Figures 1 and 2 illustrate the asymptotics stated in Lemma 2.2 for $h(t)\equiv1$; they show that the asymptotic orders in $\omega$ are attained.

Fig. 1

The values of the moments scaled by $\omega^{4/5}$ for \(I=\int_{0}^{1}t^{-\frac{1}{5}}J_{1.3}(\omega t)dt\) and by $\omega^{4/3}$ for \(I=\int_{0}^{1}t^{\frac{1}{3}}J_{1/3}(\omega t)dt\), respectively; $\omega$ from 1 to 1000

Fig. 2

The values of the moments scaled by $\omega^{3/2}$ for \(I=\int_{0}^{1}t^{2}J_{2}(\omega t)dt\) and for \(I=\int_{0}^{1}t^{\frac{1}{2}}J_{0}(\omega t)dt\), respectively; $\omega$ from 1 to 1000

Lemma 2.2 now implies that

$$\int_0^x\frac{J_m(\omega t)}{t^{\alpha}}f(x-t)dt=O\bigl( \omega^{-1+\alpha}\bigr),\quad\omega\gg1, $$

and this, together with Theorem 2.2, proves the desired result.  □
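In the spirit of Figs. 1 and 2, the moment decay of Lemma 2.2 with $h(t)\equiv1$ can be checked in a few lines of Python (our sketch; the values of $\kappa$, $m$ and the frequencies are arbitrary choices):

```python
# Check (ours) of the moment decay in Lemma 2.2 with h(t) = 1: for kappa < 1/2
# the scaled quantity w^(1+kappa) * int_0^1 t^kappa J_m(w t) dt stays bounded,
# and for kappa >= 1/2 the appropriate scaling is w^(3/2).
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

def moment(kappa, m, w):
    val, _ = quad(lambda t: t ** kappa * jv(m, w * t), 0.0, 1.0, limit=4000)
    return val

kappa, m = -0.2, 0.0
for w in [50.0, 100.0, 200.0, 400.0, 800.0]:
    print(w, w ** (1.0 + kappa) * moment(kappa, m, w))   # stays O(1)
```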

3 Efficient methods for the computation of the solution of (1.8)

The accuracy of the asymptotic approximations (2.8) and (2.9) relies on $\omega$ being large. In order to obtain higher-order approximations we introduce, in the following subsections, a Filon method and two collocation methods based, respectively, on piecewise constant and piecewise linear polynomials.

Filon-type method for $\int_{a}^{b}f(x)S(\omega x)dx$ [11, 15, 16, 30]: Let $s$ be some positive integer and let $\{m_{k}\}_{k=0}^{\nu}$ be a set of multiplicities associated with the node points $a=c_{0}<c_{1}<\cdots<c_{\nu}=b$ such that $m_{0},m_{\nu}\ge s$. Suppose that $v(x)=\sum_{k=0}^{n}a_{k}x^{k}$, where $n=\sum_{k=0}^{\nu}m_{k}-1$, is the solution of the system of equations

$$v(c_k)=f(c_k),\quad v'(c_k)=f'(c_k), \quad\ldots,\quad v^{(m_k-1)}(c_k)=f^{(m_k-1)}(c_k) $$

for every integer $0\le k\le\nu$. Then the Filon-type method is defined by

$$Q_{s}^F[f]\equiv I\bigl[v(x)\bigr]=\sum _{k=0}^na_kI\bigl[x^k \bigr],\quad I\bigl[x^k\bigr]=\int_{a}^{b}x^kS( \omega x)dx, \quad k=0,1,\ldots,n. $$

Notice that, by Theorem 2.1, the solution of (1.8) is in general not differentiable at $x=0$. In this section we consider the Filon-type method for (1.8) with $s=1$, which goes back to Filon [11].
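For orientation, the following Python sketch (ours) implements this construction in the simplest setting $s=1$, $\nu=1$ with nodes $c_0=0$, $c_1=1$ and $S(\omega x)=J_m(\omega x)$; the moments are computed here by adaptive quadrature purely for illustration, whereas Sect. 3.1 below evaluates them via closed-form Lommel-function expressions.

```python
# Two-point Filon-type rule (sketch, ours) for int_0^1 f(x) J_m(w x) dx:
# f is replaced by its linear interpolant v(x) = a0 + a1*x at the endpoints,
# so only the moments I[x^k] = int_0^1 x^k J_m(w x) dx, k = 0, 1, are needed.
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

def bessel_moment(k, m, w):
    val, _ = quad(lambda x: x ** k * jv(m, w * x), 0.0, 1.0, limit=4000)
    return val

def filon_two_point(f, m, w):
    a0 = f(0.0)
    a1 = f(1.0) - f(0.0)                  # linear interpolant through f(0), f(1)
    return a0 * bessel_moment(0, m, w) + a1 * bessel_moment(1, m, w)

f, m = np.exp, 0.0
for w in [100.0, 400.0, 1600.0]:
    ref, _ = quad(lambda x: f(x) * jv(m, w * x), 0.0, 1.0, limit=8000)
    # error decays (O(1/w) by Lemma 2.2 with kappa = 0, h = f - v)
    print(w, abs(filon_two_point(f, m, w) - ref))
```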

3.1 The Filon method for (1.8)

Let $\{t_{j}\}_{j=0}^{N}$ be a set of nodal points such that $0=t_0<t_1<t_2<\cdots<t_N=1$, and let

$$L\bigl[y(0),y(t_j)\bigr]=y(0)+\frac{y(t_j)-y(0)}{t_j}t $$

denote the linear interpolant between $y(0)$ and $y(t_j)$. Since $y(0)=f(0)$ (cf. (1.8)) it follows that for $j=1,2,\ldots,N$,

(3.1)

We use this representation to introduce the Filon approximation scheme

$$ y_j+\int_0^{t_j}\frac{J_m(\omega (t_j-t))}{(t_j-t)^{\alpha}}L \bigl[y(0),y_j\bigr]dt=y_j+\int_0^{t_j} \frac {J_m(\omega t)}{t^{\alpha}}L\bigl[y_j,y(0)\bigr]dt =f(t_j) $$
(3.2)

($j=1,2,\ldots,N$), where $y_j$ denotes an approximation of $y(t_j)$. This approximation is given by

$$ y_j=\frac{t_jf(t_j)- f(0)I[1-\alpha,m,\omega,t_j]}{t_j+ t_jI[-\alpha,m,\omega,t_j]- I[1-\alpha,m,\omega,t_j]},\quad j=1,2,\ldots,N , $$
(3.3)

where $I[\mu,m,\omega,t_j]$ denotes the moment

$$ I[\mu,m,\omega,z]=\int_0^{z}t^{\mu}J_m(\omega t)dt, $$
(3.4)

which can be expressed in closed form in terms of the gamma function $\varGamma(z)$ and the Lommel function of the second kind $s_{\mu,\nu}^{(2)}(z)$ [1, 12, 22, 28].

The moment I[μ,m,ω,z] can be efficiently calculated [29, 30]. Note that \(s_{\mu,\nu}^{(2)}(z)\) admits the following asymptotic expansion (cf. [28, p. 351-352]):

(3.5)

Therefore, \(s_{\mu,\nu}^{(2)}(z)\) can be efficiently approximated by a few terms of

(3.6)

when z≫max{μ,ν}. In this paper, for z≥50, the moment is computed using (3.4), by truncating after the first 10 terms of (3.5). For z=ωb<50, we use

with the first 60 truncated terms [29, 30].
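The resulting scheme is inexpensive: each $y_j$ in (3.3) requires only the two moments $I[-\alpha,m,\omega,t_j]$ and $I[1-\alpha,m,\omega,t_j]$. The sketch below (ours, not the authors' code) assembles (3.3) with the moments obtained by adaptive quadrature at a moderate frequency; for very large $\omega$ one would switch to the Lommel-function formulas discussed above.

```python
# Sketch (ours) of the Filon scheme (3.3) for
# y(x) + int_0^x t^(-alpha) J_m(w t) y(x - t) dt = f(x).
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

def moment(mu, m, w, z):
    # I[mu, m, w, z] = int_0^z t^mu J_m(w t) dt (adaptive quadrature here)
    val, _ = quad(lambda t: t ** mu * jv(m, w * t), 0.0, z, limit=4000)
    return val

def filon_values(f, m, alpha, w, nodes):
    y = []
    for tj in nodes:
        I0 = moment(-alpha, m, w, tj)          # I[-alpha, m, w, t_j]
        I1 = moment(1.0 - alpha, m, w, tj)     # I[1-alpha, m, w, t_j]
        y.append((tj * f(tj) - f(0.0) * I1) / (tj + tj * I0 - I1))
    return np.array(y)

f, m, alpha, w = np.sin, 0.0, 0.5, 300.0
nodes = np.linspace(0.1, 1.0, 10)
for tj, yj in zip(nodes, filon_values(f, m, alpha, w, nodes)):
    print(tj, yj, f(tj))       # y_j stays close to f(t_j), cf. Corollary 2.1
```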

Theorem 3.1

Suppose that $f\in C^1[0,1]$ and $0\le\alpha<1$. Then the error estimate for the Filon method for (1.8) is

$$ y_j-y(t_j)= O \biggl(\frac{1}{\omega^{1-\alpha}} \biggr). $$
(3.7)

Furthermore, if $f(0)=0$ and $f\in C^2[0,1]$, the error estimate for the Filon method for (1.8) is

$$ y_j-y(t_j)= O \biggl(\frac{1}{\omega^{2-\alpha}} \biggr). $$
(3.8)

Proof

It follows from (3.1)–(3.3) that

(3.9)

Applying Lemma 2.2 and Theorem 2.1 we immediately obtain (3.7), since

$$E(t)|_{t=t_j}:=y(t_j-t)-y(t_j)+ \frac{(y(t_j)-y(0))t}{t_j}\bigg|_{t=t_j}=0, $$

and E′(t) is uniformly bounded for t∈[0,1] and large values of ω.

In particular, if $f(0)=0$ and $f\in C^2[0,1]$, then by (1.8) and $y'(0)=f'(0)$ we find (similarly to the proof of Theorem 2.1) that $y\in C^1[0,1]$ and $|y''(x)|\le C_{\alpha}x^{-\alpha}$, where

$$y''(x)+\int_0^x \frac{J_m(\omega t)}{t^{\alpha}}y''(x-t)dt=f''(x)- \frac{J_m(\omega x)}{x^{\alpha}}y'(0). $$

Integrating by parts and noting that $E(0)=E(t_j)=0$ leads to

From the definition of E(t), we see that

$$E''(t)=y''(t_j-t)=O \bigl((t_j-t)^{-\alpha}\bigr),\quad\ \ \frac{E(t)}{t}= \frac{y(t_j-t)-y(t_j)}{t}+\frac{y(t_j)-y(0)}{t_j}, $$

and \(\frac{E(t)}{t}|_{t=0}=\lim_{t\rightarrow 0}\frac{E(t)}{t}=-y'(t_{j})+\frac{y(t_{j})-y(0)}{t_{j}}\) and

$$\biggl(\frac{E(t)}{t} \biggr)'=-\frac{y'(t_j-t)t+y(t_j-t)-y(t_j)}{t^2}\quad \mbox{for $t\not=0$},\quad \biggl(\frac{E(t)}{t} \biggr)'\bigg|_{t=0}= \frac{y''(t_j)}{2}, $$

which yields

$$\biggl[E'(t)-\frac{(1+\alpha+m)E(t)}{t}\biggr]'=O \bigl((t_j-t)^{-\alpha}\bigr). $$

This, together with (3.9) and Lemma 2.2 establishes the desired result. □

3.2 Piecewise constant and linear collocation methods

A direct improvement of the Filon method is the composite Filon method, that is, the sum of $j$ Filon approximations over the subintervals $[0,t_1],\ldots,[t_{j-1},t_j]$. The resulting method coincides with the continuous piecewise linear collocation method. An alternative to the continuous linear collocation method is the piecewise constant collocation method.

Suppose that

$$I_{\Delta}=\{t_j: j=0,1,\ldots,N\} $$

and \(\hat{y}(x)\) is an approximation of y(x) such that

$$\hat{y}(x)|_{(t_j,t_{j+1}]} \quad \left \{ \begin{array}{l} \mbox{is a constant for $j=0,1,\ldots,N-1$}\\[4pt] \mbox{is linear for $j=0,1,\ldots,N-1$} \end{array} \right . $$

satisfying

$$\hat{y}(t_j)+\int_0^{t_j} \frac{J_m(\omega t)}{t^{\alpha}}\hat{y}(t_j-t)dt=f(t_j),\quad j=1,2, \ldots,N. $$

This leads to the piecewise constant collocation method

(3.10)

and the continuous linear collocation method

(3.11a)
(3.11b)

respectively, where $Q_1$ and $Q_2$ denote the moments of the kernel over the individual subintervals (cf. (3.4)).
Theorem 3.2

Suppose that $f\in C^1[0,1]$, $0\le\alpha<1$ and $\{t_j\}$ are uniform mesh points with $h=1/N$. Then the error bound for the above collocation methods is

$$\max_{1\le j\le N}\big|\hat{y}_j-y(t_j)\big|=O \biggl( \frac{h^{1-\alpha}}{\omega^{1-\alpha }} \biggr). $$

Proof

For the piecewise constant collocation method, $y(t_j)$ satisfies

which, together with (3.10), yields

Here, \(\mathcal{E}_{j}=y(t_{j})-\hat{y}_{j}\). Using \(y(t_{j-i+1})-y(t_{j}-t)|_{t=t_{i-1}}=0\), \(\int_{t_{i-1}}^{t_{i}}t^{-\alpha}dt=O(h^{1-\alpha})\) and Lemma 2.2, we find

$$|\mathcal{E}_j|=O \biggl(\frac{h^{1-\alpha}}{\omega^{1-\alpha}} \biggr)+O \biggl( \frac{1}{\omega^{1-\alpha}} \biggr) \sum_{i=1}^{j-1}h^{1-\alpha}| \mathcal{E}_i| , $$

and the desired result is then found by employing the generalized discrete Gronwall inequality (cf. [6, p. 95]).

Similar arguments can be applied to the linear collocation method (3.11a)–(3.11b). □
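For completeness, here is one natural realization (a sketch, ours; it need not coincide in every detail with (3.10)) of the piecewise constant collocation method on a uniform mesh: the subinterval moments are computed by adaptive quadrature and the resulting lower-triangular collocation system is solved by forward substitution.

```python
# Piecewise constant collocation (sketch, ours) for
# y(x) + int_0^x t^(-alpha) J_m(w t) y(x - t) dt = f(x) on a uniform mesh:
# y_hat is constant on each (t_{i-1}, t_i], and collocation at t_j gives a
# lower-triangular system solved by forward substitution.
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

def pc_collocation(f, m, alpha, w, N):
    t = np.linspace(0.0, 1.0, N + 1)
    # mu[i] = int_{t_i}^{t_{i+1}} s^(-alpha) J_m(w s) ds
    mu = np.array([quad(lambda s: s ** (-alpha) * jv(m, w * s),
                        t[i], t[i + 1], limit=2000)[0] for i in range(N)])
    y = np.zeros(N + 1)
    y[0] = f(0.0)                              # y(0) = f(0), cf. (1.8)
    for j in range(1, N + 1):
        # y_j + sum_{i=1}^{j} mu[i-1] * y_{j-i+1} = f(t_j)
        rhs = f(t[j]) - sum(mu[i - 1] * y[j - i + 1] for i in range(2, j + 1))
        y[j] = rhs / (1.0 + mu[0])
    return t, y

f, m, alpha, w = np.exp, 0.5, 0.5, 200.0
_, y_coarse = pc_collocation(f, m, alpha, w, 10)
_, y_fine = pc_collocation(f, m, alpha, w, 40)
print(np.abs(y_coarse[1:] - y_fine[4::4]).max())   # small, consistent with Theorem 3.2
```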

4 Numerical examples

We now illustrate the proposed methods numerically for the test equations listed in Tables 1–6. Here $Q_A[y(x)]=f(x)$ denotes the asymptotic approximation (2.9), $Q_{N}^{F}$ the Filon method (3.3), $Q_{A}^{2}=f(x)-\int_{0}^{x}\frac{J_{m}(\omega t)}{t^{\alpha}}f(x-t)dt$ the approximation based on (2.8), in which the integral $\int_{0}^{x}\frac{J_{m}(\omega t)}{t^{\alpha}}f(x-t)dt$ is computed by the two-point Filon method, $Q_{N}^{L,0}$ the piecewise constant collocation method (3.10), and $Q_{N}^{L,1}$ the linear collocation method (3.11a)–(3.11b).

Table 1 Approximations at $x=0.1,0.5,1$ for \(y(x)+\int_{0}^{x}\frac{J_{0}(\omega t)}{\sqrt{t}}y(x-t)dt=\sin(x)\) with $\omega=10^4$
Table 2 Approximations at $x=0.1,0.5,1$ for \(y(x)+\int_{0}^{x}\frac{J_{0}(\omega t)}{\sqrt{t}}y(x-t)dt=\sin(x)\) with $\omega=10^8$
Table 3 Approximations at $x=1$ for $\alpha=0.1,0.5,0.8$ for \(y(x)+\int_{0}^{x}\frac{J_{0.5}(\omega t)}{t^{\alpha}}y(x-t)dt=e^{x}\) with $\omega=10^4$
Table 4 Approximations at $x=1$ for $\alpha=0.1,0.5,0.8$ for \(y(x)+\int_{0}^{x}\frac{J_{0.5}(\omega t)}{t^{\alpha}}y(x-t)dt=e^{x}\) with $\omega=10^8$
Table 5 Approximations at $x=1$ for $m=0.1,1.1,2.1$ for \(y(x)+\int_{0}^{x}J_{m}(\omega t)y(x-t)dt=x\) with $\omega=10^4$
Table 6 Approximations at $x=1$ for $m=0.1,1.1,2.1$ for \(y(x)+\int_{0}^{x}J_{m}(\omega t)y(x-t)dt=x\) with $\omega=10^8$

Figures 3, 4 and 5 illustrate the asymptotics of Theorems 3.1–3.2 for $f(x)=e^x$ and $f(x)=x$ with $\alpha=0.1$ and $\alpha=0.8$, respectively. Here, the reference solution $Q_{100}^{L,1}$ is used in place of the exact solution to compute the errors for large values of $\omega$.

Fig. 3

The absolute errors for the Filon method \(Q_{10}^{F}\) scaled by $\omega^{1-\alpha}$ and $\omega^{2-\alpha}$ for \(y(x)+ \int_{0}^{x}\frac{J_{0.5}(\omega t)}{t^{\alpha}}y(x-t)dt=f(x)\) with $f(x)=e^x$ and $f(x)=x$, respectively; $\omega$ from $10^6$ to $10^8$

Fig. 4

The absolute errors for \(Q_{10}^{L,0}\) and \(Q_{10}^{L,1}\) scaled by $\omega^{1-\alpha}$ for \(y(x)+\int_{0}^{x}\frac{J_{0.5}(\omega t)}{t^{\alpha}}y(x-t)dt=e^{x}\) with $\alpha=0.1$ and $\alpha=0.8$, respectively; $\omega$ from $10^6$ to $10^8$

Fig. 5

The absolute errors for \(Q_{10}^{L,0}\) and \(Q_{10}^{L,1}\) scaled by $\omega^{1-\alpha}$ for \(y(x)+\int_{0}^{x}\frac{J_{0.5}(\omega t)}{t^{\alpha}}y(x-t)dt=x\) with $\alpha=0.1$ and $\alpha=0.8$, respectively; $\omega$ from $10^6$ to $10^8$

5 Final remarks

The standard quadrature method, the collocation method and the discontinuous Galerkin method [4, 6, 8, 14, 21, 23] are not feasible for the numerical approximation of Volterra integral equations containing highly oscillatory kernels, since the computation of the highly oscillatory integrals by standard quadrature methods is exceedingly difficult and the cost steeply increases with the frequency.

This paper presents efficient numerical methods, based on Filon and on piecewise constant and piecewise linear collocation techniques, for the approximation of weakly singular Volterra integral equations with highly oscillatory Bessel kernels; their computational cost remains the same regardless of the size of the frequency. Based on the asymptotics of the solutions, some simpler formulas for approximating the solutions for large values of $\omega$ are derived. A broad sample of numerical results confirms that these methods are efficient and that they become more accurate as the frequency increases.

Moreover, all the algorithms of Sect. 3 may be applied directly to

$$ \int_0^x\frac{J_m(\omega(x-t))}{(x-t)^{\alpha}}y(t)dt=f(x),\quad x \in[0,1],\quad0\le\alpha<1 . $$
(5.1)

Following Sect. 3, the Filon and collocation methods for (5.1) are defined as follows:

  • Filon method:

    $$ y_j=\frac{t_jf(t_j)-f(0)I[1-\alpha,m,\omega,t_j]}{ t_jI[-\alpha,m,\omega,t_j]- I[1-\alpha,m,\omega,t_j]},\quad j=1,2,\ldots,n $$
    (5.2)
  • piecewise constant collocation method:

    (5.3)
  • linear continuous collocation method:

    (5.4a)
    (5.4b)

    where $Q_1$ and $Q_2$ are the same as those in Sect. 3.

We now illustrate the proposed methods numerically, for $x\in[0,1]$, on the equation

$${\int_0^xJ_0(\omega t)y(x-t)dt=f(x),\quad f(0)=0,} $$

whose solution can be represented by

$$y(x)=f'(x)-\omega\int_0^xf'(x-t)J_1( \omega t)dt+\omega^2\int_0^xf(x-t)J_0( \omega t)dt, $$

This representation can be evaluated, for $\omega=10^4$, by Clenshaw–Curtis quadrature with $10^6$ shifted Chebyshev points in $[0,1]$ for each fixed $x\in(0,1]$ (see Tables 7 and 8). Here $Q_{N}^{F}$ denotes the Filon-type method (5.2), $Q_{N}^{L,0}$ the piecewise constant collocation method (5.3), and $Q_{N}^{L,1}$ the linear collocation method (5.4a)–(5.4b).
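The representation above is easy to verify at a moderate frequency; the Python sketch below (ours; $\omega=50$, $f(x)=\sin x$ and adaptive quadrature in place of Clenshaw–Curtis are illustrative choices) evaluates $y$ from the formula and substitutes it back into the first-kind equation.

```python
# Check (ours) of the stated solution representation for
# int_0^x J_0(w t) y(x - t) dt = f(x) with f(x) = sin(x):
# evaluate y from the formula and substitute it back into the equation.
import numpy as np
from scipy.integrate import quad
from scipy.special import j0, j1

w = 50.0
f, fp = np.sin, np.cos     # f and f'

def y(x):
    a, _ = quad(lambda t: fp(x - t) * j1(w * t), 0.0, x, limit=2000)
    b, _ = quad(lambda t: f(x - t) * j0(w * t), 0.0, x, limit=2000)
    return fp(x) - w * a + w ** 2 * b

for x in [0.25, 0.5, 1.0]:
    lhs, _ = quad(lambda t: j0(w * t) * y(x - t), 0.0, x, limit=2000)
    print(x, abs(lhs - f(x)))          # residual is at the quadrature-error level
```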

Table 7 Relative errors at $x=0.1,0.5,1$ for \(\int_{0}^{x}J_{0}(\omega t)y(x-t)dt=\sin(x)\) with $\omega=10^4$
Table 8 Relative errors at $x=0.1,0.5,1$ for \(\int_{0}^{x}J_{0}(\omega t)y(x-t)dt=\ln(1+x)\) with $\omega=10^4$

In future work, we will study improved methods for the motivating problems in Sect. 1 as well as for Fredholm integral equations, and we will derive error bounds for the numerical schemes for the above Volterra integral equation of the first kind.