1 Introduction

The paper deals with the approximation of integrals of the type

$$\begin{aligned} I(f;{\mathbf {t}})=\int _{{\mathrm {D}}} f({\mathbf {x}}) {\mathbf {K}}({\mathbf {x}},{\mathbf {t}}) {\mathbf {w}}({\mathbf {x}}) d{\mathbf {x}},\quad \quad {\mathbf {x}}=(x_1,x_2),\quad {\mathbf {t}}\in \mathrm {T}\subseteq \mathbb {R}^p, \ p\in \{1,2\},\nonumber \\ \end{aligned}$$
(1)

where f is a sufficiently smooth function inside \({\mathrm {D}}:=[-\,1,1]^2\) with possible algebraic singularities along the border \(\partial {\mathrm {D}}\), \({\mathbf {w}}\) is the product of two Jacobi weight functions, and \({\mathbf {K}}\) is one of the following kernels

$$\begin{aligned} {\mathbf {K}}_1({\mathbf {x}},\omega )= & {} \frac{1}{(|{\mathbf {x}}-{\mathbf {x}}_0|^2+\omega ^{-1})^\lambda },\quad \lambda \in \mathbb {R}^+,\quad {\mathbf {x}}_0=(s_0,t_0)\in {\mathrm {D}},\nonumber \\ {\mathbf {K}}_2({\mathbf {x}},\omega )= & {} g(\omega , {\mathbf {x}}), \end{aligned}$$
(2)

where \(0\ne \omega \in \mathbb {R}\) and g is an oscillatory smooth function with frequency \(\omega \). Moreover, we will also consider kernels obtained as combinations of the types \({\mathbf {K}}_1,\) \({\mathbf {K}}_2\). In particular, in what follows we will also consider

$$\begin{aligned} {\mathbf {K}}_3({\mathbf {x}},\omega )= {\mathbf {K}}_1({\mathbf {x}},\omega ){\mathbf {K}}_2({\mathbf {x}},\omega ). \end{aligned}$$
(3)

The numerical evaluation of these integrals presents difficulties for “large” \(\omega ,\) since \({\mathbf {K}}_1\) is close to being singular, \({\mathbf {K}}_2\) oscillates rapidly, and \({\mathbf {K}}_3\) combines both of the aforesaid problematic behaviors. In all cases, the modulus of the derivatives of these kernels grows as \(\omega \) grows.

\({\mathbf {K}}_1\)-type kernels appear, for instance, in two-dimensional nearly singular BEM integrals on quadrilateral elements (see for instance [8, 13]). Highly oscillating kernels of the type \({\mathbf {K}}_2\) are useful in computational methods for oscillatory phenomena in science and engineering problems, including wave scattering, wave propagation, quantum mechanics, signal processing and image recognition (see [7] and references therein). The combination of the two aspects, i.e. integrals with nearly singular and oscillating kernels, appears for instance in the solution of problems of propagation in uniform waveguides with non-perfect conductors (see [5] and the references therein).

Here we propose a “product cubature formula” obtained by replacing the “regular” function f by a bivariate Lagrange polynomial based on a set of knots chosen to assure the stability and the convergence of the rule. Despite the simplicity of these formulas, the computation of their coefficients is not an easy task. In the analogous univariate case, to compute the corresponding coefficients one needs to determine “modified moments” by means of recurrence relations, and to examine the stability of the latter (see for instance [6, 9, 12, 17, 20]). This approach, however, does not appear feasible for multivariate non-degenerate kernels. Here we present a unified approach for computing the coefficients of the aforesaid cubature rule when \({\mathbf {K}}\) belongs to the types (2) and (3). This method, which we call the 2D-dilation method, is based on a preliminary “dilation” of the domain and, by suitable transformations, on the subsequent reduction of the initial integral to a sum of integrals on \({\mathrm {D}}\) again. These manipulations “relax” in some sense the “too fast” behavior of \({\mathbf {K}}\) as \(\omega \) grows. For a correct use of the 2D-dilation method, which can also be applied directly to compute integrals with kernels of the types (2) and (3), we determine conditions under which the rule is stable and convergent.

Both rules have advantages and drawbacks. The product integration rule requires a smaller number of evaluations of the integrand function f, while the number of samples involved in the 2D-dilation rule increases as \(\omega \) increases. On the other hand, the product rule involves the computation of \(m^2\) coefficients, which are integrals, and for this reason its computational cost can in general be excessively high. However, as we will show in Sect. 4.2.1, this cost can be drastically reduced when the kernels present some symmetries.

We point out that many of the existing methods for the approximation of multivariate integrals are reliable only for very smooth functions (see [1,2,3, 7, 19, 21] and references therein). Some of them treat degenerate kernels [18], others require changes of variable generally not adequate for weighted integrands [7, 8]. Our procedure allows the computation of weighted integrals with non-degenerate, oscillating and/or nearly singular kernels.

The paper is organized as follows. After some notations and preliminary results stated in Sect. 2, Sect. 3 contains the product cubature rule together with results on its stability and convergence for a wide class of kernels. In Sect. 4 we describe the 2D-dilation rule in a general form, proving results about the stability and the rate of convergence of the error. Moreover, we give some details for computing the coefficients of the product cubature rule with kernels as in (18). In Sect. 5 we present some numerical examples, where our results are compared with those achieved by other methods. In Sect. 6 we suggest some criteria for the choice of the stretching parameter in the 2D-dilation formula, and in Sect. 7 we propose a test comparing our cubature formulae with respect to their CPU time. Section 8 is devoted to the proofs of the main results. Finally, the “Appendix” provides a detailed exposition of the 2D-dilation formula introduced in Sect. 4.

2 Notations and tools

Throughout the paper, \(\mathcal {C}\) will denote a positive constant which may have different meanings in different formulas. We will write \(\mathcal {C} \ne \mathcal {C}(a,b,\ldots )\) to say that \(\mathcal {C}\) is independent of the parameters \(a,b,\ldots \), and \(\mathcal {C} = \mathcal {C}(a,b,\ldots )\) to say that it depends on them. Moreover, if \(A,B > 0\) are quantities depending on some parameters, we write \(A \sim B\) if there exists a constant \(0<\mathcal {C}\ne {\mathcal {C}}(A,B)\) s.t.

$$\begin{aligned} \frac{B}{\mathcal {C}} \le A \le \mathcal {C} B. \end{aligned}$$

\(\mathbb {P}_m\) denotes the space of the univariate algebraic polynomials of degree at most m and \(\mathbb {P}_{m,m}\) the space of algebraic bivariate polynomials of degree at most m in each variable.

In what follows we use the notation \(v^{\alpha ,\beta }\) for a Jacobi weight of parameters \(\alpha ,\beta \), i.e. \(v^{\alpha ,\beta }(z):=(1-z)^\alpha (1+z)^\beta , \ z\in (-\,1,1), \ \alpha ,\beta >-\,1.\)

Throughout the paper we set

$$\begin{aligned}&{\mathrm {D}}:=[-\,1,1]^2,\quad {\dot{\mathrm {D}}}={\mathrm {D}}\backslash \partial {\mathrm {D}}=(-\,1,1)^2,\\&{\mathbf {x}}=(x_1,x_2), \quad {\mathbf {t}}=(t_1,t_2)\in \mathrm {T}\subseteq \mathbb {R}^p, \quad p\in \{1,2\}. \end{aligned}$$

Finally, for integers \(a\le b\) we will denote by \(N_a^b\) the set \(\{a, a+1, \dots , b\}\); in particular \(N_1^m=\{1, 2, \dots , m\}\).

2.1 Function spaces

From now on we set

$$\begin{aligned} \varvec{\sigma }({\mathbf {x}})=\varvec{\sigma }(x_1,x_2)=v^{\gamma _1,\delta _1}(x_1) v^{\gamma _2,\delta _2}(x_2)=:\sigma _1(x_1)\sigma _2(x_2), \end{aligned}$$
(4)

with \(\gamma _i,\delta _i \ge 0, i=1,2.\) Define

$$\begin{aligned} C_{\varvec{\sigma }}=\left\{ f\in C({\dot{\mathrm {D}}}): \lim _{{\mathbf {x}}\rightarrow \partial {\mathrm {D}}}(\varvec{\sigma }f)({\mathbf {x}})=0\right\} , \end{aligned}$$

equipped with the norm

$$\begin{aligned} \Vert f\Vert _{C_{\varvec{\sigma }}}=\Vert f\varvec{\sigma }\Vert _\infty =\max _{{\mathbf {x}}\in {\mathrm {D}}}|f({\mathbf {x}})| \varvec{\sigma }({\mathbf {x}}). \end{aligned}$$

Whenever one or more of the parameters \(\gamma _1,\delta _1,\gamma _2,\delta _2\) are greater than 0, functions in \(C_{\varvec{\sigma }}\) can be singular on one or more sides of \({\mathrm {D}}.\) In the case \(\gamma _1=\delta _1=\gamma _2=\delta _2=0\) we set \(C_{\varvec{\sigma }}=C({\mathrm {D}})\).

For smoother functions, i.e. for functions having some derivatives which can be discontinuous on \(\partial {\mathrm {D}}\), for \(r\ge 1\) we introduce the following Sobolev–type space

$$\begin{aligned} W_r(\varvec{\sigma })=\left\{ f \in C_{\varvec{\sigma }}:\mathcal {M}_r(f,\varvec{\sigma }):=\max \left\{ \left\| \frac{\partial ^r f}{\partial x_1^r}\varphi _1^r \varvec{\sigma }\right\| _\infty , \left\| \frac{\partial ^r f}{\partial x_2^r}\varphi _2^r \varvec{\sigma }\right\| _\infty \right\} <\infty \right\} , \end{aligned}$$
(5)

where \(\varphi _1(x_1)=\sqrt{1-x_1^2}\), \(\varphi _2(x_2)=\sqrt{1-x_2^2}\). We equip \(W_r(\varvec{\sigma })\) with the norm

$$\begin{aligned} \Vert f\Vert _{W_r(\varvec{\sigma })}=\Vert f\varvec{\sigma }\Vert _\infty +\mathcal {M}_r(f,\varvec{\sigma }). \end{aligned}$$

Denote by \(\displaystyle E_{m,m}(f)_{\varvec{\sigma }}\) the error of best polynomial approximation in \(C_{\varvec{\sigma }}\) by means of polynomials in \(\mathbb {P}_{m,m}\)

$$\begin{aligned} \displaystyle E_{m,m}(f)_{\varvec{\sigma }}=\inf _{P\in \mathbb {P}_{m,m}} \Vert (f-P)\varvec{\sigma }\Vert _{\infty }. \end{aligned}$$

In [15] (see also [11]) it was proved that for any \(f\in W_r(\varvec{\sigma })\)

$$\begin{aligned} E_{m,m}(f)_{\varvec{\sigma }}\le \mathcal {C} \frac{\mathcal {M}_r(f,\varvec{\sigma })}{m^r}, \end{aligned}$$
(6)

where \(0<{\mathcal {C}}\ne {\mathcal {C}}(m,f)\) and \(\mathcal {M}_r(f,\varvec{\sigma })\) is defined in (5).

For \(f,g\in C_{\varvec{\sigma }}\), the following inequality can be easily proved

$$\begin{aligned} E_{m,m}(fg)_{\varvec{\sigma }}\le \Vert g\varvec{\sigma }\Vert _\infty E_{M,M}(f)_{\varvec{\sigma }}+\Vert f\varvec{\sigma }\Vert _\infty E_{M,M}(g)_{\varvec{\sigma }}, \quad f\in W_r(\varvec{\sigma }), \end{aligned}$$
(7)

where \(M=\left\lfloor \frac{m}{2}\right\rfloor .\)

In what follows we use the notation

$$\begin{aligned} \Vert F\varvec{\sigma }\Vert _{\infty ,A}:=\max _{(x,y)\in A} |F(x,y)|\varvec{\sigma }(x,y), \end{aligned}$$

where \(A\subseteq {\mathrm {D}}\) will be omitted if \(A\equiv {\mathrm {D}}\).

Finally, let us denote by \(L^p_{\varvec{\sigma }}({\mathrm {D}})\), \(1\le p<\infty \) the collection of the functions \(f({\mathbf {x}})\) defined on \({\mathrm {D}}\) such that \(\Vert f\varvec{\sigma }\Vert _p=\left( \int _{{\mathrm {D}}} |f({\mathbf {x}})\varvec{\sigma }({\mathbf {x}})|^p \ d{\mathbf {x}}\right) ^\frac{1}{p}<\infty \). For \(\varvec{\sigma }\equiv 1\), \(L^p_{\varvec{\sigma }}({\mathrm {D}})=L^p({\mathrm {D}})\).

2.2 Gauss–Jacobi cubature rules

For any Jacobi weight \(u=v^{\alpha ,\beta }\), let \(\{p_m(u)\}_{m=0}^\infty \) be the corresponding sequence of orthonormal polynomials with positive leading coefficients, i.e.

$$\begin{aligned} p_m(u,x)=\gamma _m(u)x^m+ \ \text { terms of lower degree},\quad \gamma _m(u)>0, \end{aligned}$$

and let \(\{\xi _i^{u}\}_{i=1}^m\) be the zeros of \(p_m(u)\).

From now on we set

$$\begin{aligned} {\mathbf {w}}({\mathbf {x}})=w_1(x_1)w_2(x_2), \quad w_1(x_1)=v^{\alpha _1,\beta _1}(x_1), \quad w_2(x_2)=v^{\alpha _2,\beta _2}(x_2), \end{aligned}$$
(8)

with \(\alpha _1,\beta _1,\alpha _2, \beta _2 >-\,1.\)

Consider now the tensor-product Gauss–Jacobi rule,

$$\begin{aligned} \int _{\mathrm {D}}f({\mathbf {x}}){\mathbf {w}}({\mathbf {x}})\ d{\mathbf {x}}= & {} \sum _{i=1}^m \sum _{j=1}^m \lambda _i^{w_1} \lambda _j^{w_2} f(\xi _{i,j}^{w_1,w_2})+ \mathcal {R}_{m,m}^{\mathcal {G}}(f)\nonumber \\= & {} \mathcal {G}_{m,m}^{w_1,w_2}(f)+\mathcal {R}_{m,m}^{\mathcal {G}}(f), \end{aligned}$$
(9)

where \(\xi _{i,j}^{w_1,w_2}:=(\xi _i^{w_1},\xi _j^{w_2}),\) \(\lambda _i^{w_j}\), \(i=1,\ldots ,m,\) denote the Christoffel numbers w.r.t. \(w_j,\ j=1,2\), and \(\mathcal {R}_{m,m}^{\mathcal {G}}(P)=0\) for any \(P\in \mathbb {P}_{2m-1,2m-1}\).
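As a concrete illustration, the tensor-product rule (9) can be sketched in a few lines of Python via `scipy.special.roots_jacobi`, which returns the zeros \(\xi _i^{u}\) of \(p_m(u)\) together with the Christoffel numbers \(\lambda _i^{u}\) (the function name `gauss_jacobi_2d` is ours, not from the paper):

```python
import numpy as np
from scipy.special import roots_jacobi

def gauss_jacobi_2d(f, m, a1, b1, a2, b2):
    # Zeros xi_i^{w_k} of p_m(w_k) and Christoffel numbers lambda_i^{w_k}
    # for the Jacobi weights w_k = v^{a_k, b_k}, k = 1, 2.
    x1, lam1 = roots_jacobi(m, a1, b1)
    x2, lam2 = roots_jacobi(m, a2, b2)
    X1, X2 = np.meshgrid(x1, x2, indexing="ij")
    # Tensor-product rule (9): sum_{i,j} lambda_i lambda_j f(xi_i, xi_j).
    return float(np.sum(np.outer(lam1, lam2) * f(X1, X2)))
```

Since \(\mathcal {R}_{m,m}^{\mathcal {G}}(P)=0\) for \(P\in \mathbb {P}_{2m-1,2m-1}\), the \(m=2\) Legendre rule (\(\alpha _k=\beta _k=0\)) already reproduces \(\int _{\mathrm {D}} x_1^2x_2^2\,d{\mathbf {x}}=4/9\) exactly.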

For its remainder term, the following bound holds (see [15]).

Proposition 1

Let \(f\in C_{\varvec{\sigma }}\). Under the assumption

$$\begin{aligned} \frac{{\mathbf {w}}}{\varvec{\sigma }}\in L^1({\mathrm {D}}), \end{aligned}$$

we have

$$\begin{aligned} \left| \mathcal {R}_{m,m}^{\mathcal {G}}(f)\right| \le \mathcal {C} E_{2m-1,2m-1}(f)_{\varvec{\sigma }} , \end{aligned}$$
(10)

where \(\mathcal {C}\ne {\mathcal {C}}(m,f)\).

2.3 Bivariate Lagrange interpolation

Let \(\mathcal {L}_{m,m}({\mathbf {w}}, f,{\mathbf {x}})\) be the bivariate Lagrange polynomial interpolating a given function f at the grid points \(\{\xi _{i,j}^{w_1,w_2}, \ (i,j)\in N_1^m\times N_1^m\}\), i.e.

$$\begin{aligned} \mathcal {L}_{m,m}\big ({\mathbf {w}}, f,\xi _{i,j}^{w_1,w_2}\big )=f\big (\xi _{i,j}^{w_1,w_2}\big ),\quad (i,j)\in N_1^m\times N_1^m. \end{aligned}$$

The polynomial \(\mathcal {L}_{m,m}({\mathbf {w}}, f)\in \mathbb {P}_{m-1,m-1}\) and \(\mathcal {L}_{m,m}({\mathbf {w}}, P)= P\), for any \(P\in \mathbb {P}_{m-1,m-1}\). An expression of \(\mathcal {L}_{m,m}({\mathbf {w}}, f)\) is

$$\begin{aligned}&\mathcal {L}_{m,m}({\mathbf {w}}, f,{\mathbf {x}})= \displaystyle \sum _{i=1}^m \sum _{j=1}^m\mathbf {\ell }_{i,j}^{w_1,w_2}({\mathbf {x}}) f(\xi _{i,j}^{w_1,w_2}),\\&\quad \mathbf {\ell }_{i,j}^{w_1,w_2}({\mathbf {x}})=\ell _i^{w_1}(x_1)\ell _j^{w_2}(x_2), \ \ \ \ell _i^{w_k}(z)=\displaystyle \frac{p_m(w_k,z)}{p'_m\big (w_k,\xi _i^{w_k}\big )\big ( z -\xi _i^{w_k}\big )}, \qquad k=1,2. \end{aligned}$$
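A minimal sketch of \(\mathcal {L}_{m,m}({\mathbf {w}}, f)\) follows (names ours; default Legendre case \(w_1=w_2=1\)). For numerical convenience the fundamental polynomials are evaluated through the equivalent product form \(\ell _i(z)=\prod _{k\ne i}(z-\xi _k)/(\xi _i-\xi _k)\), which coincides with the ratio \(p_m(w,z)/\big (p'_m(w,\xi _i)(z-\xi _i)\big )\) above:

```python
import numpy as np
from scipy.special import roots_jacobi

def lagrange_2d(f, m, a1=0.0, b1=0.0, a2=0.0, b2=0.0):
    # Interpolation grid xi_{i,j}^{w1,w2}: zeros of p_m(w1) and p_m(w2).
    x1, _ = roots_jacobi(m, a1, b1)
    x2, _ = roots_jacobi(m, a2, b2)
    F = f(*np.meshgrid(x1, x2, indexing="ij"))

    def ell(nodes, i, z):
        # ell_i(z) = prod_{k != i} (z - xi_k) / (xi_i - xi_k), the same
        # polynomial as p_m(w, z) / (p_m'(w, xi_i) (z - xi_i)).
        others = np.delete(nodes, i)
        return float(np.prod((z - others) / (nodes[i] - others)))

    def L(z1, z2):
        # L_{m,m}(w, f, x) = sum_{i,j} ell_i(x1) ell_j(x2) f(xi_{i,j}).
        e1 = np.array([ell(x1, i, z1) for i in range(m)])
        e2 = np.array([ell(x2, j, z2) for j in range(m)])
        return float(e1 @ F @ e2)

    return L
```

By construction the returned interpolant reproduces any \(P\in \mathbb {P}_{m-1,m-1}\) exactly.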

3 The product cubature rule

Let

$$\begin{aligned} I(f;{\mathbf {t}})=\int _{{\mathrm {D}}} f({\mathbf {x}}) {\mathbf {K}}({\mathbf {x}},{\mathbf {t}}) {\mathbf {w}}({\mathbf {x}}) d{\mathbf {x}}, \end{aligned}$$
(11)

\({\mathbf {w}}=w_1w_2\) defined in (8), \({\mathbf {t}}\in \mathrm {T}\) and \({\mathbf {K}}\) defined in \({\mathrm {D}}\times \mathrm {T}\). By replacing the function f in (11) with the Lagrange polynomial \(\mathcal {L}_{m,m}({\mathbf {w}}, f;{\mathbf {x}})\), we obtain the following product cubature rule

$$\begin{aligned} I(f;{\mathbf {t}})=\sum _{r=1}^m \sum _{s=1}^m A_{r,s}({\mathbf {K}},{\mathbf {t}}) f\left( \xi _{r,s}^{w_1,w_2}\right) +\mathcal {R}^{\varSigma }_{m,m}(f,{\mathbf {t}})=:\varSigma _{m,m}(f,{\mathbf {t}})+ \mathcal {R}_{m,m}^{\varSigma }(f,{\mathbf {t}}) \end{aligned}$$
(12)

where

$$\begin{aligned} A_{r,s}({\mathbf {K}},{\mathbf {t}})=\int _{{\mathrm {D}}} \ell _{r,s}^{w_1,w_2}({\mathbf {x}}){\mathbf {K}}({\mathbf {x}},{\mathbf {t}}){\mathbf {w}}({\mathbf {x}}) d{\mathbf {x}}\end{aligned}$$
(13)

and

$$\begin{aligned} \mathcal {R}_{m,m}^{\varSigma }(f,{\mathbf {t}}):= I(f;{\mathbf {t}})-\varSigma _{m,m}(f,{\mathbf {t}}) \end{aligned}$$

is the remainder term. Concerning the stability and the convergence of the product rule, we can prove the following

Theorem 1

Let \({\mathbf {w}}({\mathbf {x}})=w_1(x_1)w_2(x_2)\) be defined in (8) and \({\mathbf {t}}\in \mathrm {T}\). If there exists a \(\varvec{\sigma }\) as defined in (4) s.t. \(f\in C_{\varvec{\sigma }}\) and the following assumptions are satisfied

$$\begin{aligned}&{\mathbf {K}}(\cdot ,{\mathbf {t}})\sqrt{{\mathbf {w}}}\in L^2({\mathrm {D}}), \end{aligned}$$
(14)
$$\begin{aligned} \frac{{\mathbf {w}}}{\varvec{\sigma }} ,&\frac{\varvec{\sigma }}{\sqrt{{\mathbf {w}}\varphi _1\varphi _2}}, \ \frac{1}{\varvec{\sigma }}\sqrt{\frac{{\mathbf {w}}}{\varphi _1\varphi _2}} \in L^2({\mathrm {D}}), \end{aligned}$$
(15)

then we have

$$\begin{aligned} \sup _m |{\varSigma }_{m,m}(f,{\mathbf {t}})|\le \mathcal {C} \Vert f\varvec{\sigma }\Vert _\infty , \end{aligned}$$
(16)

where \({\mathcal {C}}\ne {\mathcal {C}}(m,f)\). Moreover, the following error estimate holds true

$$\begin{aligned} |\mathcal {R}^\varSigma _{m,m}(f,{\mathbf {t}})|\le \mathcal {C} E_{m-1,m-1}(f)_{\varvec{\sigma }}, \end{aligned}$$
(17)

where \({\mathcal {C}}\ne {\mathcal {C}}(m,f)\).

Remark 1

From (17) it follows that, for \(m\rightarrow \infty ,\) the rate of decay of the product rule error is bounded by that of the error of best polynomial approximation of the function f alone. This appealing speed of convergence holds under the “exact” computation of the coefficients in \({\varSigma }_{m,m}(f,{\mathbf {t}})\). Their (approximate) evaluation is, however, not a simple task; only for kernels having special properties can it be performed with a low computational cost. Details on the computation of the coefficients (13) for some kernels will be given in the next section.
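The following sketch applies the product rule (12) to the smooth integrand \(f({\mathbf {x}})=e^{x_1x_2}\) with the kernel \({\mathbf {K}}_1\) of (2), taking \({\mathbf {x}}_0=(0,0)\), \(\lambda =1\), \(\omega =10^2\), \({\mathbf {w}}\equiv 1\). For illustration only, the coefficients (13) are computed here by a brute-force fine Gauss–Legendre grid, standing in for the 2D-dilation method described in Sect. 4; all function names are ours:

```python
import numpy as np
from scipy.special import roots_legendre

omega = 1.0e2
K1 = lambda x1, x2: 1.0 / (x1**2 + x2**2 + 1.0 / omega)  # x0 = (0,0), lambda = 1
f = lambda x1, x2: np.exp(x1 * x2)

m = 12
xs, _ = roots_legendre(m)              # interpolation nodes (w1 = w2 = 1)

def ell(i, z):
    # Fundamental Lagrange polynomial ell_i at the nodes xs, vectorized in z.
    others = np.delete(xs, i)
    return np.prod((z[:, None] - others) / (xs[i] - others), axis=1)

# Coefficients (13) by a fine tensor Gauss-Legendre grid: a brute-force
# stand-in for the 2D-dilation method of Sect. 4 (illustration only).
z, lam = roots_legendre(200)
Z1, Z2 = np.meshgrid(z, z, indexing="ij")
KW = K1(Z1, Z2) * np.outer(lam, lam)
E = np.array([ell(r, z) for r in range(m)])   # E[r, :] = ell_r(z)
A = np.einsum("ri,sj,ij->rs", E, E, KW)       # A_{r,s}(K1, omega)

# Product rule (12): Sigma_{m,m}(f) = sum_{r,s} A_{r,s} f(xi_{r,s}).
X1, X2 = np.meshgrid(xs, xs, indexing="ij")
Sigma = float(np.sum(A * f(X1, X2)))

# Reference value by an even finer direct quadrature of f * K1.
zr, lr = roots_legendre(400)
R1, R2 = np.meshgrid(zr, zr, indexing="ij")
I_ref = float(np.sum(np.outer(lr, lr) * K1(R1, R2) * f(R1, R2)))
err = abs(Sigma - I_ref)
```

Consistently with (17), only \(m=12\) samples of f per direction are needed here, since the error is governed by \(E_{m-1,m-1}(f)_{\varvec{\sigma }}\) and not by the behavior of the kernel.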

4 The 2D-dilation formula and the computation of the product rule coefficients for some kernels

In this Section we fix our attention on the kernels

$$\begin{aligned} {\mathbf {K}}_1({\mathbf {x}},\omega )= & {} \frac{1}{(|{\mathbf {x}}-{\mathbf {x}}_0|^2+\omega ^{-1})^\lambda },\qquad {\mathbf {x}}_0=(s_0,t_0)\in {\mathrm {D}}, \ \lambda \in \mathbb {R}^+,\nonumber \\ {\mathbf {K}}_2({\mathbf {x}},\omega )= & {} g(\omega {\mathbf {x}}),\nonumber \\ {\mathbf {K}}_3({\mathbf {x}},\omega )= & {} {\mathbf {K}}_1({\mathbf {x}},\omega ){\mathbf {K}}_2({\mathbf {x}},\omega ), \end{aligned}$$
(18)

\(0\ne \omega \in \mathbb {R}, \) under the assumption that the function g is sufficiently smooth. The graphs in Figs. 1, 2, 3, 4, 5 and 6 show the behavior of some kernels of the types (18) for some choices of the parameter \(\omega \).

Fig. 1 Kernel \({\mathbf {K}}_1({\mathbf {x}},\omega )=(x_1^2+x_2^2+\omega ^{-1})^{-1}\) with \(\omega =10^2\)

Fig. 2 Kernel \({\mathbf {K}}_1({\mathbf {x}},\omega )=(x_1^2+x_2^2+\omega ^{-1})^{-1}\) with \(\omega =10^8\)

Fig. 3 Kernel \({\mathbf {K}}_2({\mathbf {x}},\omega )=\sin (\omega x_1x_2)\) with \(\omega =10^8\)

Fig. 4 Kernel \({\mathbf {K}}_2({\mathbf {x}},\omega )=\cos (\omega x_1x_2)\) with \(\omega =10^8\)

Fig. 5 Kernel \({\mathbf {K}}_3({\mathbf {x}},\omega )=\sin (\omega x_1x_2)(x_1^2+x_2^2+\omega ^{-1})^{-1}\) with \(\omega =10^4\)

Fig. 6 Section of the kernel \({\mathbf {K}}_3({\mathbf {x}},\omega )=\sin (\omega x_1x_2)(x_1^2+x_2^2+\omega ^{-1})^{-1}\) with \(\omega =10^4\)

To compute the product rule coefficients (13), we will use the 2D-dilation formula that we describe next. Since the latter can be used in more general cases, we present the dilation formula in a general form, proving its stability and convergence under suitable assumptions. Subsequently, we apply it to the computation of the coefficients \(A_{r,s}({\mathbf {K}}_i,\omega ),\ i=1,2,3\) of the product rule in (12). With regard to this last aspect, we will also show how to reduce the computational cost in some special cases.

4.1 The 2D-dilation formula

To evaluate the integral (1), with \(p=2\) and a kernel of type (18), the main idea is to dilate the integration domain \({\mathrm {D}}\) by a change of variables, in order to relax in some sense the “too fast” behavior of \({\mathbf {K}}\) when \(\omega \) grows. Subsequently, the new domain \(\varOmega \) is divided into \(S^2\) squares \(\left\{ {\mathrm {D}}_{i,j}\right\} _{(i,j)\in N_1^S\times N_1^S}\) and each integral is brought back to \({\mathrm {D}}\) once more. Finally, the integrals are approximated by suitable Gauss–Jacobi rules. A “dilation” technique of this kind has been developed in [16] for 1-dimensional unweighted integrals with the nearly singular kernel of Love's equation, and in [4] for highly oscillating kernels.

Here we describe a dilation method for weighted bivariate integrals having nearly singular kernels, highly oscillating kernels and also for their composition. Consider integrals of the type

$$\begin{aligned} \mathcal {I}(F,\omega )=\int _{{\mathrm {D}}}F({\mathbf {x}}) {\mathbf {K}}({\mathbf {x}},\omega ) {\mathbf {w}}({\mathbf {x}}) d{\mathbf {x}}, \quad \omega \in \mathrm {T}=\mathbb {R},\ \ F\in C_{\varvec{\sigma }}, \end{aligned}$$

where \({\mathbf {K}}({\mathbf {x}},\omega )\) is one of the types in (18). Setting \(\omega _1=|\omega |^\frac{1}{2}\), by the changes of variables \(x_1=\frac{\eta }{\omega _1},\quad x_2=\frac{\theta }{\omega _1},\) we get

$$\begin{aligned} \mathcal {I}(F,\omega )= & {} \omega _1^{-2} \int _{\left[ -\omega _1,\omega _1\right] ^2} F\left( \frac{\eta }{\omega _1}, \frac{\theta }{\omega _1}\right) {\mathbf {K}}\left( \frac{\eta }{\omega _1}, \frac{\theta }{\omega _1},\omega \right) w_1\left( \frac{\eta }{\omega _1}\right) w_2\left( \frac{\theta }{\omega _1}\right) d\eta d\theta \end{aligned}$$

and choosing \(d\in \mathbb {R}^+\) s.t. \(S=\frac{ 2\omega _1}{d}\in \mathbb {N},\) we have

$$\begin{aligned} \mathcal {I}(F,\omega )= & {} \tau _0\sum _{i=1}^S\sum _{j=1}^S\int _{D_{i,j}}F\left( \frac{\eta }{\omega _1}, \frac{\theta }{\omega _1}\right) {\mathbf {K}}\left( \frac{\eta }{\omega _1}, \frac{\theta }{\omega _1},\omega \right) w_1\left( \frac{\eta }{\omega _1}\right) w_2\left( \frac{\theta }{\omega _1}\right) d\eta d\theta , \end{aligned}$$

where \(\tau _0=\omega _1^{-2}\) and

$$\begin{aligned}&\displaystyle D_{i,j}:\left[ -\omega _1+(i-1)d,-\omega _1+id\right] \times \left[ -\omega _1+(j-1)d,-\omega _1+jd\right] , \\&\quad \forall (i,j)\in N_1^S\times N_1^S. \end{aligned}$$

By mapping each square \(D_{i,j}\) onto the square \({\mathrm {D}}\) and setting for \( k\in N_1^S\)

$$\begin{aligned} \psi _k(z)=\left( \frac{z+1}{2}\right) d-\omega _1+(k-1)d \end{aligned}$$

and for \((i,j)\in N_1^S\times N_1^S\)

$$\begin{aligned} F_{i,j}({\mathbf {x}})= & {} F\left( \frac{\psi _i(x_1)}{\omega _1},\frac{\psi _j(x_2)}{\omega _1}\right) ,\quad K_{i,j}({\mathbf {x}})={\mathbf {K}}\left( \frac{\psi _i(x_1)}{\omega _1},\frac{\psi _j(x_2)}{\omega _1},\omega \right) ,\\ w_{i,j}({\mathbf {x}})= & {} w_1\left( \frac{\psi _i(x_1)}{\omega _1}\right) w_2\left( \frac{\psi _j(x_2)}{\omega _1}\right) , \end{aligned}$$

we get

$$\begin{aligned} \mathcal {I}(F,\omega )= & {} \frac{d^2\tau _0}{4}\sum _{i=1}^{S} \sum _{j=1}^{S} \int _{{\mathrm {D}}} F_{i,j}({\mathbf {x}})K_{i,j}({\mathbf {x}}) w_{i,j}({\mathbf {x}}) d{\mathbf {x}}\nonumber \\=: & {} \varPsi _{m,m}(F,\omega )+\mathcal {R}^\varPsi _{m,m}(F,\omega ), \end{aligned}$$
(19)

where the cubature formula \(\varPsi _{m,m}(F,\omega )\) has been obtained by applying suitable Gauss–Jacobi cubature rules to evaluate the \(S^2\) integrals in (19) and

$$\begin{aligned} \mathcal {R}^\varPsi _{m,m}(F,\omega )=\mathcal {I}(F,\omega ) -\varPsi _{m,m}(F,\omega ) \end{aligned}$$

is the remainder term. Deferring to the “Appendix” the explicit expression of \(\varPsi _{m,m}(F,\omega )\), we now state a result about the stability and the convergence of the rule.

Theorem 2

Assume \({\mathbf {w}}\) as in (8) and \({\mathbf {K}}(\cdot ,\omega )\) as in (18). Then, if there exists a \(\varvec{\sigma }\) as defined in (4) s.t. \(F\in C_{\varvec{\sigma }}\) and the following assumption is satisfied

$$\begin{aligned} \frac{{\mathbf {w}}}{\varvec{\sigma }}\in L^1({\mathrm {D}}), \end{aligned}$$
(20)

then

$$\begin{aligned} |\varPsi _{m,m}(F,\omega )|\le {\mathcal {C}}\Vert F\varvec{\sigma }\Vert _\infty , \quad 0<{\mathcal {C}}\ne {\mathcal {C}}(F,m). \end{aligned}$$
(21)

Moreover, for any \(F\in W_r(\varvec{\sigma })\) under the assumptions \(g\in C^\infty (\varOmega )\), \(\varOmega \equiv [-\omega _1,\omega _1]^2\), \(S\ge 2\),

we get

$$\begin{aligned} |\mathcal {R}^{\varPsi }_{m,m}(F,\omega )| \le {\mathcal {C}}\left( \frac{d}{2}\left( \frac{1}{\omega _1}+1\right) \right) ^{r} \frac{\mathcal {N}_r(F,{\mathbf {K}})}{m^r}, \end{aligned}$$
(22)

where

$$\begin{aligned} \mathcal {N}_{r}(F,{\mathbf {K}})=\Vert F\varvec{\sigma }\Vert _\infty +\max _{h\in N_1^2}\max _{k\in N_0^r} \left( \left\| \frac{\partial ^{r-k} {\mathbf {K}}(\cdot ,\omega )}{(\partial x_h)^{r-k}}\right\| _{\varOmega ,\infty } \times \left\| \frac{\partial ^{k} F(\cdot ,\omega )}{(\partial x_h)^{k}}\right\| _{\infty }\right) , \end{aligned}$$

and \(0<{\mathcal {C}}\ne {\mathcal {C}}(F,m)\).
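Under the simplifying assumption \(w_1=w_2=1\), the whole construction leading to (19) can be sketched as follows (names ours; the overall normalization can be checked on \(F\equiv {\mathbf {K}}\equiv 1\), for which the rule must return \(|{\mathrm {D}}|=4\)):

```python
import numpy as np
from scipy.special import roots_legendre, sici

def dilation_2d(F, K, omega, S, m):
    # 2D-dilation rule (19) for w1 = w2 = 1: stretch D to
    # Omega = [-omega1, omega1]^2 with omega1 = |omega|^(1/2), split Omega
    # into S^2 squares D_{i,j} of side d = 2 * omega1 / S, map each back
    # to D through psi_k, and apply an m-point tensor Gauss-Legendre rule.
    omega1 = abs(omega) ** 0.5
    d = 2.0 * omega1 / S
    z, lam = roots_legendre(m)
    Z1, Z2 = np.meshgrid(z, z, indexing="ij")
    W = np.outer(lam, lam)
    psi = lambda k, t: (t + 1.0) * d / 2.0 - omega1 + (k - 1) * d
    total = 0.0
    for i in range(1, S + 1):
        for j in range(1, S + 1):
            x1 = psi(i, Z1) / omega1      # back to the original variables
            x2 = psi(j, Z2) / omega1
            total += np.sum(W * F(x1, x2) * K(x1, x2))
    # Overall factor (d / (2 * omega1))^2 = 1 / S^2: with F = K = 1 each
    # cell contributes 4, so the rule returns 4 * S^2 / S^2 = |D| = 4.
    return float(total * (d / (2.0 * omega1)) ** 2)

# K_2-type check: int_D cos(omega x1 x2) dx = (4 / omega) Si(omega).
omega = 50.0
val = dilation_2d(lambda x1, x2: np.ones_like(x1),
                  lambda x1, x2: np.cos(omega * x1 * x2),
                  omega, S=8, m=24)
exact = 4.0 * sici(omega)[0] / omega
```

Note that after the dilation the per-cell integrand of \(\cos (\omega x_1x_2)\) oscillates only moderately, which is precisely the “relaxation” effect the change of variables is designed to produce.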

4.2 Computation of the product rule coefficients for kernels (18)

We now provide some details for computing the coefficients in (12) when the kernels are of the types in (18), i.e.

$$\begin{aligned} A_{r,s}({\mathbf {K}}_j,\omega )=\int _{{\mathrm {D}}} \ell _{r,s}^{w_1,w_2}({\mathbf {x}}) {\mathbf {K}}_j({\mathbf {x}},\omega ) {\mathbf {w}}({\mathbf {x}}) d{\mathbf {x}}, \quad j=1,2,3. \end{aligned}$$

We point out that, without loss of generality, we assume \({\mathbf {x}}_0\) to be a fixed datum of the problem, so that \(\mathrm {T}=\mathbb {R}\) and \({\mathbf {t}}=\omega \). By using the 2D-dilation formula (19) of degree m with \(F=\ell _{r,s}^{w_1,w_2},\) we have

$$\begin{aligned} A_{r,s}({\mathbf {K}}_j,\omega )=\int _{{\mathrm {D}}} \ell _{r,s}^{w_1,w_2}({\mathbf {x}}){\mathbf {K}}_j({\mathbf {x}},\omega ){\mathbf {w}}({\mathbf {x}}) d{\mathbf {x}}= \varPsi _{m,m}(\ell _{r,s}^{w_1,w_2},\omega )+\mathcal {R}^\varPsi _{m,m}(\ell _{r,s}^{w_1,w_2},\omega ). \end{aligned}$$
(23)

About the rate of convergence of (23) we state the following

Corollary 1

Under the hypotheses of Theorem 2, for \(m>\frac{ d}{2} e^{\frac{1}{\omega _1}}\) and for \(d\ge 2\), \(\omega _1\ge 1\), the following error estimate holds

$$\begin{aligned} \big |\mathcal {R}^{\varPsi }_{m,m}(\ell _{r,s}^{w_1,w_2},\omega )\big |\le & {} {\mathcal {C}}\ \frac{\mathcal {T}_{2m}({\mathbf {K}})}{m^{m+1-\mu }}, \end{aligned}$$
(24)

where

$$\begin{aligned} \mathcal {T}_{2m}({\mathbf {K}})= & {} \max _{h\in N_1^2}\max _{k\in N_{m+1}^{2m}} \left\| \frac{\partial ^{k} {\mathbf {K}}(\cdot ,\omega )}{(\partial x_h)^{k}}\right\| _{\varOmega ,\infty }, \nonumber \\ \mu= & {} \max \left\{ \alpha _i+\frac{1}{2} -2\gamma _i,\beta _i+\frac{1}{2} -2\delta _i\right\} ,\ i\in \{1,2\}, \end{aligned}$$
(25)

and \({\mathcal {C}}\ne {\mathcal {C}}(m,\omega ).\)

Following the previous scheme, the evaluation of the coefficients \(A_{r,s}({\mathbf {K}},\omega )\) requires \(m^4S^2\) long operations, with S increasing as \(\omega \) increases. However, as the numerical tests will show, the implementation of the product rule for smooth integrand functions f, independently of the choice of the parameter \(\omega \), gives accurate results for “small” values of m.

4.2.1 Cases of complexity reduction

In some cases the computational complexity can be drastically reduced. Assume \(w_1(x_1)=v^{\alpha _1,\alpha _1}(x_1)\). Slightly changing the notation set in Sect. 2, let us denote by \(\{\xi _i^{w_1}\}_{i=-M}^M,\) \(M=\left\lfloor \frac{m}{2} \right\rfloor \), \(\xi _0^{w_1}=0\) for m odd, the zeros of \(p_m(w_1)\). Since \(w_1\) is an even weight function, we have \(\xi _i^{w_1}=-\xi _{-i}^{w_1},\quad i=1,2,\dots ,M,\) and therefore

$$\begin{aligned} \ell _{r}^{w_1}(x_1)= \left\{ \begin{array}{ll} \displaystyle \prod _{i=1}^{\left\lfloor \frac{m}{2}\right\rfloor } \frac{\left( \xi ^{w_1}_i\right) ^2-x_1^2}{\left( \xi ^{w_1}_i\right) ^2}, \quad \text{ if } \ r=0, \ \textit{ m odd,}\\ \displaystyle \left( \frac{x_1}{\xi ^{w_1}_r}\right) ^{\frac{1-(-1)^m}{2}}\prod _{_{i\ne r}^{i=1}}^{\left\lfloor \frac{m}{2}\right\rfloor } \frac{x_1^2-\left( \xi ^{w_1}_i\right) ^2}{\left( \xi ^{w_1}_r\right) ^2-\left( \xi ^{w_1}_i\right) ^2} \left( \frac{x_1+\xi ^{w_1}_r}{2 \xi ^{w_1}_r}\right) , \quad \ 1\le r\le m. \end{array} \right. \end{aligned}$$

Thus, assuming \({\mathbf {w}}({\mathbf {x}})=w_1(x_1)w_2(x_2)\), \(w_1=v^{\alpha _1,\alpha _1}\), \(w_2=v^{\alpha _2,\alpha _2}\), if \({\mathbf {K}}({\mathbf {x}},\omega )\) is symmetric through the axes \(x_1=0\) and \(x_2=0\), i.e.

$$\begin{aligned} {\mathbf {K}}(-{\mathbf {x}},\omega )={\mathbf {K}}({\mathbf {x}},\omega ), \end{aligned}$$

we have

$$\begin{aligned} A_{r,s}({\mathbf {K}},\omega )= & {} A_{r,m-s+1}({\mathbf {K}},\omega ), \quad r\in N_1^m,\ s\in N_1^M,\\ A_{r,s}({\mathbf {K}},\omega )= & {} A_{m-r+1,s}({\mathbf {K}},\omega ), \quad s\in N_1^m,\ r\in N_1^M, \end{aligned}$$

and the global computational cost is reduced by \(75\%\). If in addition \(\alpha _1 = \alpha _2\), i.e. \({\mathbf {w}}=v^{\alpha _1,\alpha _1}v^{\alpha _1,\alpha _1},\) then, since we also have

$$\begin{aligned} A_{r,s}({\mathbf {K}},\omega )=A_{s,r}({\mathbf {K}},\omega ), \quad (r,s)\in N_1^m\times N_1^m, \end{aligned}$$

a reduction of 87.5% is achieved.
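The even-kernel relations above can be checked numerically. The sketch below computes the matrix \(A_{r,s}\) for \({\mathbf {K}}({\mathbf {x}},\omega )=\cos (\omega x_1x_2)\) and \({\mathbf {w}}\equiv 1\) (the Legendre case \(\alpha _1=\alpha _2=0\)) by a brute-force fine grid (names ours), so that all three symmetries can be verified at once; with 0-based rows and columns, reversing a row index realizes \(r\mapsto m-r+1\):

```python
import numpy as np
from scipy.special import roots_legendre

m, omega = 6, 10.0
xs, _ = roots_legendre(m)      # symmetric nodes: xi_i^{w1} = -xi_{m+1-i}^{w1}

def ell(i, z):
    # Fundamental Lagrange polynomial ell_i at the nodes xs.
    others = np.delete(xs, i)
    return np.prod((z[:, None] - others) / (xs[i] - others), axis=1)

# Coefficients (13) for the even kernel K(x, omega) = cos(omega x1 x2)
# and w1 = w2 = 1, by a fine Gauss-Legendre grid (brute force).
z, lam = roots_legendre(120)
Z1, Z2 = np.meshgrid(z, z, indexing="ij")
KW = np.cos(omega * Z1 * Z2) * np.outer(lam, lam)
E = np.array([ell(r, z) for r in range(m)])
A = np.einsum("ri,sj,ij->rs", E, E, KW)
```

In this notation the stated relations read `A == A[::-1, :] == A[:, ::-1] == A.T`, so only (roughly) one eighth of the entries need to be computed.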

In the case where \({\mathbf {K}}({\mathbf {x}},\omega )\) is odd w.r.t. both the coordinate axes, i.e.

$$\begin{aligned} {\mathbf {K}}(-{\mathbf {x}},\omega )=-{\mathbf {K}}({\mathbf {x}},\omega ), \end{aligned}$$

and \({\mathbf {w}}=v^{\alpha _1,\alpha _1}v^{\alpha _2,\alpha _2},\) we have

$$\begin{aligned} A_{r,s}({\mathbf {K}},\omega )= & {} -A_{r,m-s+1}({\mathbf {K}},\omega ), \quad r\in N_1^m,\ s\in N_1^M\\ A_{r,s}({\mathbf {K}},\omega )= & {} -A_{m-r+1,s}({\mathbf {K}},\omega ), \quad s\in N_1^m,\ r\in N_1^M, \end{aligned}$$

and the computational complexity (CC, for short) is reduced by 75%. If, in addition, \(\alpha _1 = \alpha _2\), the following additional relations hold

$$\begin{aligned} A_{r,s}({\mathbf {K}},\omega )=A_{s,r}({\mathbf {K}},\omega ), \quad r,s\in N_1^m, \end{aligned}$$

and CC is reduced by 87.5%.

4.3 Degenerate kernels

Whenever the kernels satisfy \({\mathbf {K}}({\mathbf {x}},\omega )=k_i(x_1,\omega )k_j(x_2,\omega )\), for \(i,j\in \{4,5\}\) where

$$\begin{aligned}&k_4(x,\omega )=\frac{1}{((x-x_0)^2+\omega ^{-1})^\lambda },\ \ k_5(x,\omega )=G(\omega x), \end{aligned}$$

with \(x_0\in (-\,1,1),\ \lambda \in \mathbb {R}^+,\) \(G\in C^\infty ([-\omega ,\omega ]),\) the coefficients take the form

$$\begin{aligned} A_{r,s}({\mathbf {K}},{\mathbf {t}})=B_r(k_i,\omega ) B_s(k_j,\omega ),\quad (r,s)\in N_1^m\times N_1^m, \end{aligned}$$

where

$$\begin{aligned} B_r(k_i,\omega )= & {} \int _{-1}^1\ell _r^{w_1}(x_1) k_i(x_1,\omega )w_1(x_1) dx_1,\\ B_s(k_j,\omega )= & {} \int _{-1}^1\ell _s^{w_2}(x_2) k_j(x_2,\omega ) w_2(x_2) dx_2 ,\quad r,s\in N_1^m. \end{aligned}$$

Also in this case the computational effort is drastically reduced, since the univariate coefficients \(B_r(k_j,\omega ),\ j=4,5,\) can be approximated by implementing the 1-D dilation method (see [4, 16]).
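A sketch of the factorization \(A_{r,s}=B_rB_s\) for the degenerate kernel \({\mathbf {K}}=k_4(x_1,\omega )k_4(x_2,\omega )\) with \(x_0=0\), \(\lambda =1\) and \(w_1=w_2=1\); the univariate coefficients are computed here by a fine Gauss–Legendre rule standing in for the 1-D dilation method (names ours):

```python
import numpy as np
from scipy.special import roots_legendre

m, omega = 5, 1.0e2
k4 = lambda x: 1.0 / ((x - 0.0)**2 + 1.0 / omega)   # x0 = 0, lambda = 1
xs, _ = roots_legendre(m)                           # w1 = w2 = 1 for simplicity

def ell(i, z):
    # Fundamental Lagrange polynomial ell_i at the nodes xs.
    others = np.delete(xs, i)
    return np.prod((z[:, None] - others) / (xs[i] - others), axis=1)

# Univariate coefficients B_r(k4, omega) by a fine Gauss-Legendre rule,
# a stand-in for the 1-D dilation method of [4, 16].
z, w = roots_legendre(400)
E = np.array([ell(r, z) for r in range(m)])         # E[r, :] = ell_r(z)
B = E @ (k4(z) * w)                                 # B_r = int ell_r k4 dx

# For the degenerate kernel K = k4(x1) k4(x2) the coefficients factorize.
A_fact = np.outer(B, B)                             # A_{r,s} = B_r B_s

# Direct 2-D computation of (13) for comparison.
Z1, Z2 = np.meshgrid(z, z, indexing="ij")
KW = k4(Z1) * k4(Z2) * np.outer(w, w)
A_direct = np.einsum("ri,sj,ij->rs", E, E, KW)
```

Only m univariate integrals are needed instead of \(m^2\) bivariate ones, which is the source of the cost reduction.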

5 Numerical examples

In this section we present some examples to test the cubature rule proposed in Sect. 3, comparing our results with those obtained by other methods. More precisely, we approximate each integral by the cubature rule (12) for increasing values of m, choosing three different values of \(\omega \) and computing the coefficients in (12) via the 2D-dilation rule. In each example we also report the numerical results obtained by the Gauss–Jacobi cubature rule (GJ-rule, for short) and those achieved by the straightforward application of the 2D-dilation rule (2D-d). For the first two tests, involving the nearly singular kernel \({\mathbf {K}}_1(\cdot ,\omega )\), we also provide the results obtained by the iterated \(\sinh \) transformation proposed by Johnston et al. [8] (JJE-method, for short). The integrals in Examples 3 and 4 involve oscillatory kernels of the type \({\mathbf {K}}_2(\cdot ,\omega )\). In Example 3 our results are compared with those achieved by the method proposed by Huybrechs and Vandewalle [7] (HV-method, for short), since the function f satisfies their assumptions of convergence. The last two tests involve “mixed” kernels, and for them we compare our results with those achieved by the JJE-method applied to the kernel \({\mathbf {K}}_1(\cdot ,\omega )\) with the function f replaced by \(f{\mathbf {K}}_2(\cdot ,\omega )\).

We point out that all the computations were performed in double precision (\(eps \approx 2.22044e{-}16\)) and in the tables the symbol “–” means that machine precision has been achieved.

Example 1

$$\begin{aligned} I(f;\omega )= & {} \int _{{\mathrm {D}}} \frac{e^{x_1 x_2}}{x_1^2+x_2^2+\omega ^{-1}} dx_1 dx_2\\ f({\mathbf {x}})= & {} e^{x_1 x_2}, \quad {\mathbf {K}}_1({\mathbf {x}},\omega )=\frac{1}{x_1^2+x_2^2+\omega ^{-1}}, \quad w_1=w_2=1, \end{aligned}$$

The function \(f\in W_r(\varvec{\sigma })\) with \(\varvec{\sigma }=1\) for any \(r\ge 1.\) In Table 1 the results obtained by implementing the product rule (12) show that machine precision is attained at \(m=16\) for any choice of \(\omega \). The 2D-d rule presents a similar behavior, as the results in Table 2 show. The JJE-method (Table 3) also converges fast, achieving almost satisfactory results, even if a Gauss–Laguerre cubature rule of order \(m=1024\) is required to obtain 13 digits. Finally, as expected, with the GJ-rule a progressive loss of precision is detected as \(\omega \) increases, until the results become completely wrong (Table 4).

Table 1 Example 1: results by the product rule \(\varSigma _{m,m}(f)\)
Table 2 Example 1: results by 2D-d
Table 3 Example 1: results by JJE-method
Table 4 Example 1: results by GJ-rule
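As a rough illustration of the two regimes discussed above (and not of the product rule (12) itself, whose coefficients require the 2D-dilation rule), the sketch below applies a plain tensor-product Gauss–Legendre rule to the integral of Example 1; the helper names are ours. For moderate \(\omega \) the rule converges to machine precision, in line with Table 4 before the loss of precision sets in.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

def tensor_gauss(f, m):
    """Tensor-product m-point Gauss-Legendre rule on [-1, 1]^2 (w_1 = w_2 = 1)."""
    x, w = leggauss(m)
    X1, X2 = np.meshgrid(x, x, indexing="ij")
    return float(np.sum(np.outer(w, w) * f(X1, X2)))

def example1_integrand(omega):
    # f(x) K_1(x, omega) with f(x) = exp(x1 x2), x0 = (0, 0), lambda = 1
    return lambda x1, x2: np.exp(x1 * x2) / (x1**2 + x2**2 + 1.0 / omega)

# Moderate omega: two successive rules agree to near machine precision.
q64 = tensor_gauss(example1_integrand(10.0), 64)
q128 = tensor_gauss(example1_integrand(10.0), 128)
print(abs(q64 - q128))
```

Repeating the experiment with `omega = 1e6` shows the stagnation: the kernel is then nearly singular at the origin and the plain rule needs far more nodes, which is precisely the situation the product and 2D-dilation rules are designed for.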

Example 2

$$\begin{aligned}&I(f;\omega )=\int _{{\mathrm {D}}} \frac{\log ^{\frac{15}{2}}(x_1+x_2+4)}{x_1^2+x_2^2+\omega ^{-1}} \sqrt{(1-x_1^2)(1-x_2^2)} dx_1 dx_2\\&f({\mathbf {x}})=\log ^{\frac{15}{2}}(x_1+x_2+4) ,\quad {\mathbf {K}}({\mathbf {x}},\omega )=\frac{1}{x_1^2+x_2^2+\omega ^{-1}}\\&w_1=w_2=v^{\frac{1}{2},\frac{1}{2}},\\&\sigma _1=\sigma _2=v^{\frac{1}{4},\frac{1}{4}},\quad \varvec{\sigma }=\sigma _1\sigma _2 \end{aligned}$$

Also in this case the function \(f\in W_r(\varvec{\sigma })\) for any \(r\ge 1.\) In Table 5 the results obtained by implementing the product rule (12) show that machine precision is attained for \(m=32\). In this case the seminorm grows faster than in the previous example: for instance, \(\mathcal {M}_{10}(f)\sim 2.5\times 10^4\). The 2D-d behaves similarly; its results are given in Table 6. The JJE-method (Table 7) converges more slowly than in the previous example, achieving 8–9 exact digits: here the changes of variables are applied to the whole integrand, including the two Chebyshev weights, and this explains the poorer behavior. As in the previous test, with the GJ-rule a progressive loss of precision occurs as \(\omega \) increases, until, for \(\omega =10^6\), the values are completely wrong (Table 8).

Table 5 Example 2: results by the product rule \(\varSigma _{m,m}(f)\)
Table 6 Example 2: results by 2D-d
Table 7 Example 2: results by JJE-method
Table 8 Example 2: results by GJ-rule

Example 3

$$\begin{aligned}&I(f;\omega )=\int _{{\mathrm {D}}} \sinh (x_1 x_2)e^{i \omega _1(x_1+x_2)} dx_1 dx_2\\&f({\mathbf {x}})=\sinh (x_1 x_2),\ {\mathbf {K}}({\mathbf {x}},\omega )=e^{i \omega _1(x_1+x_2)},\\&w_1=w_2=1,\quad {\mathbf {w}}=w_1w_2\\&\sigma _1=\sigma _2=1,\quad \varvec{\sigma }=\sigma _1\sigma _2 \end{aligned}$$

The function \(f\in W_r(\varvec{\sigma })\) for any \(r\ge 1,\) with \(\varvec{\sigma }=1.\) Table 9, containing the results of the product rule (12), shows that machine precision is attained with \(m=16\) for \(\omega _1=10, 10^2\), while for greater values of \(\omega _1\) the convergence is slower. The behavior of the 2D-d (Table 10) is similar, although, as in other examples, 1–2 final digits are lost w.r.t. the product rule \(\varSigma _{m,m}(f)\). The HV-method (Table 11) gives very good results, which is not surprising since, in accordance with its convergence hypotheses, the oscillator (\(x+y\)) and the function f are both analytic. Finally, for large \(\omega _1\), the GJ-rule does not give any correct result for \(m\le 512\), achieving acceptable results only for \(m=1024\) (see Table 12).

Table 9 Example 3: results by the product rule \(\varSigma _{m,m}(f)\)
Table 10 Example 3: results by 2D-d
Table 11 Example 3: results by HV-method
Table 12 Example 3: results by GJ-rule
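A similar hedged sketch (again a plain tensor Gauss–Legendre rule, not the proposed formula; function names are ours) illustrates why accuracy is only recovered once m exceeds the frequency: for \(\omega _1=10\), a rule with \(m=64\gg \omega _1\) nodes per direction already resolves the oscillations of \({\mathbf {K}}_2\).

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

def tensor_gauss_complex(f, m):
    # Complex-valued tensor-product Gauss-Legendre rule on [-1, 1]^2
    x, w = leggauss(m)
    X1, X2 = np.meshgrid(x, x, indexing="ij")
    return complex(np.sum(np.outer(w, w) * f(X1, X2)))

def example3_integrand(omega1):
    # f(x) K_2(x, omega) with f = sinh(x1 x2) and oscillator x1 + x2
    return lambda x1, x2: np.sinh(x1 * x2) * np.exp(1j * omega1 * (x1 + x2))

# omega_1 = 10: the oscillation is resolved and two successive rules agree.
q64 = tensor_gauss_complex(example3_integrand(10.0), 64)
q128 = tensor_gauss_complex(example3_integrand(10.0), 128)
print(abs(q64 - q128))
```

For large \(\omega _1\) the same plain rule needs m comparable with the frequency before any digits stabilize, which matches the reported failure of the GJ-rule up to \(m=512\).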

Example 4

$$\begin{aligned}&I(f;\omega )=\int _{{\mathrm {D}}} |\sinh (x_1 x_2)|^{11.5} \sin (\omega x_1 x_2) v^{-\frac{1}{4},\frac{1}{4}}(x_1) v^{-\frac{1}{4},\frac{1}{4}}(x_2) dx_1 dx_2\\&f({\mathbf {x}})=|\sinh (x_1 x_2)|^{11.5}, \quad {\mathbf {K}}({\mathbf {x}},\omega )=\sin (\omega x_1 x_2)\\&w_1=w_2=v^{-\frac{1}{4},\frac{1}{4}}\\&\sigma _1=\sigma _2=1,\quad \varvec{\sigma }=\sigma _1\sigma _2 \end{aligned}$$

The function \(f\in W_{11}(\varvec{\sigma })\) for \(\sigma _1=\sigma _2=1.\) Table 13, which contains the results of the product rule (12), shows that machine precision is attained with \(m=512\) for \(\omega =10^2\); for greater values of \(\omega \) the convergence is slower, but 14 digits are still obtained. The results are coherent with the theoretical estimate (17) combined with (6), since the seminorm \(\mathcal {M}_{11}(f)\sim 10^{11}\). The behavior of the 2D-d (Table 14) is similar, although, as in other examples, 1–2 final digits are lost w.r.t. the cubature formula \(\varSigma _{m,m}(f)\). Since the assumptions of the HV-method are not satisfied, we did not implement it. Finally, for large \(\omega \), the GJ-rule does not give any correct result for \(m\le 512\), achieving acceptable results only for \(m=1024\) (see Table 15).

Table 13 Example 4: results by the product rule \(\varSigma _{m,m}(f)\)
Table 14 Example 4: results by 2D-d
Table 15 Example 4: results by GJ-rule

Example 5

$$\begin{aligned}&I(f;\omega )=\int _{{\mathrm {D}}} (x_1+x_2)^{20}\frac{\sin (\omega x_1 x_2)}{x_1^2+x_2^2+\omega ^{-1}} dx_1 dx_2\\&f({\mathbf {x}})=(x_1+x_2)^{20},\ {\mathbf {K}}({\mathbf {x}},\omega )=\frac{\sin (\omega x_1 x_2)}{x_1^2+x_2^2+\omega ^{-1}},\\&w_1=w_2=1,\quad {\mathbf {w}}=w_1w_2\\&\sigma _1= \sigma _2=1,\quad \varvec{\sigma }=\sigma _1\sigma _2 \end{aligned}$$

The integral contains a mixed-type kernel, with \(f\in W_r(\varvec{\sigma })\) for any r. The results of the product rule (12) given in Table 16 are coherent with the theoretical estimates, since the values of the seminorms are very large: for instance, for \(r=20\), \(\mathcal {M}_r(f)\sim 10^{18}.\) Comparing our results with those obtained by the 2D-d (Table 17), we observe that roughly 2 digits are lost w.r.t. the product rule. In the absence of other procedures, we have forced the use of the JJE-method: for \(\omega = 100\) its results present 12 correct digits, while for larger \(\omega \) they are completely wrong (see Table 18). This bad behavior is to be expected, since the oscillating factor is not handled by their method. Finally, the results in Table 19 show that the GJ-rule is unreliable for large \(\omega \).

Table 16 Example 5: results by the product rule \(\varSigma _{m,m}(f)\)
Table 17 Example 5: results by 2D-d
Table 18 Example 5: results by JJE-method
Table 19 Example 5: results by GJ-rule

Example 6

$$\begin{aligned}&I(f;\omega )=\int _{{\mathrm {D}}} |x_1-x_2|^{7.1} \frac{\sin (\omega x_1 x_2)}{x_1^2+x_2^2+\omega ^{-1}} v^{\frac{1}{2},\frac{1}{2}}(x_1) v^{-\frac{1}{4},-\frac{1}{4}}(x_2) dx_1 dx_2\\&f({\mathbf {x}})=|x_1-x_2|^{7.1}, K({\mathbf {x}},\omega )=\frac{\sin (\omega x_1 x_2)}{x_1^2+x_2^2+\omega ^{-1}},\\&w_1=v^{\frac{1}{2},\frac{1}{2}}, \ w_2= v^{-\frac{1}{4},-\frac{1}{4}}\\&\sigma _1=v^{\frac{1}{4},\frac{1}{4}}, \ \sigma _2=1,\quad \varvec{\sigma }=\sigma _1\sigma _2 \end{aligned}$$

We conclude with a test on a mixed-type kernel. Here the function \(f\in W_7(\varvec{\sigma })\). Since the seminorm \(\mathcal {M}_r(f)\sim 6\times 10^3\), in accordance with the theoretical estimate, 15 exact (not always significant) digits are computed for \(m=512\) (Table 20). The results are comparable with those achieved by the 2D-d (Table 21), while the GJ-rule (Table 23), as well as the JJE-method (Table 22), gives poor approximations.

Table 20 Example 6: results by the product rule \(\varSigma _{m,m}(f)\)
Table 21 Example 6: results by 2D-d
Table 22 Example 6: results by JJE-method
Table 23 Example 6: results by GJ-rule
Fig. 7

Error behavior for different choices of d in Example 1

6 The choice of the parameter d

We now briefly discuss how to choose the number \(S^2\) of domain subdivisions in the 2D-dilation rule, or equivalently how to set the side length d of the squares, since \(S=\frac{2\omega _1}{d}\). By the error estimate (22), assuming the contribution of \(\mathcal {N}_{r}(F,{\mathbf {K}})\) to be negligible and fixing the desired computational accuracy tol, m and S are inversely proportional. Therefore, whenever it is useful to keep m as small as possible, we have to take S larger. We point out that this behavior depends on the slower rate of convergence of the involved Gauss–Jacobi cubature rules when the “stretching” parameter S is “too small”, i.e. when d is too large.

Of course, the previous considerations are not yet conclusive on the choice of S. However, by numerical evidence, a good “compromise” to reduce m seems to be \(S=\left\lfloor \omega _1\right\rfloor \) and therefore \(d=\frac{2\omega _1}{S}\sim 2\). To show this behavior, we plot the relative errors achieved for some values of d chosen between 2 and \(\omega _1\), for the first two numerical tests of Sect. 5 (see Figs. 7, 8).
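The heuristic \(S=\lfloor \omega _1\rfloor \) can be written down in a couple of lines; this is only a transcription of the rule above, with names of our choosing, and assumes \(\omega _1\ge 1\):

```python
import math

def dilation_parameters(omega1):
    """Heuristic of this section: S = floor(omega1) subdivisions per direction,
    so that the squares' side length d = 2*omega1/S stays close to 2.
    Assumes omega1 >= 1 (otherwise S would be 0)."""
    S = math.floor(omega1)
    d = 2.0 * omega1 / S
    return S, d

S, d = dilation_parameters(100.0)   # S = 100, d = 2.0
```

For non-integer frequencies d exceeds 2 only slightly, e.g. \(\omega _1=12.5\) gives \(S=12\) and \(d=25/12\approx 2.08\).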

Fig. 8

Error behavior for different choices of d in Example 2

7 A comparison between product and 2D-dilation rules

We propose a comparison between the proposed rules w.r.t. time complexity. Tables 24 and 25 contain the computational times (in seconds) obtained by implementing the product rule \(\varSigma _{m,m}(f)\) and the 2D-dilation rule \(\varPsi _{m,m}(f,\omega )\) for the integrals given in Examples 1 and 2 of Sect. 5. For each of them the times have been computed for \(\omega =10^2, 10^4, 10^6\), by implementing both algorithms in Matlab version R2016a, on a PC with an Intel Core i7-2600 CPU 3.40 GHz and 8 GB of memory. We point out that the times related to the product formula \(\varSigma _{m,m}(f)\) include those spent computing the coefficients \(\{A_{r,s}({\mathbf {K}},\omega )\}_{(r,s)\in N_1^m\times N_1^m}\).

Table 24 Times for \(\varSigma _{m,m}(f)\) and \(\varPsi _{m,m}(f,\omega )\) in Example 1
Table 25 Times for \(\varSigma _{m,m}(f)\) and \(\varPsi _{m,m}(f,\omega )\) in Example 2

As one can see, the timings required by the product rule are slightly, but not much, longer than those required by the 2D-dilation rule, as long as m is small. However, in Example 2, with \(m=32\) and for all the values of \(\omega \), the timings required by the product rule are slightly smaller than those required by the 2D-dilation rule. Indeed, 2D-dilation requires \((mS)^2\) samples of the integrand function f, where S increases with \(\omega \). Thus the global time depends strongly on the cost of evaluating the function: in Example 2 the evaluation of \(f({\mathbf {x}})=\log ^{\frac{15}{2}}(x_1+x_2+4)\) is more expensive than that of \(f({\mathbf {x}})=e^{x_1x_2}\) in Example 1. This variability cannot occur with the product rule, where the number m of function samples is independent of \(\omega \). In any case, since in the product rule the main effort is due to the computation of its coefficients, it is preferable when the kernels present symmetry properties by virtue of which the number of coefficients is drastically reduced (see Sect. 4.2.1).
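The sample counts underlying this comparison follow directly from the structure of the two rules (\(m^2\) nodes for the product rule versus \(m^2\) nodes on each of the \(S^2\) subsquares for the 2D-dilation rule); the small sketch below, with names of our choosing, makes the growth of the gap explicit when \(S=\lfloor \omega _1\rfloor \):

```python
def product_rule_samples(m):
    # Product rule: m^2 samples of f, independent of omega
    return m * m

def dilation_rule_samples(m, S):
    # 2D-dilation rule: m^2 samples on each of the S^2 subsquares
    return (m * S) ** 2

# The ratio is exactly S^2, so with S ~ omega_1 it grows quadratically in omega_1.
m = 32
for S in (10, 100, 1000):
    print(S, dilation_rule_samples(m, S) // product_rule_samples(m))
```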

8 Proofs

First we recall a result needed in the proof below. Let

$$\begin{aligned} S_m(w_1,h,t)=\sum _{i=0}^{m}a_i(h)p_i(w_1,t),\qquad \displaystyle a_i(h)=\int _{-1}^1 h(\tau )p_i(w_1,\tau )w_1(\tau ) \ d\tau \end{aligned}$$

be the m-th Fourier sum of the univariate function \(h\in L^p_{\sigma _1}((-\,1,1))\) and let \(S_{m,m}({\mathbf {w}},G)\) be the bivariate \(m-\)th Fourier sum in the orthonormal polynomial systems \(\{p_m(w_1)\}_m\), \(\{p_m(w_2)\}_m\) of a given function \(G\in L^p_{\varvec{\sigma }}({\mathrm {D}}),\) i.e.

$$\begin{aligned} S_{m,m}({\mathbf {w}},G,{\mathbf {x}})=S_m(w_2,S_m(w_1,G_{x_2},x_1),x_2)\equiv S_m(w_1,S_m(w_2,G_{x_1},x_2),x_1), \end{aligned}$$

where \(S_{m,m}({\mathbf {w}},G)\in \mathbb {P}_{m,m}\). For \(1<p<\infty \), under the assumptions (see for instance [15, p.2332])

$$\begin{aligned} \frac{\varvec{\sigma }}{\sqrt{{\mathbf {w}}\varphi _1\varphi _2}}\in L^p({\mathrm {D}}), \quad \frac{{\mathbf {w}}}{\varvec{\sigma }} \in L^q({\mathrm {D}}), \quad \frac{1}{\varvec{\sigma }}\sqrt{\frac{{\mathbf {w}}}{\varphi _1\varphi _2}} \in L^q({\mathrm {D}}), \quad q=\frac{p}{p-1}, \end{aligned}$$

then, for any \(f\in C_{\varvec{\sigma }}\)

$$\begin{aligned} \Vert S_{m,m}({\mathbf {w}},f)\varvec{\sigma }\Vert _p\le {\mathcal {C}}\Vert f\varvec{\sigma }\Vert _\infty ,\quad {\mathcal {C}}\ne {\mathcal {C}}(m,f). \end{aligned}$$
(26)

Proof of Theorem 1

First we prove

$$\begin{aligned} || \mathcal {L}_{m,m} ({\mathbf {w}};f) {\mathbf {K}}(\cdot ,{\mathbf {t}}) {\mathbf {w}}||_1\le {\mathcal {C}}\Vert f\varvec{\sigma }\Vert _\infty , \end{aligned}$$
(27)

which implies (16). For any fixed \({\mathbf {t}}\in \mathrm {T}\), let \(g_{m}=\mathrm {sgn}(\mathcal {L}_{m,m}({\mathbf {w}};f) {\mathbf {K}}({\mathbf {x}},{\mathbf {t}}))\). Then,

$$\begin{aligned}&|| \mathcal {L}_{m,m} ({\mathbf {w}};f) {\mathbf {K}}(\cdot ,{\mathbf {t}}) {\mathbf {w}}||_1= \int _{{\mathrm {D}}} \mathcal {L}_{m,m} ({\mathbf {w}}; f; {\mathbf {x}}) {\mathbf {K}}({\mathbf {x}},{\mathbf {t}}) g_{m}({\mathbf {x}}) {\mathbf {w}}({\mathbf {x}}) d{\mathbf {x}}\\&\quad = \left| \sum _{i=1}^m \sum _{j=1}^m \lambda _{m,i}^{w_1} \lambda _{m,j}^{w_2} f\big (\xi _{i,j}^{w_1,w_2}\big )\sum _{k=0}^{m-1} p_k\big (w_1,\xi _{i}^{w_1}\big ) \sum _{r=0}^{m-1} p_r\big (w_2,\xi _{j}^{w_2}\big ) \right. \\&\qquad \times \left. \int _{{\mathrm {D}}} p_k(w_1,x_1) p_r(w_2, x_2) {\mathbf {K}}({\mathbf {x}},{\mathbf {t}}) g_m({\mathbf {x}}) {\mathbf {w}}({\mathbf {x}}) d{\mathbf {x}}\right| \\&\quad = \left| \sum _{i=1}^m \sum _{j=1}^m \lambda _{m,i}^{w_1} \lambda _{m,j}^{w_2}f\big (\xi _{i,j}^{w_1,w_2}\big ) S_{m,m}\big ({\mathbf {w}};{\mathbf {K}}(\cdot ,{\mathbf {t}}) g_m; \xi _{i,j}^{w_1,w_2}\big )\right| \end{aligned}$$

By the Hölder inequality,

$$\begin{aligned}&|| \mathcal {L}_{m,m} ({\mathbf {w}};f) {\mathbf {K}}(\cdot ,{\mathbf {t}}) {\mathbf {w}}||_1 \le \sum _{i=1}^m \lambda _i^{w_1}\left( \sum _{j=1}^m \lambda _j^{w_2} f^2\big (\xi _{i,j}^{w_1,w_2}\big )\right) ^{\frac{1}{2}}\nonumber \\&\quad \times \left( \sum _{j=1}^m \lambda _j^{w_2} S_{m,m}^2\left( {\mathbf {w}}; {\mathbf {K}}(\cdot ,{\mathbf {t}}) g_m; \xi _{i,j}^{w_1,w_2}\right) \right) ^{\frac{1}{2}} \end{aligned}$$
(28)
$$\begin{aligned}&\quad \le \left( \sum _{i=1}^m\sum _{j=1}^m \lambda _i^{w_1}\lambda _j^{w_2} f^2\big (\xi _{i,j}^{w_1,w_2}\big )\right) ^{\frac{1}{2}} \end{aligned}$$
(29)
$$\begin{aligned}&\quad \times \left( \sum _{i=1}^m\sum _{j=1}^m \lambda _i^{w_1} \lambda _j^{w_2} S_{m,m}^2\left( {\mathbf {w}}; K(\cdot ,{\mathbf {t}}) g_m; \xi _{i,j}^{w_1,w_2}\right) \right) ^{\frac{1}{2}}. \end{aligned}$$
(30)

Now, taking into account (26) and the assumptions (15)

$$\begin{aligned}&\left( \sum _{i=1}^m\sum _{j=1}^m \lambda _i^{w_1}\lambda _j^{w_2} S_{m,m}^2\big ({\mathbf {w}}; {\mathbf {K}}(\cdot ,{\mathbf {t}}) g_m; \xi _{i,j}^{w_1,w_2}\big )\right) ^{\frac{1}{2}}\nonumber \\&\quad = \left( \int _{{\mathrm {D}}} S_{m,m}^2\left( {\mathbf {w}}; {\mathbf {K}}(\cdot ,{\mathbf {t}}) g_m;{\mathbf {x}}\right) {\mathbf {w}}({\mathbf {x}}) d{\mathbf {x}}\right) ^{\frac{1}{2}} \end{aligned}$$
(31)
$$\begin{aligned}&\quad =\Vert S_{m,m}\left( {\mathbf {w}}; {\mathbf {K}}(\cdot ,{\mathbf {t}})\right) \sqrt{{\mathbf {w}}}\Vert _2\le {\mathcal {C}}\Vert {\mathbf {K}}(\cdot ,{\mathbf {t}})\sqrt{{\mathbf {w}}}\Vert _2. \end{aligned}$$
(32)

Moreover,

$$\begin{aligned} \left( \sum _{i=1}^m \sum _{j=1}^m \lambda _i^{w_1}\lambda _j^{w_2} f^2\big (\xi _{i,j}^{w_1,w_2}\big )\right) ^{\frac{1}{2}}\le ||f \varvec{\sigma }||_{\infty } \left( \sum _{i=1}^m \sum _{j=1}^m \frac{\lambda _i^{w_1} \lambda _j^{w_2}}{\varvec{\sigma }\big (\xi _{i,j}^{w_1,w_2}\big )} \right) ^{\frac{1}{2}} \end{aligned}$$

and taking into account the relationship (see [14, p. 673 (14)])

$$\begin{aligned} \displaystyle \lambda _i^{w_j}\sim w_j\big (\xi _i^{w_j}\big )\varDelta \xi _i^{w_j},\quad \varDelta \xi _i^{w_j}=\xi _{i+1}^{w_j}-\xi _i^{w_j}, \quad j=1,2, \end{aligned}$$

it follows

$$\begin{aligned} \sum _{i=1}^m \sum _{j=1}^m\frac{\lambda _i^{w_1} \lambda _j^{w_2}}{\varvec{\sigma }\big (\xi _{i,j}^{w_1,w_2}\big )} \le \sum _{i=1}^m \sum _{j=1}^m \frac{\varDelta \xi _i^{w_1} w_1\big (\xi _i^{w_1}\big )}{\sigma _1\big (\xi _i^{w_1}\big )} \frac{\varDelta \xi _j^{w_2} w_2\big (\xi _j^{w_2}\big )}{\sigma _2\big (\xi _j^{w_2}\big )} \le \int _{{\mathrm {D}}} \frac{{\mathbf {w}}({\mathbf {x}})}{\varvec{\sigma }({\mathbf {x}})} \ d{\mathbf {x}}\le \mathcal {C}. \end{aligned}$$
(33)

Combining the last inequality and (32) with (28), (27) follows. To prove (17), we start from

$$\begin{aligned} \big |\mathcal {R}_{m,m}^\varSigma (f;{\mathbf {t}})\big |\le & {} \int _{{\mathrm {D}}} \big |\big [f({\mathbf {x}})-P_{m-1,m-1}^*({\mathbf {x}})\big ]{\mathbf {K}}({\mathbf {x}},{\mathbf {t}})\big | {\mathbf {w}}({\mathbf {x}}) \ d{\mathbf {x}}\nonumber \\&+ \int _{{\mathrm {D}}} \big |\mathcal {L}_{m,m}\big ({\mathbf {w}};f-P_{m-1,m-1}^*;{\mathbf {x}}){\mathbf {K}}({\mathbf {x}},{\mathbf {t}}\big )\big | {\mathbf {w}}({\mathbf {x}}) \ d{\mathbf {x}}\nonumber \\=: & {} A_1({\mathbf {t}})+A_2({\mathbf {t}}), \end{aligned}$$
(34)

where \(\displaystyle P_{m-1,m-1}^*\) is the polynomial of best approximation of \(f\in C_{\varvec{\sigma }}\). By the Hölder inequality, taking into account (14) and (15), it follows

$$\begin{aligned} A_1({\mathbf {t}})\le {\mathcal {C}}E_{m-1,m-1}(f)_{\varvec{\sigma }} \int _{{\mathrm {D}}} |{\mathbf {K}}({\mathbf {x}},{\mathbf {t}})|\frac{{\mathbf {w}}({\mathbf {x}})}{\varvec{\sigma }({\mathbf {x}})} \ d{\mathbf {x}}\le {\mathcal {C}}E_{m-1,m-1}(f)_{\varvec{\sigma }}. \end{aligned}$$
(35)

Since by (27)

$$\begin{aligned} A_2({\mathbf {t}})\le {\mathcal {C}}E_{m-1,m-1}(f)_{\varvec{\sigma }}, \end{aligned}$$
(36)

(17) follows combining (35), (36) with (34). \(\square \)

Proof of Theorem 2

First we prove (21). Starting from expression (38) given in the “Appendix”, we obtain the following bound:

$$\begin{aligned} |\varPsi _{m,m}(F,\omega )|\le & {} \frac{d^2\tau _0}{4}\mathcal {U}_1\max _{{\mathbf {x}}\in {\mathrm {D}}}|F({\mathbf {x}})K({\mathbf {x}},\omega )\varvec{\sigma }({\mathbf {x}})|\left\{ \tau _1 \sum _{r=1}^m\sum _{s=1}^m \frac{\lambda _r^{u_2}\lambda _s^{u_4}}{\varvec{\sigma }\big (\xi _{r,s}^{u_2,u_4}\big )}\right. \\&+\,\tau _2\sum _{r=1}^m\sum _{s=1}^m \frac{\lambda _r^{u_2}\lambda _s^{u_3}}{\varvec{\sigma }\big (\xi _{r,s}^{u_2,u_3}\big )}+\tau _3\sum _{r=1}^m\sum _{s=1}^m \frac{\lambda _r^{u_1}\lambda _s^{u_4}}{\varvec{\sigma }\big (\xi _{r,s}^{u_1,u_4}\big )}+ \tau _4 \sum _{r=1}^m\sum _{s=1}^m \frac{\lambda _r^{u_1}\lambda _s^{u_3}}{\varvec{\sigma }\big (\xi _{r,s}^{u_1,u_3}\big )}\\&+\, \tau _1 \sum _{j=2}^{S-1} \sum _{r=1}^m\sum _{s=1}^m \frac{\lambda _r^{u_2}\lambda _s^{u_0}}{\varvec{\sigma }\big (\xi _{r,s}^{u_2,u_0}\big )} + \tau _2 \sum _{i=2}^{S-1} \sum _{r=1}^m\sum _{s=1}^m \frac{\lambda _r^{u_0}\lambda _s^{u_3}}{\varvec{\sigma }\big (\xi _{r,s}^{u_0,u_3}\big )}\\&+\, \tau _1 \sum _{i=2}^{S-1} \sum _{r=1}^m\sum _{s=1}^m \frac{\lambda _r^{u_0}\lambda _s^{u_4}}{\varvec{\sigma }\big (\xi _{r,s}^{u_0,u_4}\big )} + \tau _3 \sum _{j=2}^{S-1} \sum _{r=1}^m\sum _{s=1}^m \frac{\lambda _r^{u_1}\lambda _s^{u_0}}{\varvec{\sigma }\big (\xi _{r,s}^{u_1,u_0}\big )} \\&+\, \left. \tau _1 \sum _{i=2}^{S-1} \sum _{j=2}^{S-1} \sum _{r=1}^m\sum _{s=1}^m \frac{\lambda _r^{u_0}\lambda _s^{u_0}}{\varvec{\sigma }\big (\xi _{r,s}^{u_0,u_0}\big )} \right\} , \end{aligned}$$

where

$$\begin{aligned} \mathcal {U}_1= & {} \max \left( \Vert U_1\Vert ,\Vert U_2\Vert ,\Vert U_3\Vert ,\Vert U_4\Vert , \max _{j\in N_1^m}(\Vert U_{5,j}\Vert , \Vert U_{6,j}\Vert ,\Vert U_{7,j}\Vert ,\Vert U_{8,j}\Vert )\right. ,\\&\left. \max _{i\in N_1^m,j\in N_1^m}\Vert U_{9,i,j}\Vert \right) , \end{aligned}$$

and using (33), we have

$$\begin{aligned} |\varPsi _{m,m}(F,\omega )| \le {\mathcal {C}}\mathcal {U}_1\Vert F{\mathbf {K}}(\cdot ,\omega )\varvec{\sigma }\Vert _\infty \left\{ \sum _{k=0,1,2}\sum _{j=0,3,4} \int _{{\mathrm {D}}}\frac{u_k(x_1)u_j(x_2)}{\varvec{\sigma }(x_1,x_2)}d x_1 dx_2 \right\} . \end{aligned}$$

Then, taking into account the assumption (20) and Proposition 1, and in view of the boundedness of \({\mathbf {K}}(\cdot ,\omega )\), we can conclude

$$\begin{aligned} |\varPsi _{m,m}(F,\omega )| \le {\mathcal {C}}\Vert F\varvec{\sigma }\Vert _\infty . \end{aligned}$$

Now we prove (22). By (38), taking into account (10) and under the assumption (20), we have

$$\begin{aligned}&\big |\mathcal {R}^{\varPsi }_{m,m}(F,\omega )\big | \le {\mathcal {C}}\left\{ E_{2m-1,2m-1}(F_{1,1}K_{1,1} U_1)_{\varvec{\sigma }}+ E_{2m-1,2m-1}(F_{1,1}K_{1,1}U_2)_{\varvec{\sigma }} \right. \\&\quad + E_{2m-1,2m-1}(F_{S,1}K_{S,1} U_3)_{\varvec{\sigma }} + \sum _{j=2}^{S-1} E_{2m-1,2m-1}(F_{1,j}K_{1,j} U_{5,j})_{\varvec{\sigma }}\\&\quad +\, \sum _{i=2}^{S-1} E_{2m-1,2m-1}(F_{i,S}K_{i,S} U_{6,i} )_{\varvec{\sigma }}+ \sum _{i=2}^{S-1} E_{2m-1,2m-1}(F_{i,1}K_{i,1} U_{7,i})_{\varvec{\sigma }}\\&\quad + \sum _{j=2}^{S-1} E_{2m-1,2m-1}(F_{S,j}K_{S,j} U_{8,j})_{\varvec{\sigma }}\\&\quad + \sum _{i=2}^{S-1} \sum _{j=2}^{S-1} E_{2m-1,2m-1}(F_{i,j}K_{i,j} U_{9,i,j})_{\varvec{\sigma }}\\&\quad +\, \left. E_{2m-1,2m-1}(F_{S,S}K_{S,S} U_4)_{\varvec{\sigma }} \right\} . \end{aligned}$$

Taking into account (7) we get

$$\begin{aligned} |\mathcal {R}^{\varPsi }_{m,m}(F,\omega )|\le & {} {\mathcal {C}}\left\{ \mathcal {U}\sum _{j=1}^S \sum _{i=1}^S E_{m-1,m-1}(F_{i,j}K_{i,j})_{\varvec{\sigma }}\right. \nonumber \\&+\frac{\widetilde{\mathcal {M}}_r^{max}}{m^r}\sum _{j=1}^S\sum _{i=1}^S \Vert F_{i,j}K_{i,j}\varvec{\sigma }\Vert _\infty \left. \right\} \end{aligned}$$
(37)

where

$$\begin{aligned} \mathcal {U}= & {} \max \left( \Vert U_1\varvec{\sigma }\Vert ,\Vert U_2\varvec{\sigma }\Vert ,\Vert U_3\varvec{\sigma }\Vert ,\Vert U_4\varvec{\sigma }\Vert ,\right. \nonumber \\&\left. \max _{j\in N_1^m}(\Vert U_{5,j}\varvec{\sigma }\Vert , \Vert U_{6,j}\varvec{\sigma }\Vert ,\Vert U_{7,j}\varvec{\sigma }\Vert ,\Vert U_{8,j}\varvec{\sigma }\Vert ), \max _{i\in N_1^m,j\in N_1^m}\Vert U_{9,i,j}\varvec{\sigma }\Vert \right) \le {\mathcal {C}}\end{aligned}$$

and

$$\begin{aligned} \widetilde{\mathcal { M}}_r^{max}:= & {} \max \left\{ \max _{1\le k\le 4}\mathcal {M}_r(U_k), \max _{2\le i\le S-1}\left[ \mathcal {M}_r( U_{5,i}), \mathcal {M}_r( U_{6,i}), \mathcal {M}_r( U_{7,i}), \mathcal {M}_r( U_{8,i}), \right. \right. \\&\left. \left. \max _{2\le j\le S-1}\mathcal {M}_r( U_{9,i,j})\right] \right\} \le {\mathcal {C}}\left( \frac{d}{2\omega _1}\right) ^r \mathcal {U}. \end{aligned}$$

Since for \(h\in \{1,2\}\) and \((i,j)\in N_1^{S}\times N_1^{S}\)

$$\begin{aligned}&\left| \frac{\partial ^{r}}{(\partial x_h)^{r}}\left( F_{i,j}K_{i,j}\right) (x_1,x_2)\right| \le \sum _{k=0}^{r}\left( {\begin{array}{c}r\\ k\end{array}}\right) \left| \frac{\partial ^{k}}{(\partial x_h)^{k}} F_{i,j}(x_1,x_2)\right| \left| \frac{\partial ^{r-k}}{(\partial x_h)^{r-k}}K_{i,j}(x_1,x_2)\right| , \end{aligned}$$

we have

$$\begin{aligned}&\left| \frac{\partial ^{r}}{(\partial x_h)^{r}}\left( F_{i,j}K_{i,j}\right) (x_1,x_2)\right| \varphi _h(x_h)^r\varvec{\sigma }(x_1,x_2)\\&\quad \le \max _{k\in N_0^r}\left\{ \left\| \frac{\partial ^{k}F}{(\partial x_h)^{k}} \varphi _h^r\varvec{\sigma }\right\| _\infty \left\| \frac{\partial ^{r-k} {\mathbf {K}}(\cdot ,\omega )}{(\partial x_h)^{r-k}}\right\| _{\varOmega ,\infty }\right\} \sum _{k=0}^{r}\left( {\begin{array}{c}r\\ k\end{array}}\right) \left( \frac{d}{2\omega _1}\right) ^{k}\left( \frac{d}{2}\right) ^{r-k}\\&\quad = \max _{k\in N_0^r}\left\{ \left\| \frac{\partial ^{k}F}{(\partial x_h)^{k}}\varphi _h^r\varvec{\sigma }\right\| _\infty \left\| \frac{\partial ^{r-k} {\mathbf {K}}(\cdot ,\omega )}{(\partial x_h)^{r-k}}\right\| _{\varOmega ,\infty }\right\} \left( \frac{d}{2}\right) ^{r}\left( \frac{1}{\omega _1}+1\right) ^{r},\\&\quad \varOmega \equiv [-\omega _1,\omega _1]^2, \end{aligned}$$

and therefore, taking into account (6), by (37) it follows

$$\begin{aligned} |\mathcal {R}^{\varPsi }_{m,m}(F,\omega )|\le & {} \frac{{\mathcal {C}}}{m^r} \left\{ \mathcal {U} \max _{ h\in N_1^2} \max _{k\in N_0^r}\left( \left\| \frac{\partial ^{k}F}{(\partial x_h)^{k}}\varphi _h^r\varvec{\sigma }\right\| _\infty \left\| \frac{\partial ^{r-k} {\mathbf {K}}(\cdot ,\omega )}{(\partial x_h)^{r-k}}\right\| _{\varOmega ,\infty }\right) \right. \\&\quad \times&\left. \left( \frac{d}{2}\right) ^{r}\left( \frac{1}{\omega _1}+1\right) ^{r} +\widetilde{\mathcal { M}}_r^{max} \Vert F{\mathbf {K}}\varvec{\sigma }\Vert _\infty \right\} \\\le & {} \ \frac{{\mathcal {C}}}{m^r} \mathcal {N}_{r}(F,K)\left( \frac{d}{2}\right) ^{r}\left( \frac{1}{\omega _1}+1\right) ^{r}, \end{aligned}$$

where

$$\begin{aligned} \mathcal {N}_{r}(F,K)=\Vert F\varvec{\sigma }\Vert _\infty +\max _{h\in N_1^2}\max _{k\in N_0^r} \left( \left\| \frac{\partial ^{r-k} {\mathbf {K}}(\cdot ,\omega )}{(\partial x_h)^{r-k}}\right\| _{\varOmega ,\infty } \times \left\| \frac{\partial ^{k} F}{(\partial x_h)^{k}}\varphi _h^{r}\varvec{\sigma }\right\| _{\infty }\right) \end{aligned}$$

and the thesis follows. \(\square \)

Proof of Corollary 1

In order to use Theorem 2 with \(r=2m\), we have to estimate \(\mathcal {N}_{2m}(\ell _{r,s}^{w_1,w_2},{\mathbf {K}}).\) By iterating the weighted Bernstein inequality (see for instance [10, p.170])

$$\begin{aligned} \Vert (\ell _{r}^{w_1})^{(m-1)}\varphi _1^{m-1}\sigma _1\Vert _\infty \le {\mathcal {C}}\ m^{m-1}\Vert \ell _{r}^{w_1}\sigma _1\Vert _\infty \end{aligned}$$

and taking into account that under the hypotheses (15) [10, Th.4.3.3, p.274 and p.256]

$$\begin{aligned} \max _{|x|\le 1}\sum _{k=1}^m |\ell _k^{w_1}(x)|\frac{\sigma _1(x)}{\sigma _1(\xi _k^{w_1})}\le {\mathcal {C}}m^\mu , \end{aligned}$$

with \(\mu \) defined in (25), we can conclude

$$\begin{aligned} \Vert (\ell _{r}^{w_1})^{(m-1)}\varphi _1^{m-1}\sigma _1\Vert _\infty \le {\mathcal {C}}\ m^{m-1}\Vert \ell _{r}^{w_1}\sigma _1\Vert _\infty \le {\mathcal {C}}\ m^{m-1+\mu },\quad {\mathcal {C}}\ne {\mathcal {C}}(m). \end{aligned}$$

Hence,

$$\begin{aligned} \mathcal {N}_{2m}(\ell _{r,s}^{w_1,w_2},{\mathbf {K}}) \le {\mathcal {C}}\ m^{m-1+\mu } \max _{h\in N_1^2}\max _{k\in N_0^{m-1}} \left\| \frac{\partial ^{2m-k} {\mathbf {K}}(\cdot ,\omega )}{(\partial x_h)^{2m-k}}\right\| _{\varOmega ,\infty } \end{aligned}$$

and by (22) and using

$$\begin{aligned} \left( \frac{d}{2m}\right) ^{2m}\left( \frac{1}{\omega _1}+1\right) ^{2m}\le e^{-2m\log m\left( 1-\frac{\log (d/2)}{\log m}-\frac{1}{\omega _1\log m}\right) }\le \frac{1}{m^{2m}} \end{aligned}$$

for \(m>\frac{d}{4} e^\frac{1}{\omega _1}\), the thesis follows. \(\square \)