1 Introduction and Preliminaries

A summation formula, discovered independently by Leonhard Euler [18, 19] and Colin Maclaurin [35], plays an important role in the broad area of numerical analysis, analytic number theory, and approximation theory, as well as in many applications in other fields. This formula, today known as the Euler–Maclaurin summation formula,

$$\displaystyle\begin{array}{rcl} \sum _{k=0}^{n}f(k)& =& \int _{ 0}^{n}f(x)\,\mathrm{d}x + \frac{1} {2}(f(0) + f(n)) \\ & & \qquad \qquad +\sum _{ \nu =1}^{r} \frac{B_{2\nu }} {(2\nu )!}\left [f^{(2\nu -1)}(n) - f^{(2\nu -1)}(0)\right ] + E_{ r}(f),{}\end{array}$$
(1)

was first published by Euler in 1732 (without proof) in connection with the problem of determining the sum of the reciprocal squares,

$$\displaystyle{ 1 + \frac{1} {2^{2}} + \frac{1} {3^{2}} + \cdots \, }$$
(2)

which is known as the Basel problem. The brothers Johann and Jakob Bernoulli, as well as Leibniz, Stirling, and others, also dealt intensively with problems of this kind. In modern terminology, the sum (2) is the value of the zeta function at 2, where more generally

$$\displaystyle{\zeta (s) = 1 + \frac{1} {2^{s}} + \frac{1} {3^{s}} + \cdots \quad (s > 1).}$$

Although at that time the theory of infinite series was not yet rigorously founded, the very slow convergence of this series was observed; e.g., computing the sum directly with an accuracy of six decimal places requires taking into account at least the first million terms, because

$$\displaystyle{ \frac{1} {n + 1} <\sum _{ k=n+1}^{+\infty } \frac{1} {k^{2}} < \frac{1} {n}.}$$

Euler discovered a remarkable formula with much faster convergence,

$$\displaystyle{\zeta (2) =\log ^{2}2 +\sum _{ k=1}^{+\infty } \frac{1} {2^{k-1}k^{2}},}$$

and obtained the value ζ(2) = 1.644944 (with seven decimal digits). But the discovery of a general summation procedure (1) enabled Euler to calculate ζ(2) to 20 decimal places. For details see Gautschi [25, 26] and Varadarajan [61].
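A quick check of this acceleration in Mathematica (a minimal sketch; truncating the fast series at 50 terms is our own choice for illustration):

N[Log[2]^2 + Sum[1/(2^(k - 1) k^2), {k, 1, 50}], 20]  (* truncated fast series for zeta(2) *)
N[Pi^2/6, 20]                                         (* exact value, for comparison *)

The two values agree to more than 15 decimal digits, whereas the partial sums of (2) with the same number of terms give barely two correct digits.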

Using a generalized Newton identity for polynomials (when their degree tends to infinity), Euler [19] proved the exact result ζ(2) = π^2∕6. Using the same method he determined ζ(s) for even s = 2m up to 12,

$$\displaystyle{\zeta (4) = \frac{\pi ^{4}} {90},\ \ \zeta (6) = \frac{\pi ^{6}} {945},\ \ \zeta (8) = \frac{\pi ^{8}} {9450},\ \ \zeta (10) = \frac{\pi ^{10}} {93555},\ \ \zeta (12) = \frac{691\pi ^{12}} {638512875}.}$$

Sometime later, using his own partial fraction expansion of the cotangent function, Euler obtained the general formula

$$\displaystyle{\zeta (2\nu ) = (-1)^{\nu -1}\frac{2^{2\nu -1}B_{ 2\nu }} {(2\nu )!} \pi ^{2\nu },}$$

where B 2ν are the Bernoulli numbers, which appear in the general Euler–Maclaurin summation formula (1). Detailed information about Euler’s complete works can be found in The Euler Archive ( http://eulerarchive.maa.org).

We return now to the general Euler–Maclaurin summation formula (1), which holds for any \(n,r \in \mathbb{N}\) and f ∈ C 2r[0, n]. As we mentioned before, this formula was found independently by Maclaurin. While Euler applied formula (1) to the computation of slowly converging infinite series, Maclaurin used it to calculate integrals. A history of this formula was given by Barnes [5], and some details can be found in [3, 8, 25, 26, 38, 61].

The Bernoulli numbers B k (B 0 = 1, B 1 = −1∕2, B 2 = 1∕6, B 3 = 0, B 4 = −1∕30, …) can be expressed as values at zero of the corresponding Bernoulli polynomials, which are defined by the generating function

$$\displaystyle{ \frac{t\mathrm{e}^{xt}} {\mathrm{e}^{t} - 1} =\sum _{ k=0}^{+\infty }B_{ k}(x)\frac{t^{k}} {k!}.}$$

Similarly, Euler polynomials can be introduced by

$$\displaystyle{ \frac{2\mathrm{e}^{xt}} {\mathrm{e}^{t} + 1} =\sum _{ k=0}^{+\infty }E_{ k}(x)\frac{t^{k}} {k!}.}$$

Bernoulli and Euler polynomials play a role in numerical analysis and approximation theory similar to that of orthogonal polynomials. The first few Bernoulli polynomials are

$$\displaystyle\begin{array}{rcl} & & B_{0}(x) = 1,\ \ B_{1}(x) = x -\frac{1} {2},\ \ B_{2}(x) = x^{2} - x + \frac{1} {6},\ \ B_{3}(x) = x^{3} -\frac{3x^{2}} {2} + \frac{x} {2}, {}\\ & & B_{4}(x) = x^{4} - 2x^{3} + x^{2} - \frac{1} {30},\ \ B_{5}(x) = x^{5} -\frac{5x^{4}} {2} + \frac{5x^{3}} {3} -\frac{x} {6},\ \ \mbox{ etc.} {}\\ \end{array}$$

Some interesting properties of these polynomials are

$$\displaystyle{B'_{n}(x) = nB_{n-1}(x),\ \ B_{n}(1 - x) = (-1)^{n}B_{ n}(x),\ \ \int _{0}^{1}B_{ n}(x)\,\mathrm{d}x = 0\ \ (n \in \mathbb{N}).}$$

The error term E r (f) in (1) can be expressed in the form (cf. [8])

$$\displaystyle{E_{r}(f) = (-1)^{r}\sum _{ k=1}^{+\infty }\int _{ 0}^{n}\frac{\mathrm{e}^{\mathrm{i}2\pi kx} + \mathrm{e}^{-\mathrm{i}2\pi kx}} {(2\pi k)^{2r}} f^{(2r)}(x)\,\mathrm{d}x,}$$

or in the form

$$\displaystyle{ E_{r}(f) = -\int _{0}^{n}\frac{B_{2r}(x -\lfloor x\rfloor )} {(2r)!} f^{(2r)}(x)\,\mathrm{d}x, }$$
(3)

where ⌊x⌋ denotes the largest integer that is not greater than x. Supposing f ∈ C 2r+1[0, n], after an integration by parts in (3) and recalling that the odd Bernoulli numbers are zero, we get (cf. [28, p. 455])

$$\displaystyle{ E_{r}(f) =\int _{ 0}^{n}\frac{B_{2r+1}(x -\lfloor x\rfloor )} {(2r + 1)!} f^{(2r+1)}(x)\,\mathrm{d}x. }$$
(4)

If f ∈ C 2r+2[0, n], using Darboux’s formula one can obtain (1), with

$$\displaystyle{ E_{r}(f) = \frac{1} {(2r + 2)!}\int _{0}^{1}\left [B_{ 2r+2} - B_{2r+2}(x)\right ]{\biggl (\sum _{k=0}^{n-1}f^{(2r+2)}(k + x)\biggr )}\,\mathrm{d}x }$$
(5)

(cf. Whittaker and Watson [65, p. 128]). This expression for E r (f) can also be derived from (4), writing it in the form

$$\displaystyle\begin{array}{rcl} E_{r}(f)& =& \int _{0}^{1}\frac{B_{2r+1}(x)} {(2r + 1)!} {\biggl (\sum _{k=0}^{n-1}f^{(2r+1)}(k + x)\biggr )}\,\mathrm{d}x {}\\ & =& \int _{0}^{1}\frac{B'_{2r+2}(x)} {(2r + 2)!} {\biggl (\sum _{k=0}^{n-1}f^{(2r+1)}(k + x)\biggr )}\,\mathrm{d}x, {}\\ \end{array}$$

and then by an integration by parts,

$$\displaystyle\begin{array}{rcl} E_{r}(f)& =& \left [\frac{B_{2r+2}(x)} {(2r + 2)!} {\biggl (\sum _{k=0}^{n-1}f^{(2r+1)}(k + x)\biggr )}\right ]_{ 0}^{1} {}\\ & & \qquad \qquad -\int _{0}^{1}\frac{B_{2r+2}(x)} {(2r + 2)!} {\biggl (\sum _{k=0}^{n-1}f^{(2r+2)}(k + x)\biggr )}\,\mathrm{d}x. {}\\ \end{array}$$

Because of B 2r+2(1) = B 2r+2(0) = B 2r+2, the last expression can be represented in the form (5).

Since

$$\displaystyle{(-1)^{r}\left [B_{ 2r+2} - B_{2r+2}(x)\right ] \geq 0,\quad x \in [0,1],}$$

and

$$\displaystyle{\int _{0}^{1}\left [B_{ 2r+2} - B_{2r+2}(x)\right ]\,\mathrm{d}x = B_{2r+2},}$$

according to the Second Mean Value Theorem for Integrals, there exists η ∈ (0, 1) such that

$$\displaystyle{ E_{r}(f) = \frac{B_{2r+2}} {(2r + 2)!}{\biggl (\sum _{k=0}^{n-1}f^{(2r+2)}(k+\eta )\biggr )} = \frac{nB_{2r+2}} {(2r + 2)!}f^{(2r+2)}(\xi ),\quad 0 <\xi < n. }$$
(6)

The Euler–Maclaurin summation formula can be considered on an arbitrary interval (a, b) instead of (0, n). Namely, taking h = (b − a)∕n, t = a + xh, and f(x) = f((t − a)∕h) = φ(t), formula (1) reduces to

$$\displaystyle\begin{array}{rcl} h\sum _{k=0}^{n}\varphi (a + kh)& =& \int _{ a}^{b}\varphi (t)\,\mathrm{d}t + \frac{h} {2}\left [\varphi (a) +\varphi (b)\right ] \\ & & +\sum _{\nu =1}^{r}\frac{B_{2\nu }h^{2\nu }} {(2\nu )!} \left [\varphi ^{(2\nu -1)}(b) -\varphi ^{(2\nu -1)}(a)\right ] + E_{ r}(\varphi ),{}\end{array}$$
(7)

where, according to (6),

$$\displaystyle{ E_{r}(\varphi ) = (b - a)\frac{B_{2r+2}h^{2r+2}} {(2r + 2)!} \varphi ^{(2r+2)}(\xi ),\quad a <\xi < b. }$$
(8)

Remark 1.

An approach to estimating the remainder term of the Euler–Maclaurin formula was given by Ostrowski [47].

Remark 2.

The Euler–Maclaurin summation formula is implemented in Mathematica as the function NSum with option Method -> Integrate.
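A minimal usage sketch along the lines of this remark, again with the Basel series (2) as a test case (the option is written exactly as quoted above; newer versions of Mathematica may expose the same method under a different name):

NSum[1/k^2, {k, 1, Infinity}, Method -> Integrate]  (* Euler-Maclaurin based summation *)
N[Pi^2/6]                                           (* exact value, for comparison *)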

2 Connections Between Euler–Maclaurin Summation Formula and Some Basic Quadrature Rules of Newton–Cotes Type

In this section we first show a direct connection between the Euler–Maclaurin summation formula (1) and the well-known composite trapezoidal rule,

$$\displaystyle{ T_{n}f:=\sum _{ k=0}^{n}\!\!^{{\prime\prime}}f(k) = \frac{1} {2}f(0) +\sum _{ k=1}^{n-1}f(k) + \frac{1} {2}f(n), }$$
(9)

for calculating the integral

$$\displaystyle{ I_{n}f:=\int _{ 0}^{n}f(x)\,\mathrm{d}x. }$$
(10)

This rule for integrals over an arbitrary interval [a, b] can be presented in the form

$$\displaystyle{ h\sum _{k=0}^{n}\!\!^{{\prime\prime}}\varphi (a + kh) =\int _{ a}^{b}\varphi (t)\,\mathrm{d}t + E^{T}(\varphi ), }$$
(11)

where, as before, the sign ″ denotes summation with the first and last terms halved, h = (b − a)∕n, and E T(φ) is the remainder term.

Remark 3.

In general, the sequence of composite trapezoidal sums converges very slowly under step refinement, because | E T(φ) |  = O(h^2). However, the trapezoidal rule is very attractive for the numerical integration of analytic and periodic functions, for which φ(t + b − a) = φ(t). In that case, the sequence of trapezoidal sums

$$\displaystyle{ T_{n}(\varphi;h):= h\sum _{k=0}^{n}\!\!^{{\prime\prime}}\varphi (a + kh) = h\sum _{ k=1}^{n}\varphi (a + kh) }$$
(12)

converges geometrically when applied to analytic functions on periodic intervals or the real line. A nice survey of this subject, including the history of this phenomenon, has recently been given by Trefethen and Weideman [59] (see also [64]). For example, if φ is a (b − a)-periodic analytic function such that | φ(z) | ≤ M in the half-plane Im z > −c for some c > 0, then for each n ≥ 1 the following estimate

$$\displaystyle{\vert E^{T}(\varphi )\vert =\Big \vert T_{ n}(\varphi;h) -\int _{a}^{b}\varphi (t)\,\mathrm{d}t\Big\vert \leq \frac{(b - a)M} {\mathrm{e}^{2\pi cn/(b-a)} - 1}}$$

holds. A similar result holds for integrals over \(\mathbb{R}\).
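As a small illustration of this geometric convergence (a sketch with our own test function; φ(t) = 1∕(2 + cos 2πt) is 1-periodic and analytic, and its integral over [0, 1] equals 1∕√3):

phi[t_] := 1/(2 + Cos[2 Pi t]);
exact = 1/Sqrt[3];                                   (* Integrate[phi[t], {t, 0, 1}] *)
Table[{n, N[Sum[phi[k/n], {k, 1, n}]/n - exact, 5]}, {n, 4, 20, 4}]
(* the error decreases geometrically: by more than two orders of magnitude for each increase of n by 4 *)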

It is well known that there are certain types of integrals which can be transformed (by changing the variable of integration) to a form suitable for the trapezoidal rule. Such transformations are known as exponential and double exponential quadrature rules (cf. [44–46, 57, 58]). However, the use of these transformations could introduce new singularities in the integrand, and the strip of analyticity may be lost. A nice discussion concerning the error theory of the trapezoidal rule, including several examples, has recently been given by Waldvogel [63].

Remark 4.

In 1990 Rahman and Schmeisser [51] gave a specification of spaces of functions for which the trapezoidal rule converges at a prescribed rate as n → +∞, establishing a correspondence between the speed of convergence and regularity properties of the integrands. Some examples for these spaces were provided in [64].

In a general case, according to (1), it is clear that

$$\displaystyle{ T_{n}f - I_{n}f =\sum _{ \nu =1}^{r} \frac{B_{2\nu }} {(2\nu )!}\left [f^{(2\nu -1)}(n) - f^{(2\nu -1)}(0)\right ] + E_{ r}^{T}(f), }$$
(13)

where T n f and I n f are given by (9) and (10), respectively, and the remainder term E r T(f) is given by (6) for functions f ∈ C 2r+2[0, n].

Similarly, because of (7), the corresponding formula on the interval [a, b] is

$$\displaystyle{h\sum _{k=0}^{n}\!\!^{{\prime\prime}}\varphi (a + kh) -\int _{ a}^{b}\varphi (t)\,\mathrm{d}t =\sum _{ \nu =1}^{r}\frac{B_{2\nu }h^{2\nu }} {(2\nu )!} \left [\varphi ^{(2\nu -1)}(b) -\varphi ^{(2\nu -1)}(a)\right ] + E_{ r}^{T}(\varphi ),}$$

where E r T(φ) is the corresponding remainder given by (8). Comparing this with (11) we see that E T(φ) = E 0 T(φ).

Notice that if φ (2r+2)(x) does not change its sign on (a, b), then E r T(φ) has the same sign as the first neglected term. Otherwise, when φ (2r+2)(x) is not of constant sign on (a, b), then it can be proved that (cf. [14, p. 299])

$$\displaystyle{\vert E_{r}^{T}(\varphi )\vert \leq h^{2r+2} \frac{\vert 2B_{2r+2}\vert } {(2r + 2)!}\int _{a}^{b}\vert \varphi ^{(2r+2)}(t)\vert \,\mathrm{d}t \approx 2{\Bigl (\frac{h} {2\pi }\Bigr )}^{2r+2}\int _{ a}^{b}\vert \varphi ^{(2r+2)}(t)\vert \,\mathrm{d}t,}$$

i.e., | E r T(φ) |  = O(h^{2r+2}). Supposing that ∫_a^{+∞} | φ (2r+2)(t) |  dt < +∞, this also holds in the limit case as b → +∞. This limit case enables applications of the Euler–Maclaurin formula to the summation of infinite series, as well as to obtaining asymptotic formulas for large b.

A standard application of the Euler–Maclaurin formula is in numerical integration. Namely, for a small step h, the trapezoidal sum can be dramatically improved by subtracting appropriate terms involving the values of derivatives at the endpoints a and b. In this way, the corresponding approximations of the integral can be improved to O(h^4), O(h^6), etc.
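A minimal sketch of this effect, with our own toy example φ(t) = e^t on [0, 1]; the two corrections are just the ν = 1 and ν = 2 terms of (7) moved to the other side:

phi[t_] := Exp[t]; a = 0; b = 1; n = 16; h = (b - a)/n;
trap = h (Total[Table[phi[a + k h], {k, 1, n - 1}]] + (phi[a] + phi[b])/2);
corr2 = trap - h^2/12 (phi'[b] - phi'[a]);          (* B_2/2! = 1/12 *)
corr4 = corr2 + h^4/720 (phi'''[b] - phi'''[a]);    (* B_4/4! = -1/720 *)
exact = Exp[1] - 1;
N[{trap - exact, corr2 - exact, corr4 - exact}]     (* errors of order h^2, h^4 and h^6, respectively *)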

Remark 5.

Rahman and Schmeisser [52] obtained generalizations of the trapezoidal rule and the Euler–Maclaurin formula and used them for constructing quadrature formulas for functions of exponential type over infinite intervals using holomorphic functions of exponential type in the right half-plane, or in a vertical strip, or in the whole plane. They also determined conditions which equate the existence of the improper integral to the convergence of its approximating series.

Remark 6.

In this connection an interesting question can be asked: what happens if the function \(\varphi \in C^{\infty }(\mathbb{R})\) and its derivatives are (b − a)-periodic, i.e., φ (2ν−1)(a) = φ (2ν−1)(b), ν = 1, 2, …? The conclusion that T n (φ; h) must then be exactly equal to ∫_a^b φ(t) dt is wrong; the correct conclusion is that E T(φ) decreases faster than any finite power of h as n tends to infinity.

Remark 7.

Also, the Euler–Maclaurin formula was used to derive the well-known extrapolation method of Romberg integration (cf. [14, pp. 302–308 and 546–523] and [39, pp. 158–164]).

In the sequel, we consider a quadrature sum with values of the function f at the points \(x = k + \frac{1} {2}\), k = 0, 1, …, n − 1, i.e., the so-called midpoint rule

$$\displaystyle{M_{n}f:=\sum _{ k=0}^{n-1}f{\Bigl (k + \frac{1} {2}\Bigr )}.}$$

Also, for this rule there exists the so-called second Euler–Maclaurin summation formula

$$\displaystyle{ M_{n}f - I_{n}f =\sum _{ \nu =1}^{r}\frac{(2^{1-2\nu } - 1)B_{ 2\nu }} {(2\nu )!} \left [f^{(2\nu -1)}(n) - f^{(2\nu -1)}(0)\right ] + E_{ r}^{M}(f), }$$
(14)

for which

$$\displaystyle{E_{r}^{M}(f) = n\,\frac{(2^{-1-2r} - 1)B_{ 2r+2}} {(2r + 2)!} f^{(2r+2)}(\xi ),\qquad 0 <\xi < n,}$$

when f ∈ C 2r+2[0, n] (cf. [39, p. 157]). As before, I n f is given by (10).

Both formulas, (13) and (14), can be unified as

$$\displaystyle{Q_{n}f - I_{n}f =\sum _{ \nu =1}^{r}\frac{B_{2\nu }(\tau )} {(2\nu )!} \left [f^{(2\nu -1)}(n) - f^{(2\nu -1)}(0)\right ] + E_{ r}^{Q}(f),}$$

where τ = 0 for Q n  ≡ T n and τ = 1∕2 for Q n  ≡ M n . This holds because of the fact that [50, p. 765] (see also [10])

$$\displaystyle{B_{\nu }(0) = B_{\nu }\quad \mbox{ and}\quad B_{\nu }{\Bigl (\frac{1} {2}\Bigr )} = (2^{1-\nu }- 1)B_{\nu }.}$$

If we take a combination of T n f and M n f as

$$\displaystyle{Q_{n}f = S_{n}f = \frac{1} {3}(T_{n}f + 2M_{n}f),}$$

which is, in fact, the well-known classical composite Simpson rule,

$$\displaystyle{S_{n}f:= \frac{1} {3}\left [\frac{1} {2}f(0) +\sum _{ k=1}^{n-1}f(k) + 2\sum _{ k=0}^{n-1}f{\Bigl (k + \frac{1} {2}\Bigr )} + \frac{1} {2}f(n)\right ],}$$

we obtain

$$\displaystyle{ S_{n}f - I_{n}f =\sum _{ \nu =2}^{r}\frac{(4^{1-\nu }- 1)B_{ 2\nu }} {3(2\nu )!} \left [f^{(2\nu -1)}(n) - f^{(2\nu -1)}(0)\right ] + E_{ r}^{S}(f). }$$
(15)

Notice that the summation on the right-hand side in the previous equality starts with ν = 2, because the term for ν = 1 vanishes. For f ∈ C 2r+2[0, n] it can be proved that there exists ξ ∈ (0, n), such that

$$\displaystyle{E_{r}^{S}(f) = n\,\frac{(4^{-r} - 1)B_{ 2r+2}} {3(2r + 2)!} f^{(2r+2)}(\xi ).}$$
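For instance, the leading term ν = 2 of (15) has the coefficient (4^{−1} − 1)B 4∕(3 ⋅ 4!) = 1∕2880, and a small check with our own toy example f(x) = e^x, n = 4, reads:

f[x_] := Exp[x]; n = 4;
Sn = 1/3 (f[0]/2 + Sum[f[k], {k, 1, n - 1}] + 2 Sum[f[k + 1/2], {k, 0, n - 1}] + f[n]/2);
N[{Sn - Integrate[f[x], {x, 0, n}], (f'''[n] - f'''[0])/2880}]
(* the leading term already accounts for most of the difference S_n f - I_n f *)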

For some modifications and generalizations of the Euler–Maclaurin formula, see [2, 7, 20–22, 37, 55, 60]. In 1965 Kalinin [29] derived an analogue of the Euler–Maclaurin formula for C ∞ functions which have a Taylor series expansion at each positive integer x = ν,

$$\displaystyle{\int _{a}^{b}f(x)\,\mathrm{d}x =\sum _{ k=0}^{+\infty }\frac{\theta ^{k+1} - (\theta -1)^{k+1}} {(k + 1)!} h^{k+1}\sum _{ \nu =1}^{n}f^{(k)}(a + (\nu -\theta )h),}$$

where h = (b − a)∕n, and used it to find some new expansions for the gamma function, the ψ function, as well as the Riemann zeta function.

Using Bernoulli and Euler polynomials, B n (x) and E n (x), in 1960 Keda [30] established a quadrature formula similar to the Euler–Maclaurin formula,

$$\displaystyle{\int _{0}^{1}f(x)\,\mathrm{d}x = T_{ n} +\sum _{ k=0}^{n-1}A_{ k}\left [f^{(2k+2)}(0) + f^{(2k+2)}(1)\right ] + R_{ n},}$$

where

$$\displaystyle{T_{n} = \frac{1} {n}\sum _{k=0}^{n}\!\!^{{\prime\prime}}f{\Bigl (\frac{k} {n}\Bigr )},\quad A_{k} =\sum _{ \nu =1}^{2k+2} \frac{B_{\nu }E_{2k+3-\nu }} {\nu !(2k + 3-\nu )!n^{\nu }}\quad (k = 0,1,\ldots,n - 1),}$$

and

$$\displaystyle{R_{n} = f^{(2n+2)}(\xi )\sum _{ m=1}^{n+1} \frac{2B_{2m}E_{2n-2m+3}} {(2m)!(2n - 2m + 3)!n^{2m}}\quad (0 \leq \xi \leq 1)}$$

for f ∈ C 2n+2[0, 1], where B n  = B n (0) and E n  = E n (0). The convergence of Euler–Maclaurin quadrature formulas on a class of smooth functions was considered by Vaskevič [62].

Some periodic analogues of the Euler–Maclaurin formula with applications to number theory have been developed by Berndt and Schoenfeld [6]. In the last section of [6], they showed how the composite Newton–Cotes quadrature formulas (Simpson’s parabolic and Simpson’s three-eighths rules), as well as various other quadratures (e.g., Weddle’s composite rule), can be derived from special cases of their periodic Euler–Maclaurin formula, including explicit formulas for the remainder term.

3 Euler–Maclaurin Formula Based on the Composite Gauss–Legendre Rule and Its Lobatto Modification

In the papers [15, 48, 56], the authors considered generalizations of the Euler–Maclaurin formula for some particular Newton–Cotes rules, as well as for 2- and 3-point Gauss–Legendre and Lobatto formulas (see also [4, 17, 33, 34]).

Recently, in [40], we obtained extensions of the Euler–Maclaurin formula by replacing the quadrature sum Q n by the composite shifted Gauss–Legendre formula, as well as by its Lobatto modification. In these cases, several special rules have been obtained by using the Mathematica package OrthogonalPolynomials (cf. [9, 43]). Some details on the construction of orthogonal polynomials and quadratures of Gaussian type will be given in Sect. 5.

We denote the space of all algebraic polynomials defined on \(\mathbb{R}\) (or on some subset of it) by \(\mathcal{P}\), and by \(\mathcal{P}_{m} \subset \mathcal{P}\) the space of polynomials of degree at most m \((m \in \mathbb{N})\).

Let w ν  = w ν G and τ ν  = τ ν G, ν = 1, …, m, be the weights (Christoffel numbers) and nodes of the Gauss–Legendre quadrature formula on [0, 1],

$$\displaystyle{ \int _{0}^{1}f(x)\,\mathrm{d}x =\sum _{ \nu =1}^{m}w_{\nu }^{G}f(\tau _{\nu }^{G}) + R_{ m}^{G}(f). }$$
(16)

Note that the nodes τ ν are zeros of the shifted (monic) Legendre polynomial

$$\displaystyle{\pi _{m}(x) ={\Bigl ({ 2m \atop m} \Bigr )}^{-1}P_{ m}(2x - 1).}$$

Its degree of algebraic precision is d = 2m − 1, i.e., R m G(f) = 0 for each \(f \in \mathcal{P}_{2m-1}\). We denote the quadrature sum in (16) by Q m G f, i.e.,

$$\displaystyle{Q_{m}^{G}f =\sum _{ \nu =1}^{m}w_{\nu }^{G}f(\tau _{\nu }^{G}).}$$

The corresponding composite Gauss–Legendre sum for approximating the integral I n f, given by (10), can be expressed in the form

$$\displaystyle{ G_{m}^{(n)}f =\sum _{ k=0}^{n-1}Q_{ m}^{G}f(k + \cdot ) =\sum _{ \nu =1}^{m}w_{\nu }^{G}\sum _{ k=0}^{n-1}f(k +\tau _{ \nu }^{G}). }$$
(17)

In the sequel we use the following expansion of a function f ∈ C s[0, 1] in Bernoulli polynomials for any x ∈ [0, 1] (see Krylov [31, p. 15])

$$\displaystyle{ f(x) =\int _{ 0}^{1}f(t)\,\mathrm{d}t+\sum _{ j=1}^{s-1}\frac{B_{j}(x)} {j!} \left [f^{(j-1)}(1) - f^{(j-1)}(0)\right ]-\frac{1} {s!}\int _{0}^{1}f^{(s)}(t)L_{ s}(x,t)\,\mathrm{d}t, }$$
(18)

where L s (x, t) = B s ∗(x − t) − B s (x) and B s ∗(x) is a function of period one, defined by

$$\displaystyle{ B_{s}^{{\ast}}(x) = B_{ s}(x),\quad 0 \leq x < 1,\quad B_{s}^{{\ast}}(x + 1) = B_{ s}^{{\ast}}(x). }$$
(19)

Notice that B 0 ∗(x) = 1, B 1 ∗(x) is a discontinuous function with a jump of − 1 at each integer, and B s ∗(x), s > 1, is a continuous function.

Suppose now that f ∈ C 2r[0, n], where r ≥ m. Since all the nodes τ ν  = τ ν G, ν = 1, …, m, of the Gaussian rule (16) belong to (0, 1), using the expansion (18), with x = τ ν and s = 2r + 1, we have

$$\displaystyle\begin{array}{rcl} f(\tau _{\nu })& =& I_{1}f +\sum _{ j=1}^{2r}\frac{B_{j}(\tau _{\nu })} {j!} \left [f^{(j-1)}(1) - f^{(j-1)}(0)\right ] {}\\ & & \qquad \qquad - \frac{1} {(2r + 1)!}\int _{0}^{1}f^{(2r+1)}(t)L_{ 2r+1}(\tau _{\nu },t)\,\mathrm{d}t, {}\\ \end{array}$$

where I 1 f = ∫_0^1 f(t) dt.

Now, if we multiply it by w ν  = w ν G and then sum in ν from 1 to m, we obtain

$$\displaystyle\begin{array}{rcl} \sum _{\nu =1}^{m}w_{\nu }f(\tau _{\nu })& =& {\biggl (\sum _{\nu =1}^{m}w_{\nu }\biggr )}I_{ 1}f +\sum _{ j=1}^{2r} \frac{1} {j!}{\biggl (\sum _{\nu =1}^{m}w_{\nu }B_{ j}(\tau _{\nu })\biggr )}\left [f^{(j-1)}(1) - f^{(j-1)}(0)\right ] {}\\ & & \qquad \qquad \ \ \quad - \frac{1} {(2r + 1)!}\int _{0}^{1}f^{(2r+1)}(t){\biggl (\sum _{\nu =1}^{m}w_{\nu }L_{ 2r+1}(\tau _{\nu },t)\biggr )}\,\mathrm{d}t, {}\\ \end{array}$$

i.e.,

$$\displaystyle{Q_{m}^{G}f = Q_{ m}^{G}(1)\int _{ 0}^{1}f(t)\,\mathrm{d}t+\sum _{ j=1}^{2r}\frac{Q_{m}^{G}(B_{ j})} {j!} \left [f^{(j-1)}(1) - f^{(j-1)}(0)\right ]+E_{ m,r}^{G}(f),}$$

where

$$\displaystyle{E_{m,r}^{G}(f) = - \frac{1} {(2r + 1)!}\int _{0}^{1}f^{(2r+1)}(t)Q_{ m}^{G}\left (L_{ 2r+1}(\,\cdot,t)\right )\,\mathrm{d}t.}$$

Since

$$\displaystyle{\int _{0}^{1}B_{ j}(x)\,\mathrm{d}x = \left \{\begin{array}{ll} 1,\quad &j = 0, \\ 0, &j \geq 1, \end{array} \right.}$$

and

$$\displaystyle{Q_{m}^{G}(B_{ j}) =\sum _{ \nu =1}^{m}w_{\nu }B_{ j}(\tau _{\nu }) = \left \{\begin{array}{ll} 1,\quad &j = 0, \\ 0, &1 \leq j \leq 2m - 1, \end{array} \right.}$$

because the Gauss–Legendre formula is exact for all algebraic polynomials of degree at most 2m − 1, the previous formula becomes

$$\displaystyle{ Q_{m}^{G}f -\int _{ 0}^{1}f(t)\,\mathrm{d}t =\sum _{ j=2m}^{2r}\frac{Q_{m}^{G}(B_{ j})} {j!} \left [f^{(j-1)}(1) - f^{(j-1)}(0)\right ] + E_{ m,r}^{G}(f). }$$
(20)

Notice that for Gauss–Legendre nodes and the corresponding weights the following equalities

$$\displaystyle{\tau _{\nu } +\tau _{m-\nu +1} = 1,\ \ w_{\nu } = w_{m-\nu +1} > 0,\ \ \nu = 1,\ldots,m,}$$

hold, as well as

$$\displaystyle{w_{\nu }B_{j}(\tau _{\nu }) + w_{m-\nu +1}B_{j}(\tau _{m-\nu +1}) = w_{\nu }B_{j}(\tau _{\nu })(1 + (-1)^{j}),}$$

which is equal to zero for odd j. Also, if m is odd, then τ (m+1)∕2 = 1∕2 and B j (1∕2) = 0 for each odd j. Thus, the quadrature sum

$$\displaystyle{Q_{m}^{G}(B_{ j}) =\sum _{ \nu =1}^{m}w_{\nu }B_{ j}(\tau _{\nu }) = 0}$$

for odd j, so that (20) becomes

$$\displaystyle{ Q_{m}^{G}f -\int _{ 0}^{1}f(t)\,\mathrm{d}t =\sum _{ j=m}^{r}\frac{Q_{m}^{G}(B_{ 2j})} {(2j)!} \left [f^{(2j-1)}(1) - f^{(2j-1)}(0)\right ] + E_{ m,r}^{G}(f). }$$
(21)

Consider now the error of the (shifted) composite Gauss–Legendre formula (17). It is easy to see that

$$\displaystyle\begin{array}{rcl} G_{m}^{(n)}f - I_{ n}f& =& \sum _{k=0}^{n-1}\left [Q_{ m}^{G}f(k + \cdot \,) -\int _{ k}^{k+1}f(t)\,\mathrm{d}t\right ] {}\\ & =& \sum _{k=0}^{n-1}\left [Q_{ m}^{G}f(k + \cdot \,) -\int _{ 0}^{1}f(k + x)\,\mathrm{d}x\right ]. {}\\ \end{array}$$

Then, using (21) we obtain

$$\displaystyle\begin{array}{rcl} G_{m}^{(n)}f - I_{ n}f& =& \sum _{k=0}^{n-1}\left \{\sum _{ j=m}^{r}\frac{Q_{m}^{G}(B_{ 2j})} {(2j)!} \left [f^{(2j-1)}(k + 1) - f^{(2j-1)}(k)\right ] + E_{ m,r}^{G}(f(k + \cdot \,))\right \} {}\\ & =& \sum _{j=m}^{r}\frac{Q_{m}^{G}(B_{ 2j})} {(2j)!} \left [f^{(2j-1)}(n) - f^{(2j-1)}(0)\right ] + E_{ n,m,r}^{G}(f), {}\\ \end{array}$$

where E n, m, r G(f) is given by

$$\displaystyle{ E_{n,m,r}^{G}(f) = - \frac{1} {(2r + 1)!}\int _{0}^{1}{\biggl (\sum _{ k=0}^{n-1}f^{(2r+1)}(k + t)\biggr )}Q_{ m}^{G}\left (L_{ 2r+1}(\,\cdot,t)\right )\,\mathrm{d}t. }$$
(22)

Since L 2r+1(x, t) = B 2r+1 ∗(x − t) − B 2r+1 (x) and

$$\displaystyle{B_{2r+1}^{{\ast}}(\tau _{\nu }) = B_{ 2r+1}(\tau _{\nu }),\quad B_{2r+1}^{{\ast}}(\tau _{\nu } - t) = - \frac{1} {2r + 2} \frac{\,\mathrm{d}} {\,\mathrm{d}t}B_{2r+2}^{{\ast}}(\tau _{\nu } - t),}$$

we have

$$\displaystyle\begin{array}{rcl} Q_{m}^{G}\left (L_{ 2r+1}(\,\cdot,t)\right )& =& Q_{m}^{G}\left (B_{ 2r+1}^{{\ast}}(\,\cdot - t)\right ) - Q_{ m}^{G}\left (B_{ 2r+1}^{{\ast}}(\,\cdot \,)\right ) {}\\ & =& - \frac{1} {2r + 2}\,Q_{m}^{G}\left ( \frac{\,\mathrm{d}} {\,\mathrm{d}t}B_{2r+2}^{{\ast}}(\,\cdot - t)\right ), {}\\ \end{array}$$

because \(Q_{m}^{G}\left (B_{2r+1}(\,\cdot \,)\right ) = 0\). Then for (22) we get

$$\displaystyle{(2r + 2)!E_{n,m,r}^{G}(f) =\int _{ 0}^{1}{\biggl (\sum _{ k=0}^{n-1}f^{(2r+1)}(k + t)\biggr )}Q_{ m}^{G}\left ( \frac{\,\mathrm{d}} {\,\mathrm{d}t}B_{2r+2}^{{\ast}}(\,\cdot - t)\right )\,\mathrm{d}t.}$$

By using an integration by parts, it reduces to

$$\displaystyle{(2r+2)!E_{n,m,r}^{G}(f) = F(t)Q_{ m}^{G}\left (B_{ 2r+2}^{{\ast}}(\,\cdot - t)\right )\Big\vert _{ 0}^{1}-\int _{ 0}^{1}Q_{ m}^{G}\left (B_{ 2r+2}^{{\ast}}(\,\cdot - t)\right )F'(t)\,\mathrm{d}t,}$$

where F(t) is introduced in the following way

$$\displaystyle{F(t) =\sum _{ k=0}^{n-1}f^{(2r+1)}(k + t).}$$

Since B 2r+2 ∗(τ ν − 1) = B 2r+2 ∗(τ ν ) = B 2r+2(τ ν ), we have

$$\displaystyle\begin{array}{rcl} F(t)Q_{m}^{G}\left (B_{ 2r+2}^{{\ast}}(\,\cdot - t)\right )\Big\vert _{ 0}^{1}& =& {\bigl (F(1) - F(0)\bigr )}Q_{ m}^{G}\left (B_{ 2r+2}^{{\ast}}(\,\cdot \,)\right ) {}\\ & =& Q_{m}^{G}\left (B_{ 2r+2}(\,\cdot \,)\right )\int _{0}^{1}F'(t)\,\mathrm{d}t, {}\\ \end{array}$$

so that

$$\displaystyle{(2r + 2)!E_{n,m,r}^{G}(f) =\int _{ 0}^{1}\left [Q_{ m}^{G}\left (B_{ 2r+2}(\,\cdot \,)\right ) - Q_{m}^{G}\left (B_{ 2r+2}^{{\ast}}(\,\cdot - t)\right )\right ]F'(t)\,\mathrm{d}t.}$$

Since

$$\displaystyle{ g_{m,r}^{G}(t):= (-1)^{r-m}Q_{ m}^{G}\left [B_{ 2r+2}(\,\cdot \,) - B_{2r+2}^{{\ast}}(\,\cdot - t)\right ] > 0,\qquad 0 < t < 1, }$$
(23)

there exists an η ∈ (0, 1) such that

$$\displaystyle{(2r + 2)!E_{n,m,r}^{G}(f) = F'(\eta )\int _{ 0}^{1}Q_{ m}^{G}\left [B_{ 2r+2}(\,\cdot \,) - B_{2r+2}^{{\ast}}(\,\cdot - t)\right ]\,\mathrm{d}t.}$$

Typical graphs of the functions t ↦ g m, r G(t) for some selected values of r ≥ m ≥ 1 are presented in Fig. 1.

Fig. 1 Graphs of t ↦ g m, r G(t), r = m (solid line), r = m + 1 (dashed line), and r = m + 2 (dotted line), when m = 1, m = 2 (top), and m = 3, m = 4 (bottom)

Because of continuity of f (2r+2) on [0, n] we conclude that there exists also ξ ∈ (0, n) such that F′(η) = nf (2r+2)(ξ).

Finally, because of \(\int _{0}^{1}Q_{m}^{G}\left [B_{2r+2}^{{\ast}}(\,\cdot - t)\right ]\,\mathrm{d}t = 0\), we obtain that

$$\displaystyle{(2r + 2)!E_{n,m,r}^{G}(f) = nf^{(2r+2)}(\xi )\int _{ 0}^{1}Q_{ m}^{G}\left [B_{ 2r+2}(\,\cdot \,)\right ]\,\mathrm{d}t.}$$

In this way, we have just proved the Euler–Maclaurin formula for the composite Gauss–Legendre rule (17) for approximating the integral I n f, given by (10) (see [40]):

Theorem 1.

For \(n,m,r \in \mathbb{N}\) (m ≤ r) and f ∈ C 2r [0,n] we have

$$\displaystyle{ G_{m}^{(n)}f - I_{ n}f =\sum _{ j=m}^{r}\frac{Q_{m}^{G}(B_{ 2j})} {(2j)!} \left [f^{(2j-1)}(n) - f^{(2j-1)}(0)\right ] + E_{ n,m,r}^{G}(f), }$$
(24)

where G m (n) f is given by (17) , and Q m G B 2j denotes the basic Gauss–Legendre quadrature sum applied to the Bernoulli polynomial x ↦ B 2j (x), i.e.,

$$\displaystyle{ Q_{m}^{G}(B_{ 2j}) =\sum _{ \nu =1}^{m}w_{\nu }^{G}B_{ 2j}(\tau _{\nu }^{G}) = -R_{ m}^{G}(B_{ 2j}), }$$
(25)

where R m G (f) is the remainder term in (16).

If f ∈ C 2r+2 [0,n], then there exists ξ ∈ (0,n), such that the error term in (24) can be expressed in the form

$$\displaystyle{ E_{n,m,r}^{G}(f) = n\,\frac{Q_{m}^{G}(B_{ 2r+2})} {(2r + 2)!} f^{(2r+2)}(\xi ). }$$
(26)

We consider now special cases of the formula (24) for some typical values of m. For a given m, by G (m) we denote the sequence of coefficients which appear in the sum on the right-hand side in (24), i.e.,

$$\displaystyle{G^{(m)} ={\bigl \{ Q_{ m}^{G}(B_{ 2j})\bigr \}}_{j=m}^{\infty } = \left \{Q_{ m}^{G}(B_{ 2m}),Q_{m}^{G}(B_{ 2m+2}),Q_{m}^{G}(B_{ 2m+4}),\ldots \right \}.}$$

These Gaussian sums can be calculated very easily by using the Mathematica package OrthogonalPolynomials (cf. [9, 43]). In the sequel we mention the cases 1 ≤ m ≤ 6.
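Since each coefficient Q m G(B 2j) is simply a Gauss–Legendre sum of a Bernoulli polynomial, the entries below can also be reproduced without the package; a minimal sketch using only the standard add-on package NumericalDifferentialEquationAnalysis` (for the nodes and weights on [0, 1]) is:

<< NumericalDifferentialEquationAnalysis`
QG[m_, j_] := Module[{xw = GaussianQuadratureWeights[m, 0, 1, 40]},
   Sum[xw[[nu, 2]] BernoulliB[2 j, xw[[nu, 1]]], {nu, 1, m}]];
{QG[1, 1], QG[2, 2], QG[3, 3]}
(* numerically -1/12, -1/180, -1/2800, the first entries of G^(1), G^(2), G^(3) below *)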

Case m = 1. Here τ 1 G = 1∕2 and w 1 G = 1, so that, according to (25),

$$\displaystyle{Q_{1}^{G}(B_{ 2j}) = B_{2j}(1/2) = (2^{1-2j} - 1)B_{ 2j},}$$

and (24) reduces to (14). Thus,

$$\displaystyle{G^{(1)} = \left \{- \frac{1} {12},\, \frac{7} {240},- \frac{31} {1344},\, \frac{127} {3840},- \frac{2555} {33792},\, \frac{1414477} {5591040},-\frac{57337} {49152},\, \frac{118518239} {16711680},\,\ldots \right \}.}$$

Case m = 2. Here we have

$$\displaystyle{\tau _{1}^{G} = \frac{1} {2}\left (1 - \frac{1} {\sqrt{3}}\right ),\ \ \tau _{2}^{G} = \frac{1} {2}\left (1 + \frac{1} {\sqrt{3}}\right )\quad \mbox{ and}\quad w_{1}^{G} = w_{ 2}^{G} = \frac{1} {2},}$$

so that \(Q_{2}^{G}(B_{2j}) = \frac{1} {2}\left (B_{2j}(\tau _{1}^{G}) + B_{ 2j}(\tau _{2}^{G})\right ) = B_{ 2j}(\tau _{1}^{G})\). In this case, the sequence of coefficients is

$$\displaystyle{G^{(2)} = \left \{- \frac{1} {180},\, \frac{1} {189},- \frac{17} {2160},\, \frac{97} {5346},- \frac{1291411} {21228480},\, \frac{16367} {58320},-\frac{243615707} {142767360},\,\ldots \right \}.}$$

Case m = 3. In this case

$$\displaystyle{\tau _{1}^{G} = \frac{1} {10}\left (5 -\sqrt{15}\right ),\quad \tau _{2}^{G} = \frac{1} {2},\quad \tau _{3}^{G} = \frac{1} {10}\left (5 + \sqrt{15}\right )}$$

and

$$\displaystyle{w_{1}^{G} = \frac{5} {18},\quad w_{2}^{G} = \frac{4} {9},\quad w_{3}^{G} = \frac{5} {18},}$$

so that

$$\displaystyle{Q_{3}^{G}(B_{ 2j}) = \frac{5} {9}B_{2j}(\tau _{1}^{G}) + \frac{4} {9}B_{2j}(\tau _{2}^{G})}$$

and

$$\displaystyle{G^{(3)} = \left \{- \frac{1} {2800},\, \frac{49} {72000},- \frac{8771} {5280000},\, \frac{4935557} {873600000},- \frac{15066667} {576000000},\, \frac{3463953717} {21760000000},\,\ldots \right \}.}$$

Cases m = 4, 5, 6. The corresponding sequences of coefficients are

$$\displaystyle\begin{array}{rcl} \!G^{(4)}\!& =& \!\left \{- \frac{1} {44100}, \frac{41} {565950},- \frac{3076} {11704875}, \frac{93553} {75631500},- \frac{453586781} {60000990000}, \frac{6885642443} {117354877500},\,\ldots \right \}\!, {}\\ \!G^{(5)}\!& =& \!\left \{\!- \frac{1} {698544}, \frac{205} {29719872},- \frac{100297} {2880541440}, \frac{76404959} {352578272256},- \frac{839025422533} {496513166929920},\,\ldots \right \}\!, {}\\ \!G^{(6)}\!& =& \!\left \{\!- \frac{1} {11099088}, \frac{43} {70436520},- \frac{86221} {21074606784}, \frac{147502043} {4534139665440},- \frac{1323863797} {4200045163776},\,\ldots \!\right \}\!. {}\\ \end{array}$$

The Euler–Maclaurin formula based on the composite Lobatto formula can be considered in a similar way. The corresponding Gauss-Lobatto quadrature formula

$$\displaystyle{ \int _{0}^{1}f(x)\,\mathrm{d}x =\sum _{ \nu =0}^{m+1}w_{\nu }^{L}f(\tau _{\nu }^{L}) + R_{ m}^{L}(f), }$$
(27)

with the end nodes τ 0 = τ 0 L = 0, τ m+1 = τ m+1 L = 1, has internal nodes τ ν  = τ ν L, ν = 1, …, m, which are zeros of the shifted (monic) Jacobi polynomial,

$$\displaystyle{\pi _{m}(x) ={\Bigl ({ 2m + 2 \atop m} \Bigr )}^{-1}P_{ m}^{(1,1)}(2x - 1),}$$

orthogonal on the interval (0, 1) with respect to the weight function x ↦ x(1 − x). The algebraic degree of precision of this formula is d = 2m + 1, i.e., R m L(f) = 0 for each \(f \in \mathcal{P}_{2m+1}\).

For constructing the Gauss-Lobatto formula

$$\displaystyle{ Q_{m}^{L}(f) =\sum _{ \nu =0}^{m+1}w_{\nu }^{L}f(\tau _{\nu }^{L}), }$$
(28)

we use the parameters of the corresponding Gaussian formula with respect to the weight function x ↦ x(1 − x), i.e.,

$$\displaystyle{\int _{0}^{1}g(x)x(1 - x)\,\mathrm{d}x =\sum _{ \nu =1}^{m}\widehat{w}_{\nu }^{G}g(\widehat{\tau }_{\nu }^{G}) +\widehat{ R}_{ m}^{G}(g).}$$

The nodes and weights of the Gauss-Lobatto quadrature formula (27) are (cf. [36, pp. 330–331])

$$\displaystyle{\tau _{0}^{L} = 0,\quad \tau _{\nu }^{L} =\widehat{\tau }_{ \nu }^{G}\quad (\nu = 1,\ldots,m),\quad \tau _{ m+1}^{L} = 1,}$$

and

$$\displaystyle{w_{0}^{L} = \frac{1} {2} -\sum _{\nu =1}^{m}\frac{\widehat{w}_{\nu }^{G}} {\widehat{\tau }_{\nu }^{G}},\ \ \ w_{\nu }^{L} = \frac{\widehat{w}_{\nu }^{G}} {\widehat{\tau }_{\nu }^{G}(1 -\widehat{\tau }_{\nu }^{G})}\ \ (\nu = 1,\ldots,m),\ \ \ w_{m+1}^{L} = \frac{1} {2} -\sum _{\nu =1}^{m} \frac{\widehat{w}_{\nu }^{G}} {1 -\widehat{\tau }_{\nu }^{G}},}$$

respectively. The corresponding composite rule is

$$\displaystyle\begin{array}{rcl} L_{m}^{(n)}f& =& \sum _{ k=0}^{n-1}Q_{ m}^{L}f(k + \cdot ) =\sum _{ \nu =0}^{m+1}w_{\nu }^{L}\sum _{ k=0}^{n-1}f(k +\tau _{ \nu }^{L}) \\ & =& (w_{0}^{L} + w_{ m+1}^{L})\sum _{ k=0}^{n}\!\!^{{\prime\prime}}f(k) +\sum _{ \nu =1}^{m}w_{\nu }^{L}\sum _{ k=0}^{n-1}f(k +\tau _{ \nu }^{L}).{}\end{array}$$
(29)

As in the Gauss–Legendre case, there exists a symmetry of nodes and weights, i.e.,

$$\displaystyle{\tau _{\nu }^{L} +\tau _{ m+1-\nu }^{L} = 1,\ \ w_{\nu }^{L} = w_{ m+1-\nu }^{L} > 0\quad \nu = 0,1,\ldots,m + 1,}$$

so that the Gauss-Lobatto quadrature sum

$$\displaystyle{Q_{m}^{L}(B_{ j}) =\sum _{ \nu =0}^{m+1}w_{\nu }^{L}B_{ j}(\tau _{\nu }^{L}) = 0}$$

for each odd j.

By similar arguments as before, we can state and prove the following result.

Theorem 2.

For \(n,m,r \in \mathbb{N}\) (m ≤ r) and f ∈ C 2r [0,n] we have

$$\displaystyle{ L_{m}^{(n)}f - I_{ n}f =\sum _{ j=m+1}^{r}\frac{Q_{m}^{L}(B_{ 2j})} {(2j)!} \left [f^{(2j-1)}(n) - f^{(2j-1)}(0)\right ] + E_{ n,m,r}^{L}(f), }$$
(30)

where L m (n) f is given by (29) , and Q m L B 2j denotes the basic Gauss-Lobatto quadrature sum (28) applied to the Bernoulli polynomial x ↦ B 2j (x), i.e.,

$$\displaystyle{Q_{m}^{L}(B_{ 2j}) =\sum _{ \nu =0}^{m+1}w_{\nu }^{L}B_{ 2j}(\tau _{\nu }^{L}) = -R_{ m}^{L}(B_{ 2j}),}$$

where R m L (f) is the remainder term in (27).

If f ∈ C 2r+2 [0,n], then there exists ξ ∈ (0,n), such that the error term in (30) can be expressed in the form

$$\displaystyle{E_{n,m,r}^{L}(f) = n\,\frac{Q_{m}^{L}(B_{ 2r+2})} {(2r + 2)!} f^{(2r+2)}(\xi ).}$$

In the sequel we give the sequence of coefficients L (m) which appear in the sum on the right-hand side in (30), i.e.,

$$\displaystyle{L^{(m)} ={\bigl \{ Q_{ m}^{L}(B_{ 2j})\bigr \}}_{j=m+1}^{\infty } = \left \{Q_{ m}^{L}(B_{ 2m+2}),Q_{m}^{L}(B_{ 2m+4}),Q_{m}^{L}(B_{ 2m+6}),\ldots \right \},}$$

obtained by the Package OrthogonalPolynomials, for some values of m.

Case m = 0. This is a case of the standard Euler–Maclaurin formula (1), for which τ 0 L = 0 and τ 1 L = 1, with w 0 L = w 1 L = 1∕2. The sequence of coefficients is

$$\displaystyle{L^{(0)} = \left \{\frac{1} {6},- \frac{1} {30}, \frac{1} {42},- \frac{1} {30}, \frac{5} {66},- \frac{691} {2730}, \frac{7} {6},-\frac{3617} {510}, \frac{43867} {798},-\frac{174611} {330}, \frac{854513} {138},\,\ldots \right \},}$$

which is, in fact, the sequence of Bernoulli numbers {B 2j } for j ≥ 1.

Case m = 1. In this case τ 0 L = 0, τ 1 L = 1∕2, and τ 2 L = 1, with the corresponding weights w 0 L = 1∕6, w 1 L = 2∕3, and w 2 L = 1∕6, which is, in fact, the Simpson formula (15). The sequence of coefficients is

$$\displaystyle{L^{(1)} = \left \{ \frac{1} {120},- \frac{5} {672}, \frac{7} {640},- \frac{425} {16896}, \frac{235631} {2795520},-\frac{3185} {8192}, \frac{19752437} {8355840},-\frac{958274615} {52297728},\,\ldots \right \}.}$$

Case m = 2. Here we have

$$\displaystyle{\tau _{0}^{L} = 0,\quad \tau _{ 1}^{L} = \frac{1} {10}(5 -\sqrt{5}),\quad \tau _{2}^{L} = \frac{1} {10}(5 + \sqrt{5}),\quad \tau _{3}^{L} = 1}$$

and w 0 L = w 3 L = 1∕12, w 1 L = w 2 L = 5∕12, and the sequence of coefficients is

$$\displaystyle{L^{(2)} = \left \{ \frac{1} {2100},- \frac{1} {1125}, \frac{89} {41250},- \frac{25003} {3412500}, \frac{3179} {93750},- \frac{2466467} {11953125}, \frac{997365619} {623437500},\,\ldots \right \}.}$$

Case m = 3. Here the nodes and the weight coefficients are

$$\displaystyle{\tau _{0}^{L} = 0,\quad \tau _{ 1}^{L} = \frac{1} {14}(7 -\sqrt{31}),\quad \tau _{2}^{L} = \frac{1} {2},\quad \tau _{3}^{L} = \frac{1} {14}(7 + \sqrt{31}),\quad \tau _{4}^{L} = 1}$$

and

$$\displaystyle{w_{0}^{L} = \frac{1} {20},\quad w_{1}^{L} = \frac{49} {180},\quad w_{2}^{L} = \frac{16} {45},\quad w_{3}^{L} = \frac{49} {180},\quad w_{4}^{L} = \frac{1} {20},}$$

respectively, and the sequence of coefficients is

$$\displaystyle{L^{(3)} = \left \{ \frac{1} {35280},- \frac{65} {724416}, \frac{38903} {119857920},- \frac{236449} {154893312}, \frac{1146165227} {122882027520},\,\ldots \right \}.}$$

Cases m = 4, 5. The corresponding sequences of coefficients are

$$\displaystyle\begin{array}{rcl} L^{(4)}& =& \left \{ \frac{1} {582120},- \frac{17} {2063880}, \frac{173} {4167450},- \frac{43909} {170031960}, \frac{160705183} {79815002400},- \frac{76876739} {3960744480},\,\ldots \right \}, {}\\ L^{(5)}& =& \left \{ \frac{1} {9513504},- \frac{49} {68999040}, \frac{5453} {1146917376},- \frac{671463061} {17766424811520}, \frac{1291291631} {3526568534016},\,\ldots \right \}. {}\\ \end{array}$$

Remark 8.

Recently Dubeau [16] has shown that an Euler–Maclaurin like formula can be associated with any interpolatory quadrature rule.

4 Abel–Plana Summation Formula and Some Modifications

Another important summation formula is the so-called Abel–Plana formula, although it is not as well known as the Euler–Maclaurin formula. In 1820 Giovanni (Antonio Amedeo) Plana [49] obtained the summation formula

$$\displaystyle{ \sum _{k=0}^{+\infty }f(k) -\int _{ 0}^{+\infty }f(x)\,\mathrm{d}x = \frac{1} {2}\,f(0) + \mathrm{i}\int _{0}^{+\infty }\frac{f(\mathrm{i}y) - f(-\mathrm{i}y)} {\mathrm{e}^{2\pi y} - 1} \,\mathrm{d}y, }$$
(31)

which holds for analytic functions f in \(\varOmega ={\bigl \{ z \in \mathbb{C}\,:\,\mathrm{ Re\,}z \geq 0\bigr \}}\) which satisfy the conditions:

  •  1 \(\lim \limits _{\vert y\vert \rightarrow +\infty }\mathrm{e}^{-\vert 2\pi y\vert }\vert f(x \pm \mathrm{i}y)\vert = 0\) uniformly in x on every finite interval;

  •  2 \(\int _{0}^{+\infty }\vert f(x + \mathrm{i}y) - f(x -\mathrm{i}y)\vert \mathrm{e}^{-\vert 2\pi y\vert }\,\mathrm{d}y\) exists for every x ≥ 0 and tends to zero as x → +∞.

This formula was also proved in 1823 by Niels Henrik Abel [1]. In addition, Abel also proved an interesting “alternating series version”, under the same conditions,

$$\displaystyle{ \sum _{k=0}^{+\infty }(-1)^{k}f(k) = \frac{1} {2}\,f(0) + \mathrm{i}\int _{0}^{+\infty }\frac{f(\mathrm{i}y) - f(-\mathrm{i}y)} {2\sinh \pi y} \,\mathrm{d}y. }$$
(32)

Alternatively, formula (32) can be obtained directly from (31): subtracting (31) from the same formula written for the function z ↦ 2f(2z) gives (32).
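A quick numerical check of (31), with our own test function f(z) = 1∕(z + 1)^2, which is analytic for Re z ≥ 0 and satisfies both conditions:

f[z_] := 1/(z + 1)^2;
lhs = Pi^2/6 - 1;   (* Sum over k >= 0 of f(k) equals zeta(2); Integrate[f[x], {x, 0, Infinity}] = 1 *)
rhs = f[0]/2 + NIntegrate[I (f[I y] - f[-I y])/(Exp[2 Pi y] - 1), {y, 0, Infinity}];
Chop[N[lhs] - rhs]  (* of the order of the numerical integration error *)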

For the finite sum \(S_{n,m}f =\sum \limits _{ k=m}^{n}\!(-1)^{k}f(k)\), the Abel summation formula (32) becomes

$$\displaystyle\begin{array}{rcl} S_{n,m}f& =& \frac{1} {2}{\bigl [(-1)^{m}f(m) + (-1)^{n}f(n + 1)\bigr ]} \\ & & \qquad \qquad -\int _{-\infty }^{+\infty }{\bigl [(-1)^{m}\phi _{ m}(y) + (-1)^{n}\phi _{ n+1}(y)\bigr ]}w^{A}(y)\,\mathrm{d}y,{}\end{array}$$
(33)

where the Abel weight on \(\mathbb{R}\) and the function ϕ m (y) are given by

$$\displaystyle{ w^{A}(x) = \frac{x} {2\sinh \pi x}\quad \mbox{ and}\quad \phi _{m}(y) = \frac{f(m +\mathrm{ i}y) - f(m -\mathrm{ i}y)} {2\mathrm{i}y}. }$$
(34)

The moments for the Abel weight can be expressed in terms of Bernoulli numbers as

$$\displaystyle{ \mu _{k} = \left \{\begin{array}{ll} 0, &k\ \mbox{ odd}, \\ {\bigl (2^{k+2} - 1\bigr )}{(-1)^{k/2}B_{ k+2} \over k + 2},&k\ \mbox{ even}. \end{array} \right. }$$
(35)

A general Abel–Plana formula can be obtained by a contour integration in the complex plane. Let \(m,n \in \mathbb{N}\), m < n, and C(ɛ) be a closed rectangular contour with vertices at m ±ib, n ±ib, b > 0 (see Fig. 2), and with semicircular indentations of radius ɛ round m and n. Let f be an analytic function in the strip \(\varOmega _{m,n} ={\bigl \{ z \in \mathbb{C}\,:\, m \leq \mathrm{ Re\,}z \leq n\bigr \}}\) and suppose that for every m ≤ x ≤ n,

$$\displaystyle{\lim \limits _{\vert y\vert \rightarrow +\infty }\mathrm{e}^{-\vert 2\pi y\vert }\vert f(x \pm \mathrm{i}y)\vert = 0\quad \mbox{ uniformly in}\ x,}$$

and that

$$\displaystyle{\int _{0}^{+\infty }\vert f(x + \mathrm{i}y) - f(x -\mathrm{i}y)\vert \mathrm{e}^{-\vert 2\pi y\vert }\,\mathrm{d}y}$$

exists.

Fig. 2 Rectangular contour C(ɛ)

The integration

$$\displaystyle{\int _{C(\varepsilon )} \frac{f(z)} {\mathrm{e}^{-\mathrm{i}2\pi z} - 1}\,\mathrm{d}z,}$$

with ɛ → 0 and b → +∞, leads to the Plana formula in the following form (cf. [42])

$$\displaystyle{ T_{m,n}f -\int _{m}^{n}f(x)\,\mathrm{d}x =\int _{ -\infty }^{+\infty }{\bigl (\phi _{ n}(y) -\phi _{m}(y)\bigr )}w^{P}(y)\,\mathrm{d}y, }$$
(36)

where

$$\displaystyle{ \phi _{m}(y) = \frac{f(m + \mathrm{i}y) - f(m -\mathrm{i}y)} {2\mathrm{i}y} \quad \mbox{ and}\quad w^{P}(y) = \frac{\vert y\vert } {\mathrm{e}^{\vert 2\pi y\vert }- 1}. }$$
(37)

Practically, the Plana formula (36) gives the error of the composite trapezoidal formula (like the Euler–Maclaurin formula). As we can see, formula (36) is similar to the Euler–Maclaurin formula, with the difference that the sum of terms

$$\displaystyle{ \frac{B_{2j}} {(2j)!}\left (f^{(2j-1)}(n) - f^{(2j-1)}(m)\right )}$$

is replaced by an integral. Therefore, in applications this integral must be calculated by some quadrature rule. It is natural to construct a Gaussian formula with respect to the Plana weight function x ↦ w P(x) on \(\mathbb{R}\) (see the next section for such a construction).

In order to find the moments of this weight function, we note first that if k is odd, the moments are zero, i.e.,

$$\displaystyle{\mu _{k}(w^{P}) =\int _{ \mathbb{R}}x^{k}w^{P}(x)\,\mathrm{d}x =\int _{ \mathbb{R}}x^{k} \frac{\vert x\vert } {\mathrm{e}^{\vert 2\pi x\vert }- 1}\,\mathrm{d}x = 0.}$$

For even k, we have

$$\displaystyle{\mu _{k}(w^{P}) = 2\int _{ 0}^{+\infty } \frac{x^{k+1}} {\mathrm{e}^{2\pi x} - 1}\,\mathrm{d}x = \frac{2} {(2\pi )^{k+2}}\int _{0}^{+\infty } \frac{t^{k+1}} {\mathrm{e}^{t} - 1}\,\mathrm{d}t,}$$

which can be exactly expressed in terms of the Riemann zeta function ζ(s),

$$\displaystyle{\mu _{k}(w^{P}) = \frac{2(k + 1)!\zeta (k + 2)} {(2\pi )^{k+2}} = (-1)^{k/2}\frac{B_{k+2}} {k + 2},}$$

because the number k + 2 is even. Thus, in terms of Bernoulli numbers, the moments are

$$\displaystyle{ \mu _{k}(w^{P}) = \left \{\begin{array}{ll} 0, &k\mbox{ is odd}, \\ (-1)^{k/2}{B_{k+2} \over k + 2},\quad &k\mbox{ is even}. \end{array} \right. }$$
(38)
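For instance, for k = 2 formula (38) gives μ 2(w P) = −B 4∕4 = 1∕120, which is easily confirmed numerically:

2 NIntegrate[x^3/(Exp[2 Pi x] - 1), {x, 0, Infinity}]  (* numerical value of mu_2(w^P) *)
N[-BernoulliB[4]/4]                                    (* = 1/120 = 0.00833... *)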

Remark 9.

By the Taylor expansion for ϕ m (y) (and ϕ n (y)) on the right-hand side in (36),

$$\displaystyle{\phi _{m}(y) = \frac{f(m + \mathrm{i}y) - f(m -\mathrm{i}y)} {2\mathrm{i}y} =\sum _{ j=1}^{+\infty }\frac{(-1)^{j-1}y^{2j-2}} {(2j - 1)!} f^{(2j-1)}(m),}$$

and using the moments (38), the Plana formula (36) reduces to the Euler–Maclaurin formula,

$$\displaystyle\begin{array}{rcl} T_{m,n}f -\int _{m}^{n}f(x)\,\mathrm{d}x& =& \sum _{ j=1}^{+\infty }\frac{(-1)^{j-1}} {(2j - 1)!}\mu _{2j-2}(w^{P})\left (f^{(2j-1)}(n) - f^{(2j-1)}(m)\right ) {}\\ & =& \sum _{j=1}^{+\infty } \frac{B_{2j}} {(2j)!}\left (f^{(2j-1)}(n) - f^{(2j-1)}(m)\right ), {}\\ \end{array}$$

because μ 2j−2(w P) = (−1)^{j−1} B 2j ∕(2j). Note that T m, n f denotes the composite trapezoidal sum

$$\displaystyle{ T_{m,n}f:=\sum _{ k=m}^{n}\!\!^{{\prime\prime}}f(k) = \frac{1} {2}f(m) +\sum _{ k=m+1}^{n-1}f(k) + \frac{1} {2}f(n). }$$
(39)

For more details see Rahman and Schmeisser [53, 54], Dahlquist [11–13], as well as a recent paper by Butzer, Ferreira, Schmeisser, and Stens [8].

A similar summation formula is the so-called midpoint summation formula. It can be obtained by combining two Plana formulas for the functions z ↦ f(z − 1∕2) and z ↦ f((z + m − 1)∕2). Namely,

$$\displaystyle{T_{m,2n-m+2}f{\Bigl (\frac{z + m - 1} {2} \Bigr )} - T_{m,n+1}f{\Bigl (z -\frac{1} {2}\Bigr )} =\sum _{ k=m}^{n}f(k),}$$

i.e.,

$$\displaystyle{ \sum _{k=m}^{n}f(k) -\int _{ m-1/2}^{n+1/2}f(x)\,\mathrm{d}x =\int _{ -\infty }^{+\infty }{\bigl [\phi _{ m-1/2}(y) -\phi _{n+1/2}(y)\bigr ]}w^{M}(y)\,\mathrm{d}y, }$$
(40)

where the midpoint weight function is given by

$$\displaystyle{ w^{M}(x) = w^{P}(x) - w^{P}(2x) = \frac{\vert x\vert } {\mathrm{e}^{\vert 2\pi x\vert } + 1}, }$$
(41)

and ϕ m−1∕2 and ϕ n+1∕2 are defined in (37), taking m: = m − 1∕2 and m: = n + 1∕2, respectively. The moments for the midpoint weight function can be expressed also in terms of Bernoulli numbers as

$$\displaystyle{ \mu _{k}(w^{M}) =\int _{ \mathbb{R}}x^{k} \frac{\vert x\vert } {\mathrm{e}^{\vert 2\pi x\vert } + 1}\,\mathrm{d}x = \left \{\begin{array}{ll} 0, &k\mbox{ is odd}, \\ (-1)^{k/2}(1 - 2^{-(k+1)}){B_{k+2} \over k + 2},\quad \quad &k\mbox{ is even}. \end{array} \right. }$$
(42)

An interesting weight function and the corresponding summation formula can be obtained from the Plana formula, if we integrate the right-hand side in (36) by parts (cf. [13]). Introducing the so-called Binet weight function y ↦ w B(y) and the function y ↦ ψ m (y) by

$$\displaystyle{ w^{B}(y) = -\frac{1} {2\pi }\log {\bigl (1 -\mathrm{e}^{-2\pi \left \vert y\right \vert }\bigr )}\quad \mbox{ and}\quad \psi _{ m}(y) = \frac{f'(m + \mathrm{i}y) + f'(m -\mathrm{i}y)} {2}, }$$
(43)

respectively, we see that dw B(y)∕ dy = −w P(y)∕y and

$$\displaystyle\begin{array}{rcl} \frac{\,\mathrm{d}} {\,\mathrm{d}y}\!{\Bigl \{{\bigl [\phi _{n}(y) -\phi _{m}(y)\bigr ]}y\Bigr \}}& \!=\!& \frac{1} {2\mathrm{i}}\! \frac{\,\mathrm{d}} {\,\mathrm{d}y}{\Bigl \{\!{\bigl [f(n +\mathrm{ i}y) - f(n -\mathrm{ i}y)\bigr ]}\! -\!{\bigl [ f(m +\mathrm{ i}y) - f(m -\mathrm{ i}y)\bigr ]}\!\Bigr \}} {}\\ & =& \psi _{n}(y) -\psi _{m}(y), {}\\ \end{array}$$

so that

$$\displaystyle\begin{array}{rcl} \int _{-\infty }^{+\infty }{\bigl [\phi _{ n}(y) -\phi _{m}(y)\bigr ]}w^{P}(y)\,\mathrm{d}y& =& \int _{ -\infty }^{+\infty }{\bigl [\phi _{ n}(y) -\phi _{m}(y)\bigr ]}(-y)\,\mathrm{d}w^{B}(y) {}\\ & =& \int _{-\infty }^{+\infty }{\bigl [\psi _{ n}(y) -\psi _{m}(y)\bigr ]}w^{B}(y)\,\mathrm{d}y, {}\\ \end{array}$$

because w B(y) = O(e^{−2π | y | }) as | y | → +∞. Thus, the Binet summation formula becomes

$$\displaystyle{ T_{m,n}f -\int _{m}^{n}f(x)\,\mathrm{d}x =\int _{ -\infty }^{+\infty }{\bigl [\psi _{ n}(y) -\psi _{m}(y)\bigr ]}w^{B}(y)\,\mathrm{d}y. }$$
(44)

Such a formula can be useful when f′(z) is easier to compute than f(z).

The moments for the Binet weight can be obtained from those for w P. Since

$$\displaystyle{\mu _{k}(w^{P}) =\int _{ \mathbb{R}}y^{k}w^{P}(y)\,\mathrm{d}y =\int _{ \mathbb{R}}y^{k}(-y)\,\mathrm{d}w^{B}(y) = (k + 1)\mu _{ k}(w^{B}),}$$

according to (38),

$$\displaystyle{ \mu _{k}(w^{B}) = \left \{\begin{array}{ll} 0, &k\mbox{ is odd}, \\ (-1)^{k/2}{ B_{k+2} \over (k + 1)(k + 2)},\quad \quad &k\mbox{ is even}. \end{array} \right. }$$
(45)

There are also several other summation formulas. For example, the Lindelöf formula [32] for alternating series is

$$\displaystyle{ \sum _{k=m}^{+\infty }(-1)^{k}f(k) = (-1)^{m}\int _{ -\infty }^{+\infty }f(m - 1/2 + \mathrm{i}y)\frac{\,\mathrm{d}y} {2\cosh \pi y}, }$$
(46)

where the Lindelöf weight function is given by

$$\displaystyle{ w^{L}(x) = \frac{1} {2\cosh \pi x} = \frac{1} {\mathrm{e}^{\pi x} + \mathrm{e}^{-\pi x}}. }$$
(47)

Here, the moments

$$\displaystyle{\mu _{k}(w^{L}) =\int _{ \mathbb{R}} \frac{x^{k}} {\mathrm{e}^{\pi x} + \mathrm{e}^{-\pi x}}\,\mathrm{d}x}$$

can be expressed in terms of the generalized Riemann zeta function z ↦ ζ(z, a), defined by

$$\displaystyle{\zeta (z,a) =\sum _{ \nu =0}^{+\infty }(\nu +a)^{-z}.}$$

Namely,

$$\displaystyle{ \mu _{k}(w^{L}) = \left \{\begin{array}{ll} 0, &k\ \mbox{ odd}, \\ 2(4\pi )^{-k-1}k!\left [\zeta \left (k + 1, \frac{1} {4}\right ) -\zeta \left (k + 1, \frac{3} {4}\right )\right ],\quad \quad &k\ \mbox{ even}. \end{array} \right. }$$
(48)

5 Construction of Orthogonal Polynomials and Gaussian Quadratures for Weights of Abel–Plana Type

The weight functions w ( ∈ { w P, w M, w B, w A, w L}) which appear in the summation formulas considered in the previous section are even functions on \(\mathbb{R}\). In this section we consider the construction of (monic) orthogonal polynomials π k ( ≡ π k (w; ⋅ )) and the corresponding Gaussian formulas

$$\displaystyle{ \int _{\mathbb{R}}f(x)w(x)\,\mathrm{d}x =\sum _{ \nu =1}^{n}A_{\nu }f(x_{\nu }) + R_{ n}(w;f), }$$
(49)

with respect to the inner product \((p,q) =\int _{\mathbb{R}}p(x)q(x)w(x)\,\mathrm{d}x\) \((p,q \in \mathcal{P})\). We note that R n (w; f) ≡ 0 for each \(f \in \mathcal{P}_{2n-1}\).

Such orthogonal polynomials \(\{\pi _{k}\}_{k\in \mathbb{N}_{0}}\) and Gaussian quadratures (49) exist uniquely, because all the moments for these weights, μ k ( ≡ μ k (w)), k = 0, 1, …, exist, are finite, and μ 0 > 0.

Because of the property (xp, q) = (p, xq), these (monic) orthogonal polynomials π k satisfy the fundamental three–term recurrence relation

$$\displaystyle{ \pi _{k+1}(x) = (x -\alpha _{k})\pi _{k}(x) -\beta _{k}\pi _{k-1}(x),\quad k = 0,1,\ldots, }$$
(50)

with π 0(x) = 1 and π −1(x) = 0, where \(\{\beta _{k}\}_{k\in \mathbb{N}_{0}}\) \((=\{\beta _{k}(w)\}_{k\in \mathbb{N}_{0}})\) is a sequence of recursion coefficients which depend on the weight w. The coefficient β 0 may be arbitrary, but it is conveniently defined by \(\beta _{0} =\mu _{0} =\int _{\mathbb{R}}w(x)\,\mathrm{d}x\). Note that the coefficients α k in (50) are equal to zero, because the weight function w is an even function! Therefore, the nodes in (49) are symmetrically distributed with respect to the origin, and the weights for symmetrical nodes are equal. For odd n one node is at zero.

A characterization of the Gaussian quadrature (49) can be done via an eigenvalue problem for the symmetric tridiagonal Jacobi matrix (cf. [36, p. 326]),

$$\displaystyle{J_{n} = J_{n}(w) = \left [\begin{array}{ccccc} \alpha _{0} & \sqrt{\beta _{1}} & & & \mathbf{O} \\ \sqrt{\beta _{1}} & \alpha _{1} & \sqrt{\beta _{2}} \\ & \sqrt{\beta _{2}} & \alpha _{2} & \ddots \\ & & \ddots & \ddots & \sqrt{\beta _{n-1}} \\ \mathbf{O} & & &\sqrt{\beta _{n-1}} & \alpha _{n-1} \end{array} \right ],}$$

constructed with the coefficients from the three-term recurrence relation (50) (in our case, α k  = 0, k = 0, 1, …, n − 1).

The nodes x ν are the eigenvalues of J n and the weights A ν are given by A ν  = β 0 v ν,1 ², ν = 1, …, n, where β 0 is the moment \(\mu _{0} =\int _{\mathbb{R}}w(x)\,\mathrm{d}x\), and v ν,1 is the first component of the normalized eigenvector v ν  = [v ν,1 ⋯ v ν,n ]T (with v ν T v ν  = 1) corresponding to the eigenvalue x ν ,

$$\displaystyle{J_{n}\mathbf{v}_{\nu } = x_{\nu }\mathbf{v}_{\nu },\quad \nu = 1,\ldots,n.}$$

An efficient procedure for constructing the Gaussian quadrature rules was given by Golub and Welsch [27], by simplifying the well-known QR algorithm, so that only the first components of the eigenvectors are computed.


Unfortunately, the recursion coefficients are known explicitly only for some narrow classes of orthogonal polynomials, as e.g. for the classical orthogonal polynomials (Jacobi, the generalized Laguerre, and Hermite polynomials). However, for a large class of the so-called strongly non-classical polynomials these coefficients can be constructed numerically, but procedures are very sensitive with respect to small perturbations in the data. Basic procedures for generating these coefficients were developed by Walter Gautschi in the eighties of the last century (cf. [23, 24, 36, 41]).

However, because of progress in symbolic computation and variable-precision arithmetic, recursion coefficients can today be generated directly by using the original Chebyshev method of moments (cf. [36, pp. 159–166]), either in symbolic form or numerically in sufficiently high precision. In this way, instability problems can be eliminated. Corresponding symbolic/variable-precision software for orthogonal polynomials and quadratures of Gaussian and similar types is available. In this regard, the Mathematica package OrthogonalPolynomials (see [9] and [43]) is downloadable from the web site http://www.mi.sanu.ac.rs/~gvm/. Also, there is Gautschi’s software in Matlab (packages OPQ and SOPQ). Thus, all that is required is a procedure for the symbolic calculation of moments or their calculation in variable-precision arithmetic.

In our case we calculate the first 2N moments in symbolic form (the list mom), using the corresponding formulas (for example, (38) in the case of the Plana weight w P), so that we can construct the Gaussian formula (49) for each n ≤ N. Now, in order to get the first N recurrence coefficients {al,be} in symbolic form, we apply the implemented function aChebyshevAlgorithm from the package OrthogonalPolynomials, which constructs these coefficients using the Chebyshev algorithm, with the option Algorithm->Symbolic. Thus, it can be implemented in the Mathematica package OrthogonalPolynomials in a very simple way as

<<orthogonalPolynomials`
mom=Table[<expression for moments>,{k,0,199}];
{al,be}=aChebyshevAlgorithm[mom,Algorithm->Symbolic]
pq[n_]:=aGaussianNodesWeights[n,al,be, WorkingPrecision->65,Precision -> 60]
xA = Table[pq[n],{n,5,40,5}];

where we put N = 100 and WorkingPrecision->65 in order to obtain the quadrature parameters (nodes and weights) very precisely, with Precision->60. These parameters are calculated for n = 5(5)40, so that xA[[k]][[1]] and xA[[k]][[2]] give the lists of nodes and weights for the five-point formula when k=1, for the ten-point formula when k=2, etc. In this way, we can calculate the n-point Gaussian quadrature formula for each n ≤ N = 100.
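For instance, for the Plana weight function w P the placeholder <expression for moments> can be filled in directly from (38):

mom = Table[If[OddQ[k], 0, (-1)^(k/2) BernoulliB[k + 2]/(k + 2)], {k, 0, 199}];
mom[[1 ;; 5]]   (* {1/12, 0, 1/120, 0, 1/252} *)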

All computations were performed in Mathematica, Ver. 10.3.0, on a MacBook Pro (Retina, Mid 2012) under OS X 10.11.2. The calculations are very fast. The running time is evaluated by the function Timing in Mathematica and it includes only CPU time spent in the Mathematica kernel. This may give different results on different occasions within a session, because of the use of internal system caches. In order to generate worst-case timing results independent of previous computations, we also used the command ClearSystemCache[]; in that case the running times for the Plana weight function w P were 4.2 ms (calculation of moments), 0.75 s (calculation of recursion coefficients), and 8 s (calculation of quadrature parameters for n = 5(5)40).

In the sequel we mention results for different weight functions, whose graphs are presented in Fig. 3.

Fig. 3 Graphs of the weight functions: (left) w A (solid line) and w L (dashed line); (right) w P (solid line), w B (dashed line) and w M (dotted line)

1. Abel and Lindelöf Weight Functions w A and w L These weight functions are given by (34) and (47), and their moments by (35) and (48), respectively. It is interesting that their corresponding coefficients in the three-term recurrence relation (50) are known explicitly (see [36, p. 159])

$$\displaystyle{\beta _{0}^{A} =\mu _{ 0}^{A} = \frac{1} {4},\quad \beta _{k}^{A} = \frac{k(k + 1)} {4},\quad k = 1,2,\ldots \,}$$

and

$$\displaystyle{\beta _{0}^{L} =\mu _{ 0}^{L} = \frac{1} {2},\quad \beta _{k}^{L} = \frac{k^{2}} {4},\quad k = 1,2,\ldots.}$$

Thus, for these two weight functions the recursion coefficients are known in explicit form, so that we can go directly to the construction of the quadrature parameters.
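For example, with the explicit coefficients β k A the n-point rule (49) for the Abel weight can be obtained directly from the Jacobi matrix J n(w A); a minimal self-contained sketch (not relying on the package) is:

n = 5;
beta = Join[{1/4}, Table[k (k + 1)/4, {k, 1, n - 1}]];       (* beta_0^A, ..., beta_{n-1}^A *)
J = Table[If[Abs[i - j] == 1, Sqrt[beta[[Max[i, j]]]], 0], {i, n}, {j, n}];
{vals, vecs} = Eigensystem[N[J, 40]];
nodes = vals;                                                (* x_nu = eigenvalues of J_n *)
weights = beta[[1]] vecs[[All, 1]]^2/Map[#.# &, vecs];       (* A_nu = beta_0 v_{nu,1}^2 *)
Total[weights]                                               (* should equal beta_0 = 1/4 *)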

2. Plana Weight Function w P This weight function is given by (37), and the corresponding moments by (38). Using the Package OrthogonalPolynomials we obtain the sequence of recurrence coefficients {β k P} k ≥ 0 in the rational form:

$$\displaystyle\begin{array}{rcl} \beta _{0}^{P}& =& \frac{1} {12},\ \ \beta _{1}^{P} = \frac{1} {10},\ \ \beta _{2}^{P} = \frac{79} {210},\ \ \beta _{3}^{P} = \frac{1205} {1659},\ \ \beta _{4}^{P} = \frac{262445} {209429},\ \ \beta _{5}^{P} = \frac{33461119209} {18089284070}, {}\\ \beta _{6}^{P}& =& \frac{361969913862291} {137627660760070},\ \ \beta _{7}^{P} = \frac{85170013927511392430} {24523312685049374477}, {}\\ \beta _{8}^{P}& =& \frac{1064327215185988443814288995130} {236155262756390921151239121153}, {}\\ \beta _{9}^{P}& =& \frac{286789982254764757195675003870137955697117} {51246435664921031688705695412342990647850}, {}\\ \beta _{10}^{P}& =& \frac{15227625889136643989610717434803027240375634452808081047} {2212147521291103911193549528920437912200375980011300650}, {}\\ \beta _{11}^{P}& =& \frac{587943441754746283972138649821948554273878447469233852697401814148410885} {71529318090286333175985287358122471724664434392542372273400541405857921},\qquad \qquad {}\\ \end{array}$$

etc.

As we can see, the fractions become more and more complicated, so that already β 11 P has the "form of complexity" {72∕71}, i.e., it has 72 decimal digits in the numerator and 71 digits in the denominator. Further terms of this sequence have the "forms of complexity" {88∕87}, {106∕105}, {129∕128}, {152∕151}, …, {13451∕13448}.

Thus, the last term β 99 P has more than 13 thousand digits in both its numerator and denominator. Its value, rounded to 60 decimal digits, is

$$\displaystyle{\beta _{99}^{P} = 618.668116294139071216871819412846078447729830182674784697227.}$$

3. Midpoint Weight Function w M This weight function is given by (41), and the corresponding moments by (42). As in the previous case, we obtain the sequence of recurrence coefficients {β k M} k ≥ 0 in the rational form:

$$\displaystyle\begin{array}{rcl} \beta _{0}^{M}& =& \frac{1} {24},\ \ \beta _{1}^{M} = \frac{7} {40},\ \ \beta _{2}^{M} = \frac{2071} {5880},\ \ \beta _{3}^{M} = \frac{999245} {1217748},\ \ \beta _{4}^{M} = \frac{21959166635} {18211040276}, {}\\ \beta _{5}^{M}& =& \frac{108481778600414331} {55169934195679160},\ \ \beta _{6}^{M} = \frac{2083852396915648173441543} {813782894744588335008520}, {}\\ \beta _{7}^{M}& =& \frac{25698543837390957571411809266308135} {7116536885169433586426285918882662}, {}\\ \beta _{8}^{M}& =& \frac{202221739836050724659312728605015618097349555485} {45788344599633183797631374444694817538967629598}, {}\\ \beta _{9}^{M}& =& \frac{14077564493254853375144075652878384268409784777236869234539068357} {2446087170499983327141705915330961521888001335934900402777402200},\qquad {}\\ \end{array}$$

etc. In this case, the last term β 99 M has a slightly more complicated "form of complexity", {16401∕16398}, than the one in the previous case. Its value (rounded to 60 decimal digits) is

$$\displaystyle{\beta _{99}^{M} = 619.562819405146668677971154899553589896235540274133472854031.}$$

4. Binet Weight Function w B This weight function is given by (43), and its moments by (45). Our package OrthogonalPolynomials gives the sequence of recurrence coefficients {β k B} k ≥ 0 in the rational form:

$$\displaystyle\begin{array}{rcl} \beta _{0}^{B}& =& \frac{1} {12},\quad \beta _{1}^{B} = \frac{1} {30},\quad \beta _{2}^{B} = \frac{53} {210},\quad \beta _{3}^{B} = \frac{195} {371},\quad \beta _{4}^{B} = \frac{22999} {22737},\quad \beta _{5}^{B} = \frac{29944523} {19733142}, {}\\ \beta _{6}^{B}& =& \frac{109535241009} {48264275462},\quad \beta _{7}^{B} = \frac{29404527905795295658} {9769214287853155785}, {}\\ \beta _{8}^{B}& =& \frac{455377030420113432210116914702} {113084128923675014537885725485}, {}\\ \beta _{9}^{B}& =& \frac{26370812569397719001931992945645578779849} {5271244267917980801966553649147604697542}, {}\\ \beta _{10}^{B}& =& \frac{152537496709054809881638897472985990866753853122697839} {24274291553105128438297398108902195365373879212227726}, {}\\ \beta _{11}^{B}& =& \frac{100043420063777451042472529806266909090824649341814868347109676190691} {13346384670164266280033479022693768890138348905413621178450736182873}, {}\\ \end{array}$$

etc. Numerical values of the coefficients β k B for k = 12, …, 39, rounded to 60 decimal digits, are presented in Table 1.

Table 1 Numerical values of the coefficients β k B, k = 12, …, 39

For this case we also give the quadrature parameters x ν B and A ν B, ν = 1, …, n, for n = 10 (rounded to 30 digits in order to save space). Numbers in parentheses indicate decimal exponents (Table 2).

Table 2 Gaussian quadrature parameters x ν B and A ν B, ν = 1, …, n, for the ten-point rule