1 Introduction

Let \((I_{\tau })_{\tau \in \mathbb N}\) denote a sequence of quadrature  rules

$$\begin{aligned} I_{\tau }[f] \ = \ \sum \limits _{\ell = 0}^{n_{\tau }} w_{\tau ,\ell } f(x_{\tau ,\ell }), \ \forall \ f \ \in \ C^{0}[a,b] \end{aligned}$$
(1)

\((x_{\tau , \ell } \ \in \ [a,b] \ \forall \ \tau , \ell\)), where \(n_{\tau }+1\) is the number of points needed to evaluate \(I_{\tau }\). Fejér [8], Stekloff [, p. 350], and Pólya [15, 16] showed that, if the elements of \((I_{\tau })_{\tau \ \in \ \mathbb N}\) are of interpolatory type and have positive weights, \(w_{\tau ,\ell } \ > \ 0 \ \forall \ \tau , \ell ,\) then \(\left( I_{\tau }[f]\right) _{\tau \in \mathbb N}\) converges to \({\int \limits _{a}^{b}} f(x) dx\) whenever f is Riemann integrable. By interpolatory type we mean

$$\begin{aligned} I_{\tau }[f] \ = \ {\int \limits _{a}^{b}} p[f](x) dx \ \ \forall \ \ f, \end{aligned}$$
(2)

where p[f] is the unique polynomial of degree at most \(n_{\tau }\) that interpolates f at the points \(x_{\tau ,0}, x_{\tau ,1}, \dots , x_{\tau ,n_{\tau }}\). However, Fejér highlighted that, for the interpolatory type quadrature of Gauss-Legendre (which is based on the zeros \(x_{\tau ,0}, x_{\tau ,1}, \dots , x_{\tau ,n_{\tau }}\) of the Legendre polynomial of degree \(n_{\tau }+1\)), this convergence property follows directly from the separation theorem of Chebyshev, Markov and Stieltjes [16, Section 3.41], which states that

$$\begin{aligned} x_{\tau ,s} - a< \sum \limits _{\ell =0}^{s} w_{\tau ,\ell } < x_{\tau ,s+1} - a, s = 0, 1, \dots , n_{\tau }-1. \end{aligned}$$
(3)

In view of (3), we can rewrite the right-hand side of (1) as

$$\begin{aligned} \sum \limits _{\ell = 0}^{n_{\tau }} f(t_{\ell }^{*})[t_{\ell +1} - t_{\ell }], t_{\ell } \le t_{\ell }^{*} \le t_{\ell +1}, \end{aligned}$$

with \(t_{0} = a\) and

$$\begin{aligned} t_{\ell }^{*} = x_{\tau ,\ell }, t_{\ell +1} = a + \sum \limits _{i = 0}^{\ell } w_{\tau ,i}, \ell = 0, 1, \dots , n_{\tau }, \end{aligned}$$

that is, the Gauss-Legendre rule applied to f is a Riemann sum of f.
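
A quick numerical illustration (ours, not from the original sources): the partial sums of the Gauss-Legendre weights generate the partition \(t_{0}, t_{1}, \dots\) above, and the separation property (3) is equivalent to each node lying strictly inside its cell. The helper name gauss_legendre_partition below is hypothetical.

```python
# Illustrative sketch (not from the paper): build the partition t_0, ..., t_{n+1}
# from the partial sums of the Gauss-Legendre weights and check that each node
# x_l lies strictly inside its cell [t_l, t_{l+1}], as implied by (3).
import numpy as np

def gauss_legendre_partition(n, a=-1.0, b=1.0):
    x, w = np.polynomial.legendre.leggauss(n + 1)   # nodes/weights on [-1, 1]
    x = a + (b - a) * (x + 1.0) / 2.0               # map nodes to [a, b]
    w = w * (b - a) / 2.0                           # rescale weights
    t = np.concatenate(([a], a + np.cumsum(w)))     # t_0 = a, t_{l+1} = a + sum_{i<=l} w_i
    return x, w, t

x, w, t = gauss_legendre_partition(10)
assert np.all(t[:-1] < x) and np.all(x < t[1:])     # strict separation
print(t[-1] - 1.0)                                  # ~0: the weights sum to b - a
```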

A natural question is whether (3) holds for other quadrature rules with positive weights. These include the interpolatory type quadratures of Fejér of the first and second kinds (based on Chebyshev points of the first and second kinds, respectively) and the Clenshaw-Curtis quadrature (based on extrema of Chebyshev polynomials of the first kind). However, although the weights for these rules are known in explicit form [8, 10, 13], their expressions are apparently not of much help in getting sharp bounds for the sums of consecutive quadrature weights. For instance, the weights for the Fejér quadrature \(([a,b] = [-1,1])\) with Chebyshev points of the second kind

$$\begin{aligned} x_{\tau ,\ell } = \cos (\phi _{\ell }), \phi _{\ell } = [\ell +1] \frac{\pi }{\tau +2}, \ell = 0, 1, \dots , \tau , \end{aligned}$$

\(n_{\tau } = \tau\), are given by [8, p. 301]

$$\begin{aligned} w_{\tau ,\ell }= & {} \frac{2}{\tau +2} \left[ 1 - \left( 1 - \frac{1}{3}\right) \cos (2\phi _{\ell }) - \left( \frac{1}{3} - \frac{1}{5}\right) \cos (4\phi _{\ell }) - \ldots \right. \\- & {} \left. \left( \frac{1}{\eta -2} - \frac{1}{\eta }\right) \cos ([\eta -1]\phi _{\ell }) - \frac{1}{\eta }\cos ([\eta +1]\phi _{\ell }) \right] , \quad \ell = 0, 1, \dots , \tau , \end{aligned}$$

where \(\eta\) is the largest odd integer that does not exceed \(\tau +1\).
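
As a sanity check, the cosine series above is easy to evaluate numerically; the sketch below (the helper name fejer2_weights and the truncation logic are our reading, not from [8]) confirms that the resulting weights are positive, sum to 2, and integrate low-degree monomials exactly.

```python
# Hedged sketch: evaluate the Fejer weights for Chebyshev points of the second
# kind from the cosine series above and check positivity, the weight sum, and
# exactness on low-degree monomials on [-1, 1].
import numpy as np

def fejer2_weights(tau):
    phi = (np.arange(tau + 1) + 1) * np.pi / (tau + 2)   # angles phi_l, l = 0..tau
    eta = tau + 1 if (tau + 1) % 2 == 1 else tau          # largest odd integer <= tau + 1
    w = np.ones(tau + 1)
    for m in range(1, (eta - 1) // 2 + 1):                # terms cos(2 m phi_l), 2m <= eta - 1
        w -= (1.0 / (2 * m - 1) - 1.0 / (2 * m + 1)) * np.cos(2 * m * phi)
    w -= np.cos((eta + 1) * phi) / eta                    # final term of the series
    return np.cos(phi), 2.0 * w / (tau + 2)

x, w = fejer2_weights(12)
assert np.all(w > 0) and np.isclose(w.sum(), 2.0)
for k in range(13):                                       # interpolatory: exact up to degree tau
    assert np.isclose(w @ x**k, (1.0 - (-1.0) ** (k + 1)) / (k + 1))
```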

Another class of quadrature rules that has positive weights and is guaranteed to converge for all Riemann integrable functions is the class of classical Romberg integrals [1]. They are built with the aim of accelerating the convergence of the composite trapezoidal rule [3, 5]. Denote by

$$\begin{aligned} T[f](a,b,q) \ = \ \frac{b-a}{q}\left( \frac{1}{2}f(a) + \sum \limits _{i = 1}^{q-1} f\left( a+i \frac{b-a}{q} \right) + \frac{1}{2}f(b) \right) \end{aligned}$$
(4)

the composite trapezoidal rule of f with q subdivisions of [a, b] and, for fixed positive integers m and k, let

$$\begin{aligned} n = 2^{k} m, \quad x_{i} = a+ih, \quad i \ = \ 0,1, \dots ,n, \quad h \ = \ \frac{b-a}{n}, \end{aligned}$$

be an evenly spaced grid in [a, b], with \(2^{k} m + 1\) points. The Romberg integrals are defined recursively by

$$\begin{aligned} {T_{i}^{j}} \ = \ \frac{4^{j} T_{i}^{j-1} - T_{i-1}^{j-1}}{4^{j}-1}, \end{aligned}$$
(5)

\(j = 1, 2, \dots , k, \quad i = j,j+1,\dots , k\), where

$$\begin{aligned} {T_{i}^{0}} := T[f](a,b,2^{i}m), \quad i = 0, 1, \dots , k. \end{aligned}$$
(6)

The value \({T_{i}^{j}}\) is the Romberg integral of order \(2j+2\) of f with \(2^{i} m\) subdivisions of [a, b]. We refer to it as the classical Romberg integral because it can be defined in a more general setting [4, 5, 6]. \({T_{i}^{j}}\) is exact for polynomials of degree less than or equal to \(2j+1\) and the integration error

$$\begin{aligned} {\int \limits _{a}^{b}} f(x) dx \ - \ {T_{i}^{j}} \end{aligned}$$

for functions f of class \(C^{2j+2}[a,b]\) is \(O\left( [2^{i}m]^{-(2j+2)}\right)\). For more information about the convergence of Romberg integrals see [1, 2, 5] and the references therein.
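
For concreteness, the definitions (4)-(6) translate directly into a short program; the sketch below (our helper names, with m = 1 for simplicity) builds the triangular Romberg table and prints the error of its highest-order entry.

```python
# Minimal sketch of the classical Romberg table T_i^j of (4)-(6), with m = 1.
# The helper names are ours; this is an illustration, not a production routine.
import math

def trapezoid(f, a, b, q):
    # Composite trapezoidal rule (4) with q subdivisions of [a, b].
    h = (b - a) / q
    return h * (0.5 * f(a) + sum(f(a + r * h) for r in range(1, q)) + 0.5 * f(b))

def romberg(f, a, b, k, m=1):
    # T[i][0] = trapezoidal rule with 2^i * m subdivisions, as in (6);
    # T[i][j] follows the extrapolation recursion (5) for j <= i <= k.
    T = [[trapezoid(f, a, b, (2 ** i) * m)] for i in range(k + 1)]
    for j in range(1, k + 1):
        for i in range(j, k + 1):
            T[i].append((4 ** j * T[i][j - 1] - T[i - 1][j - 1]) / (4 ** j - 1))
    return T

table = romberg(math.exp, 0.0, 1.0, k=4)
print(abs(table[4][4] - (math.e - 1.0)))   # error of T_4^4; tiny for this smooth integrand
```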

By (5) and (6), \({T_{i}^{j}}\) is a linear combination of the values of f on the grid \(x_{0}, x_{2^{k-i}}, x_{(2^{k-i}) 2}, x_{(2^{k-i})3}, \dots ,\) \(x_{(2^{k-i})2^{i} m}\). Bauer, Rutishauser and Stiefel [1] showed that the coefficients \(\alpha _{j,i,m, \ell }\) in the functional representation

$$\begin{aligned} f \longmapsto {T_{i}^{j}}[f](a,b,2^{i} m) \ = \ \frac{b-a}{2^{i} m} \sum \limits _{\ell = 0}^{2^{i} m} \alpha _{j,i,m, \ell }f(x_{2^{k-i}\ell }) \end{aligned}$$
(7)

are all positive for \(m = 1\) and satisfy

$$\begin{aligned} \frac{1}{3}\bigg (\prod \limits _{\eta = 1}^{\infty } \frac{4^{\eta }}{4^{\eta }-1}\bigg ) \ \le \ \alpha _{j,i,1, \ell } \ \le \ \bigg (\prod \limits _{\eta = 1}^{\infty } \frac{4^{\eta }}{4^{\eta }-1}\bigg ) \ \approx \ 1.452 \ \ \forall \ j, i, \ell . \end{aligned}$$

They proceed by showing that the following expression for \(\alpha _{j,i,1, \ell }\) is an alternating (finite) series with terms that decrease in magnitude:

$$\begin{aligned} \alpha _{j,i,1, \ell } \ \ = \ \sum \limits _{\tau = 0}^{s} \ 2^{\tau -\sigma } \Bigg (\prod \limits _{{\begin{array}{ccl} \eta = 0 \\ \eta \ne j-\tau \end{array}} }^{j} \ \frac{4^{i-\tau }}{4^{i-\tau } - 4^{\eta +i-j}} \Bigg ), \ \ \ \sigma = \left\{ \begin{array}{cl} 1, &{} \ell = 0 \ \text {or} \ \ell = 2^{i}, \\ 0,&{} \text {otherwise}, \end{array} \right. \end{aligned}$$
(8)

where \(2^{s}\) is the largest power of 2 that divides \(\ell\) (\(s \le j)\).
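
The closed form (8) is straightforward to evaluate numerically. The sketch below (the helper name alpha and the convention that s is capped at j, with s = j at the two endpoints, are our reading of the formula) reproduces the Simpson coefficients 1/3, 4/3, 1/3 and checks positivity, the upper bound \(\prod _{\eta \ge 1} 4^{\eta }/(4^{\eta }-1) \approx 1.4514\), and the fact that the coefficients sum to \(2^{i}m\).

```python
# Hedged sketch of the closed form (8) for m = 1; the checks are numerical
# spot-checks, not a proof of the bounds cited above.
from math import prod

def alpha(j, i, ell):
    n = 2 ** i
    sigma = 1 if ell in (0, n) else 0
    s = j                                   # endpoints: take s = j
    if 0 < ell < n:
        s = 0
        while ell % (2 ** (s + 1)) == 0 and s + 1 <= j:
            s += 1                          # 2^s = largest power of 2 dividing ell, capped at j
    total = 0.0
    for tau in range(s + 1):
        p = prod(4 ** (i - tau) / (4 ** (i - tau) - 4 ** (eta + i - j))
                 for eta in range(j + 1) if eta != j - tau)
        total += 2 ** (tau - sigma) * p
    return total

UPPER = prod(4 ** eta / (4 ** eta - 1) for eta in range(1, 40))   # ~1.4514
for j in range(1, 6):
    for i in range(j, j + 3):
        coeff = [alpha(j, i, ell) for ell in range(2 ** i + 1)]
        assert all(0 < c <= UPPER for c in coeff)                 # positivity and upper bound
        assert abs(sum(coeff) - 2 ** i) < 1e-9                    # coefficients sum to 2^i m
print(alpha(1, 1, 0), alpha(1, 1, 1), alpha(1, 1, 2))             # 1/3, 4/3, 1/3 (Simpson)
```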

The analogue of (3) for \({T_{i}^{j}}\) is

$$\begin{aligned} x_{2^{k-i}r} - a< \frac{b-a}{2^{i}m}\bigg (\sum \limits _{\ell = 0}^{r} \alpha _{j,i,m, \ell }\bigg ) < x_{2^{k-i}(r+1)} - a, \end{aligned}$$
(9)

\(r = 0, 1, \dots , 2^{i}m-1.\) However, as in the cases of Fejér and Clenshaw-Curtis quadratures, (8) does not seem to be very useful for bounding large sums of the Romberg coefficients. In this note we prove a stronger form of (9) without explicit manipulation of (8).

For each \(j, i, m, r\), with \(j \ge 0\) and \(0 \ \le \ r \ \le 2^{i-1} m\), let \(\theta _{j,i,m,r}\) be defined by

\(\theta _{j,i,m,r} \ = \ \left( \sum \limits _{\ell = 0}^{r} \alpha _{j,i,m, \ell }\right) - r\)

and let

$$\begin{aligned} \theta _{j} \ = \ \min \limits _{ i, m,r} \ \theta _{j,i,m,r}, \ \ \ \Theta _{j} \ = \ \max \limits _{ i, m, r} \ \theta _{j,i,m,r}. \end{aligned}$$
(10)

We have

Theorem 1

$$\begin{aligned} 0.0555 \ \le \ \theta _{j} \ \le \ \Theta _{j} \ \le 0.8155 \ \ \forall j \ge 0. \end{aligned}$$
(11)

Corollary 1

For \(j \ge 0\) and \(0 \ \le \ r \ \le 2^{i-1} m\),

$$\begin{aligned} x_{2^{k-i}r} - a= & {} r\frac{b-a}{2^{i}m}< (0.0555 + r)\frac{b-a}{2^{i}m} \le \frac{b-a}{2^{i}m}\left( \sum \limits _{\ell = 0}^{r} \alpha _{j,i,m, \ell }\right) \nonumber \\\le & {} (0.8155+r)\frac{b-a}{2^{i}m} < (1+r)\frac{b-a}{2^{i}m} = x_{2^{k-i}(r+1)} - a. \end{aligned}$$
(12)

Remark 1

We do not need to consider \(\theta _{j,i,m,r}\) with \(r > 2^{i-1}m\) in (10) and (12) because the coefficients \(\alpha _{j,i,m, \ell }\) are symmetric with respect to \(\ell\).

Remark 2

The upper and lower bounds of Theorem 1 can be improved for large values of j (see Table 1 in Section 3).

The following result follows immediately from (12).

Corollary 2

\({T_{i}^{j}}[f](a,b,2^{i} m)\) is always a Riemann sum of f.

In the next two sections we study the coefficients of the Romberg functionals and prove Theorem 1. In Section 4 we briefly discuss some problems related to other quadrature rules based on equally spaced abscissae.

2 On the coefficients of the Romberg functionals

The proof of Theorem 1 is based on the simple observation that the Romberg integrals are composite rules, that is, \({T_{i}^{j}}[f](a,b,2^{i} m) \ =\)

$$\begin{aligned} T_{i-1}^{j}[f]\bigg (a,\frac{a+b}{2},2^{i-1} m\bigg ) \ + \ T_{i-1}^{j}[f]\bigg (\frac{a+b}{2}, b,2^{i-1} m\bigg ), i \ \ge \ j \ \ge \ 1. \end{aligned}$$
(13)
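
Before the inductive argument, a quick numerical spot-check of (13) may be reassuring; the following self-contained sketch (our helper T, with m = 1) verifies the composite property for one choice of f, i and j.

```python
# Quick numerical spot-check (ours, m = 1) of the composite property (13):
# T_i^j on [a, b] equals the sum of the two T_{i-1}^j on the half-intervals.
import math

def T(f, a, b, i, j):
    def trap(q):                                   # trapezoidal rule (4) with q subdivisions
        h = (b - a) / q
        return h * (0.5 * f(a) + sum(f(a + r * h) for r in range(1, q)) + 0.5 * f(b))
    col = [trap(2 ** r) for r in range(i + 1)]     # T_r^0, r = 0..i, as in (6)
    for jj in range(1, j + 1):                     # recursion (5), column by column
        col = [(4 ** jj * col[r] - col[r - 1]) / (4 ** jj - 1) for r in range(1, len(col))]
    return col[-1]                                 # T_i^j

a, b, mid, i, j = 0.0, 1.0, 0.5, 3, 2
assert abs(T(math.sin, a, b, i, j) - (T(math.sin, a, mid, i - 1, j) + T(math.sin, mid, b, i - 1, j))) < 1e-12
```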

This fact can be easily checked by induction on j, using (5) and the fact that the trapezoidal rule (4) is also composite. Thus, we have

$$\begin{aligned}&(4^{j}-1) {T_{i}^{j}}[f](a,b,2^{i} m) \ \overset{(5)}{=} \\&4^{j} T_{i}^{j-1}[f](a,b, 2^{i} m) - T_{i-1}^{j-1}[f](a,b, 2^{i-1} m)\\\overset{(13)\;}{=}&4^{j} T_{i-1}^{j-1}[f]\left( a,(a+b)/2, 2^{i-1} m\right) \ + \ 4^{j} T_{i-1}^{j-1}[f]\left( (a+b)/2, b, 2^{i-1} m\right) \\- & {} T_{i-1}^{j-1}[f](a,b, 2^{i-1} m)\\\overset{(7)\,}{=}&4^{j} \frac{(b-a)/2}{2^{i-1} m} \left( \sum \limits _{\ell = 0}^{2^{i-1} m} \alpha _{j-1,{i-1},m, \ell }f(x_{2^{k-i}\ell }) \ + \ \sum \limits _{\ell = 0}^{2^{i-1} m} \alpha _{j-1,{i-1},m, \ell }f(x_{2^{k-i}(2^{i-1} m + \ell )}) \right) \\- & {} \frac{b-a}{2^{i-1} m} \sum \limits _{\ell = 0}^{2^{i-1} m} \alpha _{j-1,{i-1},m, \ell }f(x_{2^{k-i+1}\ell }) \\= & {} \frac{b-a}{2^{i} m} \left( \sum \limits _{\ell = 0}^{2^{i-1} m} 4^{j}\alpha _{j-1,{i-1},m, \ell }f(x_{2^{k-i}\ell }) \ + \ \sum \limits _{\ell = 0}^{2^{i-1} m} 4^{j}\alpha _{j-1,{i-1},m, \ell }f(x_{2^{k-i}(2^{i-1} m + \ell )}) \right. \\- & {} \left. \sum \limits _{\ell = 0}^{2^{i-1} m} 2\alpha _{j-1,{i-1},m, \ell }f(x_{2^{k-i+1}\ell }) \right) . \end{aligned}$$

This relation tells us how to build the vector of coefficients

$$\begin{aligned} \mathbf {u} = (u_{0}, u_{1}, \dots , u_{2^{i} m}) \ := \ (\alpha _{j,{i},m, 0}, \ \alpha _{j,{i},m, 1},\ \dots , \alpha _{j,{i},m, 2^{i} m}) \end{aligned}$$

of size \(2^{i} m + 1\) in terms of the vector

$$\begin{aligned} \mathbf {v}= & {} (v_{0}, v_{1}, \dots , v_{2^{i-1} m}) \\:= & {} (\alpha _{j-1,{i-1},m, 0},\ \alpha _{j-1,{i-1},m, 1}, \ \dots , \ \alpha _{j-1,{i-1},m, 2^{i-1} m}) \end{aligned}$$

of size \(2^{i-1} m + 1\):

$$\begin{aligned} \mathbf {u}= & {} \frac{4^{j}}{4^{j}-1}(v_{0}, v_{1}, v_{2}, v_{3}, \ \dots , v_{2^{i-1} m}, \ 0,\ 0,\ \, 0,\ \, 0, \dots ,\ \ \ 0) \nonumber \\+ & {} \frac{4^{j}}{4^{j}-1}( \ 0,\ 0, \ \, 0,\ \, 0, \ \dots , \ \ \ v_{0} \ \ \ , v_{1}, v_{2}, v_{3}, v_{4} \dots , v_{2^{i-1} m} ) \nonumber \\- & {} \frac{2}{4^{j}-1}(v_{0}, \ 0, \ v_{1}, \ 0, \ v_{2}, \ 0, \dots , 0,\ v_{2^{i-1} m} ). \end{aligned}$$
(14)
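
Relation (14) also gives a convenient way to generate the coefficients numerically; the following sketch (our helper names, with m = 1) iterates (14) starting from the trapezoidal weights and checks the bounds of Theorem 1 for a few values of j and i.

```python
# Sketch (ours, m = 1) of the coefficient recursion (14): build the vector u of
# alpha_{j,i,1,l} from the vector v of alpha_{j-1,i-1,1,l}, then evaluate the
# partial-sum quantities theta_{j,i,1,r} of (10) and compare with Theorem 1.
import numpy as np

def next_coeffs(v, j):
    n = len(v) - 1                                  # v has 2^{i-1}m + 1 entries
    u = np.zeros(2 * n + 1)
    u[: n + 1] += 4 ** j / (4 ** j - 1) * v         # first half of (14)
    u[n:] += 4 ** j / (4 ** j - 1) * v              # second half (overlaps at the midpoint)
    u[::2] -= 2 / (4 ** j - 1) * v                  # subtracted coarse-grid contribution
    return u

def coeffs(j, i):
    v = np.ones(2 ** (i - j) + 1)                   # trapezoidal weights alpha_{0,i-j,1,.}
    v[0] = v[-1] = 0.5
    for jj in range(1, j + 1):
        v = next_coeffs(v, jj)
    return v

for j in range(1, 7):
    for i in range(j, j + 3):
        a = coeffs(j, i)
        r = np.arange(2 ** (i - 1) + 1)
        theta = np.cumsum(a)[: r.size] - r          # theta_{j,i,1,r}, 0 <= r <= 2^{i-1}
        assert 0.0555 <= theta.min() and theta.max() <= 0.8155   # Theorem 1
print(coeffs(1, 1))                                 # [1/3, 4/3, 1/3]: Simpson's rule
```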

Using (14), we can give another proof of the positivity of the Romberg coefficients (see Corollary 3 below). Let

$$\gamma _{j} \ = \ \min \limits _{i, m, \ell } \ \alpha _{j,{i},m,\ell }\quad\mathrm{and}\quad\Gamma _{j} \ = \ \max \limits _{i, m, \ell } \ \alpha _{j,{i},m,\ell }.$$

An immediate consequence of (14) is that

$$\begin{aligned} \Gamma _{j} \ = \ \prod \limits _{\eta = 1}^{j} \frac{4^{\eta }}{4^{\eta }-1} \ = \ \alpha _{j,{i},m,\ell } \end{aligned}$$
(15)

for every \(j \ge 1\) and for every odd index \(\ell\) (recall that the weights for \(j = 0\) are \(1/2, 1, \dots , 1, 1/2\)).

Lemma 1

For each \(j \ge 1\) and \(q \ge 0\), we have \(\Gamma _{j+q} \ = \ \prod \limits _{\eta = 1}^{q} \frac{4^{j+\eta }}{4^{j+\eta }-1} \Gamma _{j} \ \text {and}\)

$$\begin{aligned} \gamma _{j+q} \ \ge \ \left( \prod \limits _{\eta = 1}^{q} \frac{4^{j+\eta }}{4^{j+\eta }-1}\right) \left[ \gamma _{j} -2 \left( \sum \limits _{\eta = 1}^{q} \frac{1}{4^{j+\eta }}\right) \Gamma _{j} \right] . \end{aligned}$$
(16)

Proof

The proof is by induction on q. Lemma 1 is true for \(q = 0\). Assume now that (16) holds for q and let us prove it for \(q+1\). We have

$$\begin{aligned} \alpha _{j+q+1,{i},m,\ell }&\overset{(14)}{\ge }\frac{4^{j+q+1}}{4^{j+q+1}-1} \gamma _{j+q} \ - \ \frac{2}{4^{j+q+1}-1} \Gamma _{j+q} \\&\overset{(16)}{\ge }\frac{4^{j+q+1}}{4^{j+q+1}-1} \left( \prod \limits _{\eta = 1}^{q} \frac{4^{j+\eta }}{4^{j+\eta }-1}\right) \left[ \gamma _{j} -2\left( \sum \limits _{\eta = 1}^{q} \frac{1}{4^{j+\eta }}\right) \Gamma _{j} \right] \\- & {} \frac{2}{4^{j+q+1}-1} \left( \prod \limits _{\eta = 1}^{q} \frac{4^{j+\eta }}{4^{\eta +j}-1} \right) \ \Gamma _{j} \\= & {} \left( \prod \limits _{\eta = 1}^{q+1} \frac{4^{j+\eta }}{4^{j+\eta }-1}\right) \left[ \gamma _{j} -2 \left( \sum \limits _{\eta = 1}^{q+1} \frac{1}{4^{j+\eta }}\right) \ \Gamma _{j} \right] . \end{aligned}$$

Corollary 3

$$\begin{aligned} \gamma _{j} \ge 0 \ \forall \ j \ge 0. \end{aligned}$$
(17)

Proof

The weights \(\alpha _{j,i,m,\ell }\) for \(j = 0\) and \(j = 1\) are those of the trapezoidal and Simpson’s rules, respectively, that is,

$$\begin{aligned} 1/2, \ 1,\ 1,\ \dots , 1,\ 1/2 \ \ \ \text {and} \ \ \ 1/3,\ 4/3,\ 2/3,\ 4/3,\ 2/3, \dots , 4/3,\ 1/3. \end{aligned}$$
(18)

In addition, note that, for \(j = 1\) and \(q \ge 1\),

$$\begin{aligned} \gamma _{j} -2 \bigg (\sum \limits _{\eta = 1}^{q} \frac{1}{4^{j+\eta }}\bigg ) \Gamma _{j} \ \ge \ \frac{1}{3} - \frac{2}{16}\frac{4}{3}\frac{4}{3} \ = \ \frac{1}{9}. \end{aligned}$$

This and (16) complete the proof.

3 Proof of Theorem 1

The nice thing about (14) is that the same strategy used in the proof of Lemma 1 to bound the coefficients of the Romberg integrals can be used to bound the sums of these coefficients. In order to estimate \(\theta _{j}\) and \(\Theta _{j}\) in Theorem 1, let us also define

$$\begin{aligned} \theta _{j}' \ = \ \min \limits _{ { \begin{array}{c} i, m,r \\ r \ \text {even} \end{array} }} \theta _{j,{i},m,r}, \ \ \text {and} \ \ \Theta _{j}' \ = \ \max \limits _{ { \begin{array}{c} i, m,r \\ r \ \text {even} \end{array} }} \theta _{j,{i},m,r}. \end{aligned}$$
(19)

Note that, by (15),

$$\begin{aligned} \theta _{j,{i},m,2\nu +1} \ = \ \theta _{j,{i},m,2\nu } + \alpha _{j,{i},m,2\nu +1} - 1 \ \ge \ \theta _{j,{i},m,2\nu } \ \ge \ \theta _{j}', \end{aligned}$$

that is

$$\begin{aligned} \theta _{j} = \theta _{j}' \ \forall \ j \ \ge \ 0. \end{aligned}$$
(20)

We have

Lemma 2

For each \(j \ge 1\) and \(q \ge 0\),

$$\begin{aligned} \theta _{1+q} \ \ge \ \frac{1}{18} \ \ \text {and} \ \ \Theta _{1+q} \ \le \ 0.8155, \end{aligned}$$
(21)
$$\begin{aligned} \Theta '_{j+q}\le & {} \prod \limits _{\eta = 1}^{q} \left( \frac{4^{j+\eta }}{4^{j+\eta }-1}\right) \Theta _{j}', \nonumber \\ \Theta _{j+q}\le & {} \left( \prod \limits _{\eta = 1}^{q} \frac{4^{j+\eta }}{4^{j+\eta }-1}\right) \left[ \Theta _{j}' \ + \ \prod \limits _{\eta = 1}^{j} \frac{4^{\eta }}{4^{\eta }-1} \right] \ - \ 1 \end{aligned}$$
(22)

and \(\theta _{j+q}' \ \ge\)

$$\begin{aligned} \left( \prod \limits _{\eta = 1}^{q}\frac{4^{j+\eta }}{4^{j+\eta }-1}\right) \left[ \theta _{j}' -2 \left( \sum \limits _{\eta = 1}^{q} \frac{1}{4^{j+\eta }}\right) \left( \Theta _{j}' \ + \ \prod \limits _{\eta = 1}^{j} \frac{4^{\eta }}{4^{\eta }-1} \right) \right] \ + \ \sum \limits _{\eta = 1}^{q} \frac{2}{4^{j+\eta }-1}. \end{aligned}$$
(23)

Proof

The inequalities in (21) follow from (22) and (23). Because \(\theta _{1}' \ = \ \Theta _{1}' \ = \ \frac{1}{3}\) (see (18)), we get

$$\begin{aligned} \theta _{1}' -2 \bigg (\sum \limits _{\eta = 1}^{q} \frac{1}{4^{1+\eta }}\bigg )\bigg ( \Theta _{1}' \ + \ \prod \limits _{\eta = 1}^{1} \frac{4^{\eta }}{4^{\eta }-1} \bigg )\ge & {} \theta _{1}' -2 \bigg (\sum \limits _{\eta = 1}^{\infty } \frac{1}{4^{1+\eta }}\bigg )\bigg ( \Theta _{1}' \ + \ \prod \limits _{\eta = 1}^{1} \frac{4^{\eta }}{4^{\eta }-1} \bigg ) \\= & {} \bigg [ \frac{1}{3} - \bigg (\frac{2}{3}\frac{1}{4}\bigg )\bigg ( \frac{1}{3} \ + \ \frac{4}{3} \bigg ) \bigg ] \ = \ \frac{1}{18}. \\ \end{aligned}$$

Using this in (23) for \(j = 1\), we obtain

$$\begin{aligned} \theta _{1+q} \ \overset{(20)}{=} \ \theta _{1+q}' \ \ge \ \left( \prod \limits _{\eta = 1}^{q} \frac{4^{1+\eta }}{4^{1+\eta }-1}\right) \frac{1}{18} \ + \ \sum \limits _{\eta = 1}^{q} \frac{2}{4^{1+\eta }-1} \ \ge \ \frac{1}{18}. \end{aligned}$$

In the same fashion, for \(j = 1\), (22) gives

$$\begin{aligned} \Theta _{1+q} \ \le \ \left( \prod \limits _{\eta = 2}^{\infty } \frac{4^{\eta }}{4^{\eta }-1}\right) \left[ \frac{1}{3} \ + \ \frac{4}{3}\right] \ - \ 1 \ < \ 0.8155. \end{aligned}$$

The second inequality of (22) follows directly by the previous one and the fact that, for \(\ell\) odd,

$$\begin{aligned} \alpha _{j+q,{i},m, \ell } \ \overset{(15)}{=} \ \Gamma _{j+q} \ = \ \prod \limits _{\eta = 1}^{j+q} \frac{4^{\eta }}{4^{\eta }-1} \ \ \text {and} \ \ \Theta _{j+q} \ \ \overset{(15)}{=} \ \Theta _{j+q}' + \Gamma _{j+q} - 1. \end{aligned}$$

The proof of the other inequalities is by induction on q. Lemma 2 is true for \(q = 0\). Assume now that (22) and (23) hold for q and let us prove them for \(q+1\).

By (14), for every even r with \(r < 2^{i-1}m\), we have

$$\begin{aligned}&\theta _{j+q+1,i,m,r} \ = \ \left( \sum \limits _{\ell = 0}^{r} \alpha _{j+q+1,{i},m, \ell }\right) - r \nonumber \\= & {} \bigg (\frac{4^{j+q+1}}{4^{j+q+1}-1} \, \sum \limits _{\ell = 0}^{r} \alpha _{j+q,{i},m, \ell } - \frac{2}{4^{j+q+1}-1} \sum \limits _{\ell = 0}^{ r/2} \alpha _{j+q,{i},m, \ell } \bigg ) - r \nonumber \\= & {} \left( \frac{4^{j+q+1}}{4^{j+q+1}-1} \left[ \theta _{j+q,i,m,r} + r \right] - \frac{2}{4^{j+q+1}-1} \left[ \theta _{j+q,i,m, r/2 } + r/2 \right] \right) - r \nonumber \\= & {} \frac{4^{j+q+1}}{4^{j+q+1}-1} \, \theta _{j+q,i,m,r} \ - \ \frac{2}{4^{j+q+1}-1} \, \theta _{j+q,i,m, r/2 }. \end{aligned}$$
(24)

Because (22) and (23) hold for q, (21) does for q as well. Hence, for \(j = 1\), we can use

$$\begin{aligned} \theta _{j+q,i,m, r/2 } \ \ge \ \theta _{j+q} \ \ge \ \frac{1}{18} \ \ge \ 0 \end{aligned}$$
(25)

to obtain

$$\begin{aligned} \Theta _{j+q+1}' \ \le \ \frac{4^{j+q+1}}{4^{j+q+1}-1} \, \Theta _{j+q}'. \end{aligned}$$

This also holds for \(j \ne 1\) (see Remark 3). Hence, by the induction hypothesis,

$$\begin{aligned} \Theta _{j+q+1}' \ \le \ \prod \limits _{\eta = 1}^{q+1} \left( \frac{4^{j+\eta }}{4^{j+\eta }-1}\right) \Theta _{j}'. \end{aligned}$$
(26)

This proves the first inequality of (22) for \(q+1\). In addition, by (24) and the induction hypothesis, we obtain

$$\begin{aligned}&\theta _{j+q+1,{i},m,r}' \ \ge \ \frac{4^{j+q+1}}{4^{j+q+1}-1} \theta _{j+q}' \ - \ \frac{2}{4^{j+q+1}-1} \Theta _{j+q} \\\ge & {} \frac{4^{j+q+1}}{4^{j+q+1}-1} \left( \prod \limits _{\eta = 1}^{q} \frac{4^{j+\eta }}{4^{j+\eta }-1}\right) \left[ \theta _{j}' -2 \left( \sum \limits _{\eta = 1}^{q} \frac{1}{4^{j+\eta }}\right) \left( \Theta _{j}' \ + \ \prod \limits _{\eta = 1}^{j} \frac{4^{\eta }}{4^{\eta }-1} \right) \right] \\+ & {} \ \ \sum \limits _{\eta = 1}^{q} \frac{2}{4^{j+\eta }-1} \ - \ \frac{2}{4^{j+q+1}-1} \left( \prod \limits _{\eta = 1}^{q} \frac{4^{j+\eta }}{4^{j+\eta }-1}\right) \left[ \Theta _{j}'\ + \ \prod \limits _{\eta = 1}^{j} \frac{4^{\eta }}{4^{\eta }-1} \right] \ + \ \frac{2}{4^{j+q+1}-1} \\= & {} \frac{4^{j+q+1}}{4^{j+q+1}-1} \left( \prod \limits _{\eta = 1}^{q} \frac{4^{j+\eta }}{4^{j+\eta }-1}\right) \left[ \theta _{j}' -2 \left( \sum \limits _{\eta = 1}^{q+1} \frac{1}{4^{j+\eta }}\right) \left( \Theta _{j}' \ + \ \prod \limits _{\eta = 1}^{j} \frac{4^{\eta }}{4^{\eta }-1} \right) \right] \\+ & {} \sum \limits _{\eta = 1}^{q+1} \frac{2}{4^{j+\eta }-1}. \end{aligned}$$

Therefore,

$$\begin{aligned} \theta _{j+q+1}'\ge & {} \left( \prod \limits _{\eta = 1}^{q+1} \frac{4^{j+\eta }}{4^{j+\eta }-1}\right) \left[ \theta _{j}' -2 \left( \sum \limits _{\eta = 1}^{q+1} \frac{1}{4^{j+\eta }}\right) \left( \Theta _{j}' \ + \ \prod \limits _{\eta = 1}^{j} \frac{4^{\eta }}{4^{\eta }-1} \right) \right] \\+ & {} \sum \limits _{\eta = 1}^{q+1} \frac{2}{4^{j+\eta }-1}. \end{aligned}$$

This proves (23) for \(q+1\).

Remark 3

Note that the proof of Lemma 2 for \(j = 1\) is independent of the proof for the other values of j. Hence, once (21) is established, we also have (25) for arbitrary \(j \ge 1\), since

$$\begin{aligned} \theta _{j+q,i,m, r/2 } \ \ge \ \theta _{j+q} \ \ge \ 0. \end{aligned}$$

Using (20), Lemma 2 yields the following result

Corollary 4

For each \(j \ge 1\) and \(q \ge 0\),

$$\begin{aligned} \Theta _{j+q} \ \le \ \left( \prod \limits _{\eta = j+1}^{\infty } \frac{4^{\eta }}{4^{\eta }-1}\right) \left[ \Theta _{j}' \ + \ \prod \limits _{\eta = 1}^{j} \frac{4^{\eta }}{4^{\eta }-1} \right] \ - \ 1 \end{aligned}$$
(27)

and

$$\begin{aligned} \theta _{j+q} \ \ge \ \left[ \theta _{j}' - \frac{2}{3}\frac{1}{4^{j}}\left( \Theta _{j}' \ + \ \prod \limits _{\eta = 1}^{j} \frac{4^{\eta }}{4^{\eta }-1} \right) \right] \ + \ \sum \limits _{\eta = 1}^{q} \frac{2}{4^{j+\eta }-1}, \end{aligned}$$
(28)

provided that the term in brackets in (28) is non-negative.

Theorem 1 follows from (21), together with the observation that, for \(j = 0\), the trapezoidal weights (18) give \(\theta _{0,i,m,r} = 1/2\) for every i, m, r, so that \(\theta _{0} = \Theta _{0} = 1/2\). Slightly sharper bounds can be obtained by computing the right-hand sides of (27) and (28) for some larger values of j and q, as shown in Table 1.

Table 1 The right-hand sides of (27) and (28) for some values of j

4 Some problems on quadrature formulae based on equally spaced points

According to Pólya [15], if the sequence (1) satisfies

$$\begin{aligned} \lim \limits _{\tau \rightarrow \infty } I_{\tau }[p] \ = \ {\int \limits _{a}^{b}} p(x) dx \end{aligned}$$
(29)

for every polynomial p, then \(\left( I_{\tau }[f]\right) _{\tau \ \in \ \mathbb N}\) converges to \({\int \limits _{a}^{b}} f(x) dx\) for every continuous function f if and only if

$$\begin{aligned} ||I_{\tau } || \ := \sum \limits _{\ell = 0}^{n_{\tau }} |w_{\tau ,\ell }| \end{aligned}$$
(30)

is bounded in \(\tau\).

For the (interpolatory) Newton-Cotes quadrature, which is based on the equally spaced abscissae

$$\begin{aligned} x_{\tau ,\ell } \ = \ a + \ell \frac{b-a}{\tau }, \quad \ell = 0, 1, \dots , \tau (n_{\tau } = \tau ), \end{aligned}$$

the weights \(w_{\tau ,\ell }\) are all positive only for \(\tau \le 7\) and \(\tau = 9\) [7, p. 534]. Moreover, the sums (30) are not bounded in \(\tau\) [12, 14]. In fact, it was shown by Wilson [17] that a sequence of quadrature formulae at equally spaced abscissae with positive weights can only exist if \(n_{\tau }\) is at least proportional to \(d_{\tau }^{2}\), where \(d_{\tau }\) is the exactness degree of \(I_{\tau }\), that is, the largest positive integer such that

$$\begin{aligned} I_{\tau }[p] \ = \ {\int \limits _{a}^{b}} p(x) dx \end{aligned}$$
(31)

holds for every polynomial p of degree at most \(d_{\tau }\).
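
The two facts just cited are easy to observe numerically; the short sketch below (ours) computes the Newton-Cotes weights on \([-1,1]\) by solving the moment conditions and prints, for a few values of \(\tau\), whether all weights are positive together with the value of \(||I_{\tau }||\) in (30).

```python
# Illustrative sketch (ours): Newton-Cotes weights on [-1, 1] from the moment
# conditions (exactness on 1, x, ..., x^tau), then a look at positivity and at
# the sum of absolute weights ||I_tau|| of (30).
import numpy as np

def newton_cotes_weights(tau, a=-1.0, b=1.0):
    x = np.linspace(a, b, tau + 1)
    V = np.vander(x, tau + 1, increasing=True).T                     # V[k, l] = x_l^k
    moments = np.array([(b ** (k + 1) - a ** (k + 1)) / (k + 1) for k in range(tau + 1)])
    return np.linalg.solve(V, moments)

for tau in (7, 8, 9, 10, 20):
    w = newton_cotes_weights(tau)
    print(tau, bool(np.all(w > 0)), np.abs(w).sum())                 # positivity, ||I_tau||
```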

Following some ideas of Wilson [18], Huybrechs [9] introduced a method for obtaining quadrature rules for equally spaced points (with positive weights) based on least squares. The main idea is to choose the weights \(w_{\tau ,\ell }, \ell = 0, 1, \dots , n_{\tau }\) in (1) that satisfy (31) and minimize

$$\begin{aligned} \sum \limits _{\ell = 0}^{n_{\tau }} w_{\tau ,\ell }^{2}. \end{aligned}$$

Huybrechs showed that the weights obtained by this method are all positive provided that \(n_{\tau }\) is sufficiently larger than \(d_{\tau }\). It remains to check whether these rules represent Riemann sums in the sense of (3).
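
As an illustration of the idea (ours, using plain monomial moments for simplicity rather than Huybrechs' actual implementation), the minimizer of the weight norm subject to (31) is the minimum-norm solution of an underdetermined moment system:

```python
# Hedged sketch of the least-squares idea described above: on n + 1 equally
# spaced points, take the weights of minimal Euclidean norm subject to the
# exactness conditions (31) up to degree d. Helper names are ours.
import numpy as np

def least_squares_weights(n, d, a=-1.0, b=1.0):
    x = np.linspace(a, b, n + 1)
    V = np.vander(x, d + 1, increasing=True).T                       # V[k, l] = x_l^k
    moments = np.array([(b ** (k + 1) - a ** (k + 1)) / (k + 1) for k in range(d + 1)])
    w, *_ = np.linalg.lstsq(V, moments, rcond=None)                  # minimum-norm solution
    return x, w

x, w = least_squares_weights(n=40, d=10)
print(w.min(), w.sum())    # inspect positivity for this (n, d); the weights sum to b - a = 2
```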

Lastly, Klein and Berrut [11] introduced a method for approximating definite integrals by integrating Floater-Hormann rational interpolants. The resulting quadrature rules \(Q_{n,d}[f]\), indexed by the number of nodes \(n+1\) and the degree of exactness d (in the sense of (31)), converge to \({\int \limits _{a}^{b}} f(x) dx\) as \(n \rightarrow \infty\) (for fixed d) with order \(\left( \frac{b-a}{n}\right) ^{d+2}\) whenever f is of class \(C^{d+3}[a,b]\). However, convergence may fail as \(n \rightarrow \infty\) when \(d \approx n\) because, for \(d = n\), their method reduces to the Newton-Cotes quadrature. For fixed d, \(Q_{n,d}\) satisfies (29) (with n in place of \(\tau\)). In view of (30), it is not known whether these quadrature rules have positive weights for n sufficiently larger than d, or even whether convergence of \(Q_{n,d}[f]\) can be ensured for arbitrary continuous functions f as \(n \rightarrow \infty\).