1 Introduction

Let \(d\sigma \) be a (nonnegative) measure on the interval \([a,b]\), and consider the Gauss quadrature formula associated with it,

$$\begin{aligned} \int _{a}^{b}f(t)d\sigma (t)=\sum _{\nu =1}^{n}\lambda _{\nu }f(\tau _{\nu })+R_{n}^{G}(f), \end{aligned}$$
(1.1)

where \(\tau _{\nu }=\tau _{\nu }^{(n)}\) are the zeros of the nth degree (monic) orthogonal polynomial \(\pi _{n}(\cdot )=\pi _{n}(\cdot ;d\sigma )\). It is well known that the weights \(\lambda _{\nu }=\lambda _{\nu }^{(n)}\) are all positive, and formula (1.1) has precise degree of exactness \(d_{n}^{G}=2n-1\), i.e., \(R_{n}^{G}(f)=0\) for all \(f\in \mathbb {P}_{2n-1}\), where \(\mathbb {P}_{2n-1}\) denotes the set of polynomials of degree at most \(2n-1\) (cf. [7, Sect. 1.2]).
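Given the three-term recurrence coefficients of the monic orthogonal polynomials for \(d\sigma \), the nodes and weights of (1.1) can be computed by the well-known Golub–Welsch eigenvalue method. A minimal Python/NumPy sketch (the Chebyshev recurrence coefficients \(\beta _{0}=\pi ,\ \beta _{1}=1/2,\ \beta _{k}=1/4\) used for illustration are standard, but not taken from this paper):

```python
import numpy as np

def gauss(alpha, beta):
    """Golub-Welsch: n-point Gauss nodes/weights from the recurrence
    coefficients of the monic orthogonal polynomials; beta[0] is the
    integral of the measure d_sigma."""
    J = np.diag(alpha) + np.diag(np.sqrt(beta[1:]), 1) + np.diag(np.sqrt(beta[1:]), -1)
    nodes, V = np.linalg.eigh(J)           # nodes = zeros of pi_n
    return nodes, beta[0] * V[0, :] ** 2   # weights lambda_nu (all positive)

# Chebyshev measure of the first kind on [-1, 1]
n = 5
tau, lam = gauss(np.zeros(n), np.array([np.pi, 0.5] + [0.25] * (n - 2)))
print(np.isclose(lam @ tau**2, np.pi / 2))  # True
```

Here \(\int _{-1}^{1}t^{2}(1-t^{2})^{-1/2}dt=\pi /2\) is reproduced exactly, since \(t^{2}\in \mathbb {P}_{2n-1}\).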

An important issue regarding the Gauss formula is the study of its error term, which has been extensively investigated for more than a century (cf. [7, Sect. 4]). A simple error estimator, used in practice, is the following: Set \(I(f)=\int _{a}^{b}f(t)d\sigma (t),\ Q_{n}^{G}(f)=\sum _{\nu =1}^{n}\lambda _{\nu }f(\tau _{\nu })\), and consider a quadrature formula with \(m>n\) points, having quadrature sum \(Q_{m}(f)\) and degree of exactness greater than \(2n-1\). Then, we write

$$\begin{aligned} \left| R_{n}^{G}(f)\right| \simeq \left| Q_{n}^{G}(f)-Q_{m}(f)\right| , \end{aligned}$$
(1.2)

i.e., \(Q_{m}(f)\) plays the role of the “true” value of I(f). The simplest choice for \(Q_{m}(f)\) is \(Q_{n+1}^{G}(f)\), where with \(n+1\) new evaluations of the function (at the \(n+1\) new Gauss nodes) the degree of exactness is raised from \(2n-1\) to \(2n+1\), a rather minor improvement, while the process could also be unreliable (cf. [4, p. 199]).

Kronrod, motivated by his desire to estimate economically, yet accurately, the error of the Gauss formula, extended formula (1.1), obtaining the so-called Gauss–Kronrod quadrature formula,

$$\begin{aligned} \int _{a}^{b}f(t)d\sigma (t)=\sum _{\nu =1}^{n}\sigma _{\nu }f(\tau _{\nu })+\sum _{\mu =1}^{n+1}\sigma _{\mu }^{*}f(\tau _{\mu }^{*})+R_{n}^{K}(f), \end{aligned}$$
(1.3)

where \(\tau _{\nu }\) are the Gauss nodes, while the new nodes \(\tau _{\mu }^{*}=\tau _{\mu }^{*(n)}\) and all weights \(\sigma _{\nu }=\sigma _{\nu }^{(n)},\ \sigma _{\mu }^{*}=\sigma _{\mu }^{*(n)}\) are chosen such that formula (1.3) has maximum degree of exactness (at least) \(d_{n}^{K}=3n+1\). It turns out that the nodes \(\tau _{\mu }^{*}\) are zeros of a (monic) polynomial \(\pi _{n+1}^{*}(\cdot )=\pi _{n+1}^{*}(\cdot ;d\sigma )\), of degree \(n+1\), discovered much earlier by Stieltjes through his work on continued fractions and the moment problem, which is characterized by the orthogonality condition

$$\begin{aligned} \int _{a}^{b}\pi _{n+1}^{*}(t)t^{k}\pi _{n}(t)d\sigma (t)=0,\ \ k=0,1,\ldots ,n, \end{aligned}$$

i.e., \(\pi _{n+1}^{*}\) is orthogonal to all polynomials of lower degree relative to the variable-sign measure \(d\sigma ^{*}(t)=\pi _{n}(t)d\sigma (t)\) on \([a,b]\) (cf. [16]).

Now, if we set \(Q_{n}^{K}(f)=\sum _{\nu =1}^{n}\sigma _{\nu }f(\tau _{\nu })+\sum _{\mu =1}^{n+1}\sigma _{\mu }^{*}f(\tau _{\mu }^{*})\), then the benefit of using \(Q_{n}^{K}(f)\) in place of \(Q_{m}(f)\) in (1.2), i.e.,

$$\begin{aligned} \left| R_{n}^{G}(f)\right| \simeq \left| Q_{n}^{G}(f)-Q_{n}^{K}(f)\right| , \end{aligned}$$
(1.4)

is obvious; with \(n+1\) new evaluations of the function (at the nodes \(\tau _{\mu }^{*}\)) the degree of exactness is raised from \(2n-1\) to \(3n+1\), a substantial improvement.

However, it is well known that Gauss–Kronrod formulae fail to exist, with real and distinct nodes in the interval of integration and positive weights, for several of the classical measures; notable examples are the Hermite and the Laguerre measures, but the list also includes the Gegenbauer and the Jacobi measures for certain values of the involved parameters (cf. [16, Sect. 2.1 and the references cited therein]).

An alternative to the Gauss–Kronrod formula for estimating the error of the Gauss formula, developed by Laurie (cf. [13]), is the so-called anti-Gaussian quadrature formula,

$$\begin{aligned} \int _{a}^{b}f(t)d\sigma (t)=\sum _{\mu =1}^{n+1}w_{\mu }f(t_{\mu })+R_{n+1}^{AG}(f), \end{aligned}$$
(1.5)

which is an interpolatory formula designed to have an error precisely opposite to the error of the Gauss formula, that is, if \(Q_{n+1}^{AG}(f)=\sum _{\mu =1}^{n+1}w_{\mu }f(t_{\mu })\), then

$$\begin{aligned} I(p)-Q_{n+1}^{AG}(p)=-\left[ I(p)-Q_{n}^{G}(p)\right] \ \ \mathrm{for\ all}\ \ p\in \mathbb {P}_{2n+1}. \end{aligned}$$
(1.6)

The anti-Gaussian formula enjoys several desirable properties: The nodes \(t_{\mu }\) interlace with the Gauss nodes \(\tau _{\nu }\) and, with the possible exception of the first and the last one, the \(t_{\mu }\) are contained in \([a,b]\); furthermore, the weights \(w_{\mu }\) are all positive. In addition, the anti-Gaussian formula can easily be constructed.
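Laurie [13] showed that the anti-Gaussian formula can be obtained as the Gauss formula of the \((n+1)\times (n+1)\) Jacobi matrix in which the last recurrence coefficient \(\beta _{n}\) is doubled. A NumPy sketch for the Legendre measure (its recurrence coefficients \(\alpha _{k}=0,\ \beta _{0}=2,\ \beta _{k}=k^{2}/(4k^{2}-1)\) are standard, not stated in this paper), checking the defining property (1.6) on \(p(t)=t^{2n}\):

```python
import numpy as np

def gauss(alpha, beta):
    # Golub-Welsch: nodes/weights from a symmetric tridiagonal Jacobi matrix
    J = np.diag(alpha) + np.diag(np.sqrt(beta[1:]), 1) + np.diag(np.sqrt(beta[1:]), -1)
    x, V = np.linalg.eigh(J)
    return x, beta[0] * V[0, :] ** 2

n = 4
k = np.arange(1, n + 1)
beta = np.concatenate(([2.0], k**2 / (4.0 * k**2 - 1)))  # beta_0 .. beta_n
tau, lam = gauss(np.zeros(n), beta[:n])                  # n-point Gauss

beta_ag = beta.copy()
beta_ag[n] *= 2                                          # Laurie: double beta_n
t, w = gauss(np.zeros(n + 1), beta_ag)                   # (n+1)-point anti-Gaussian

I = 2.0 / (2 * n + 1)                                    # I(t^{2n}) for the Legendre measure
print(np.isclose(I - w @ t**(2 * n), -(I - lam @ tau**(2 * n))))  # True
```

The computed nodes \(t_{\mu }\) also interlace with the \(\tau _{\nu }\), as stated above.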

In effect, one can then use the \((2n+1)\)-point quadrature formula obtained by the quadrature sum

$$\begin{aligned} Q^{AvG}_{2n+1}(f)=\frac{1}{2}\left[ Q_{n}^{G}(f)+Q_{n+1}^{AG}(f)\right] \end{aligned}$$
(1.7)

in place of \(Q_{n}^{K}(f)\) in (1.4), in which case

$$\begin{aligned} \left| R_{n}^{G}(f)\right| \simeq \frac{\left| Q_{n}^{G}(f)-Q_{n+1}^{AG}(f)\right| }{2}. \end{aligned}$$

This new quadrature formula, based on \(Q^{AvG}_{2n+1}(f)\), is known as the averaged Gaussian quadrature formula (cf. [13]).
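As an illustration of this estimate (with illustrative choices not made in the paper: the Legendre measure and \(f(t)=e^{t}\), for which \(I(f)=e-e^{-1}\)), one can compare \(\left| Q_{n}^{G}(f)-Q_{n+1}^{AG}(f)\right| /2\) with the true error, constructing the anti-Gaussian formula by doubling \(\beta _{n}\) in the Jacobi matrix, as in Laurie [13]:

```python
import numpy as np

def gauss(alpha, beta):
    # Golub-Welsch eigenvalue method
    J = np.diag(alpha) + np.diag(np.sqrt(beta[1:]), 1) + np.diag(np.sqrt(beta[1:]), -1)
    x, V = np.linalg.eigh(J)
    return x, beta[0] * V[0, :] ** 2

n = 4
k = np.arange(1, n + 1)
beta = np.concatenate(([2.0], k**2 / (4.0 * k**2 - 1)))  # Legendre, beta_0 .. beta_n
tau, lam = gauss(np.zeros(n), beta[:n])                  # Gauss
beta_ag = beta.copy(); beta_ag[n] *= 2
t, w = gauss(np.zeros(n + 1), beta_ag)                   # anti-Gaussian

QG, QAG = lam @ np.exp(tau), w @ np.exp(t)
estimate = abs(QG - QAG) / 2
true_err = abs((np.e - 1.0 / np.e) - QG)
print(estimate, true_err)   # the two agree to leading order
```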

A few years after Laurie, Calvetti and Reichel (cf. [2]) defined the modified anti-Gaussian quadrature formula, the same way as Laurie did, except that instead of (1.6) formula (1.5) satisfies

$$\begin{aligned} I(p)-Q_{n+1}^{ MAG}(p)=-\gamma \left[ I(p)-Q_{n}^{G}(p)\right] \ \ \mathrm{for\ all}\ \ p\in \mathbb {P}_{2n+1},\ \ \gamma >0, \end{aligned}$$
(1.8)

where \(Q_{n+1}^{ MAG}(f)\) is the quadrature sum of this formula. Obviously, when \(\gamma =1\), the modified anti-Gaussian formula is precisely Laurie’s anti-Gaussian formula. Now, it turns out that a Gauss–Lobatto quadrature formula relative to a (nonnegative) symmetric measure on the real line is actually a modified anti-Gaussian formula (cf. [2]).

In a slightly different form, modified anti-Gaussian formulae were defined by Ehrich in [6] (essentially, he uses \(1+\gamma \) instead of \(\gamma \), in which case \(\gamma >-1\)). Ehrich’s definition was used by Spalević in order to define the so-called generalized averaged Gaussian quadrature formulae, which are constructed in such a way that the degree of exactness is maximized (cf. [17,18,19]).

Now, it is quite remarkable that for a certain class of measures, examined by Gautschi and the author in [11], such that the respective (monic) orthogonal polynomials, above a specific index, satisfy a three-term recurrence relation with constant coefficients, the nodes of the anti-Gaussian formula turn out to be the zeros of the respective Stieltjes polynomial; in this case, the resulting averaged Gaussian formula is precisely the corresponding Gauss–Kronrod formula, having elevated degree of exactness, while the two formulae give the same error estimate for the Gauss formula. This is shown in Sect. 3; that same result has very recently been obtained, in a different way, by Spalević (cf. [19]). Further, we generalize this by considering a class of measures such that the respective (monic) orthogonal and (monic) Stieltjes polynomials, of degree \(n+1\), are connected by a functional relation. For this class, a subclass of which are the measures in [11], we prove analogous results. Before all this, in the following section, we give a description and prove the most important properties of the anti-Gaussian formulae. Moreover, in Sect. 4, we first show that symmetric Gauss–Lobatto formulae are modified anti-Gaussian formulae, and then we specialize our results to the measures in [11]. The results in Sects. 2 and 4 are derived by a method which is new and appears in the literature for the first time. The paper concludes in Sect. 5 with some numerical examples illustrating our results of Sect. 3.

2 Anti-Gaussian quadrature formulae

The results presented in Theorem 2.1 below have originally been obtained, in a different way, by Laurie in [13].

First of all, the monic orthogonal polynomials relative to the measure \(d\sigma \) satisfy the three-term recurrence relation

$$\begin{aligned} \pi _{n+1}(t)= & {} (t-\alpha _{n})\pi _{n}(t)-\beta _{n}\pi _{n-1}(t),\ \ n=0,1,2,\ldots ,\nonumber \\ \pi _{0}(t)= & {} 1,\ \pi _{-1}(t)=0, \end{aligned}$$
(2.1)

where \(\alpha _{n}\in \mathbb {R}\) and \(\beta _{n}>0\).

Theorem 2.1

Consider the anti-Gaussian formula (1.5) for the measure \(d\sigma \) on the interval \([a,b]\). Then the following holds:

  1. (a)

    The nodes \(t_{\mu },\ \mu =1,2,\ldots ,n+1\), are the zeros of the \((n+1)\)st degree (monic) polynomial \(\pi _{n+1}^{AG}(\cdot )=\pi _{n+1}^{AG}(\cdot ;d\sigma )\) given by

    $$\begin{aligned} \pi _{n+1}^{AG}(t)=\pi _{n+1}(t)-\beta _{n}\pi _{n-1}(t). \end{aligned}$$
    (2.2)

          Furthermore, the nodes \(t_{\mu }\) are all real and interlace with the nodes \(\tau _{\nu }\) of the Gauss formula, that is,

    $$\begin{aligned} t_{n+1}<\tau _{n}<t_{n}<\cdots<t_{2}<\tau _{1}<t_{1}. \end{aligned}$$
    (2.3)

    The inner nodes \(t_{\mu },\ \mu =2,3,\ldots ,n\), are all in the interval \([a,b]\). The end nodes \(t_{1}\) and \(t_{n+1}\) are in \([a,b]\) if and only if

    $$\begin{aligned} \frac{\pi _{n+1}(b)}{\pi _{n-1}(b)}\ge \beta _{n}\quad \mathrm{and}\quad \frac{\pi _{n+1}(a)}{\pi _{n-1}(a)}\ge \beta _{n}, \end{aligned}$$
    (2.4)

    respectively.

  2. (b)

    The weights \(w_{\mu },\ \mu =1,2,\ldots ,n+1\), are given by the formula

    $$\begin{aligned} w_{\mu }=\frac{2\Vert \pi _{n}\Vert ^{2}}{\pi _{n}(t_{\mu })\pi _{n+1}^{AG^{\large \prime }}(t_{\mu })},\ \ \mu =1,2,\ldots ,n+1, \end{aligned}$$
    (2.5)

    where \(\Vert \cdot \Vert \) denotes the \(L_{2}\) norm.

          Furthermore, the weights are all positive.

  3. (c)

    Formula (1.5) has precise degree of exactness \(d_{n+1}^{AG}=2n-1\).

Proof

  1. (a)

    To prove that formula (1.5) with nodes the zeros of \(\pi _{n+1}^{AG}\) in (2.2) is an anti-Gaussian formula, it suffices to show that (1.6) is satisfied. First of all, formula (1.5) is interpolatory, and, by orthogonality, we have

    $$\begin{aligned} \int _{a}^{b}\pi _{n+1}^{AG}(t)p(t)d\sigma (t)=\int _{a}^{b}[\pi _{n+1}(t)-\beta _{n}\pi _{n-1}(t)]p(t)d\sigma (t)=0\\ \mathrm{for\ all}\ \ p\in \mathbb P_{n-2}, \end{aligned}$$

    hence, formula (1.5) has degree of exactness (at least) \(2n-1\) (cf. [7, Sect. 1.3]). As a result of this and the fact that the Gauss formula (1.1) also has degree of exactness \(2n-1\), we get

    $$\begin{aligned} I(p)-Q_{n+1}^{AG}(p)=-\left[ I(p)-Q_{n}^{G}(p)\right] =0\quad \mathrm{for\ all}\ \ p\in \mathbb {P}_{2n-1}. \end{aligned}$$
    (2.6)

    Consequently, to prove (1.6), it remains, by linearity, to show that

    $$\begin{aligned} I(t^{2n})-Q_{n+1}^{AG}(t^{2n})=-\left[ I(t^{2n})-Q_{n}^{G}(t^{2n})\right] \end{aligned}$$
    (2.7)

    and

    $$\begin{aligned} I(t^{2n+1})-Q_{n+1}^{AG}(t^{2n+1})=-\left[ I(t^{2n+1})-Q_{n}^{G}(t^{2n+1})\right] . \end{aligned}$$
    (2.8)

    As the \(\pi _{m}\) are monic polynomials, it is easy to see that

    $$\begin{aligned} t^{2n}=[\pi _{n+1}(t)-\beta _{n}\pi _{n-1}(t)]\pi _{n-1}(t)+p_{1}(t),\ \ p_{1}\in \mathbb P_{2n-1}, \end{aligned}$$
    (2.9)

    and

    $$\begin{aligned} t^{2n}=\pi _{n}^{2}(t)+p_{2}(t),\ \ p_{2}\in \mathbb P_{2n-1}. \end{aligned}$$
    (2.10)

    Now, the left-hand side in (2.7), upon inserting (2.9) and using orthogonality, the fact that \(\pi _{n+1}-\beta _{n}\pi _{n-1}\) is the nodal polynomial in formula (1.5), (2.6), and the relation

    $$\begin{aligned} \beta _{n}=\frac{\Vert \pi _{n}\Vert ^{2}}{\Vert \pi _{n-1}\Vert ^{2}} \end{aligned}$$
    (2.11)

    (cf. [7, Equation (5.3)]), gives

    $$\begin{aligned} I(t^{2n})-Q_{n+1}^{AG}(t^{2n})= & {} I((\pi _{n+1}-\beta _{n}\pi _{n-1})\pi _{n-1})-Q_{n+1}^{AG}((\pi _{n+1}-\beta _{n}\pi _{n-1})\pi _{n-1})\nonumber \\&+I(p_{1})-Q_{n+1}^{AG}(p_{1})\nonumber \\= & {} \displaystyle -\beta _{n}\int _{a}^{b}\pi _{n-1}^{2}(t)d\sigma (t)=-\beta _{n}\Vert \pi _{n-1}\Vert ^{2}=-\Vert \pi _{n}\Vert ^{2}.\nonumber \\ \end{aligned}$$
    (2.12)

    Also, the right-hand side in (2.7), upon inserting (2.10) and proceeding in a like manner, yields

    $$\begin{aligned} I(t^{2n})-Q_{n}^{G}(t^{2n})=\Vert \pi _{n}\Vert ^{2}, \end{aligned}$$
    (2.13)

    which, together with (2.12), proves (2.7). On the other hand, applying repeatedly the three-term recurrence relation (2.1), we get

    $$\begin{aligned} \pi _{m}(t)=t^{m}-(\alpha _{0}+\alpha _{1}+\cdots +\alpha _{m-1})t^{m-1}+\cdots . \end{aligned}$$

    The latter allows us to write

    $$\begin{aligned} t^{2n+1}= & {} [\pi _{n+1}(t)-\beta _{n}\pi _{n-1}(t)]\pi _{n}(t)+[2(\alpha _{0}+\alpha _{1}+\cdots +\alpha _{n-1})+\alpha _{n}]t^{2n}\\&+p_{3}(t),p_{3}\in \mathbb P_{2n-1}, \end{aligned}$$

    and

    $$\begin{aligned} t^{2n+1}=\pi _{n}(t)\pi _{n+1}(t)+[2(\alpha _{0}+\alpha _{1}+\cdots +\alpha _{n-1})+\alpha _{n}]t^{2n}+p_{4}(t),\\ p_{4}\in \mathbb P_{2n-1}. \end{aligned}$$

    Now, as in the proof of (2.7), using orthogonality, (2.12)–(2.13) and (2.6), we find

    $$\begin{aligned} I(t^{2n+1})-Q_{n+1}^{AG}(t^{2n+1})=-[2(\alpha _{0}+\alpha _{1}+\cdots +\alpha _{n-1})+\alpha _{n}]\Vert \pi _{n}\Vert ^{2}, \end{aligned}$$
    $$\begin{aligned} I(t^{2n+1})-Q_{n}^{G}(t^{2n+1})=[2(\alpha _{0}+\alpha _{1}+\cdots +\alpha _{n-1})+\alpha _{n}]\Vert \pi _{n}\Vert ^{2}, \end{aligned}$$

    which, combined, prove (2.8). To prove the properties of the nodes \(t_{\mu }\), we first note that, setting \(t=\tau _{\nu }\) in (2.2), we get

    $$\begin{aligned} \pi _{n+1}^{AG}(\tau _{\nu })=\pi _{n+1}(\tau _{\nu })-\beta _{n}\pi _{n-1}(\tau _{\nu }). \end{aligned}$$
    (2.14)

    Also, as \(\tau _{\nu }\) is a zero of \(\pi _{n}\), we have, from (2.1),

    $$\begin{aligned} \pi _{n+1}(\tau _{\nu })=-\beta _{n}\pi _{n-1}(\tau _{\nu }), \end{aligned}$$

    which, inserted into (2.14), yields

    $$\begin{aligned} \pi _{n+1}^{AG}(\tau _{\nu })=2\pi _{n+1}(\tau _{\nu }). \end{aligned}$$
    (2.15)

    The latter, in view of the separation property for the zeros of \(\pi _{n}\) and \(\pi _{n+1}\) (cf. [20, Theorem 3.3.2]), implies

    $$\begin{aligned} \mathrm{sign\,}\pi _{n+1}^{AG}(\tau _{\nu })=\mathrm{sign\,}\pi _{n+1}(\tau _{\nu })=(-1)^{\nu },\ \ \nu =1,2,\ldots ,n. \end{aligned}$$
    (2.16)

    In addition, it is clear that

    $$\begin{aligned} \begin{array}{c} \displaystyle \lim _{t\rightarrow \infty }\pi _{n+1}^{AG}(t)=\infty ,\\ \displaystyle \lim _{t\rightarrow -\infty }\pi _{n+1}^{AG}(t)=(-1)^{n+1}\infty . \end{array} \end{aligned}$$
    (2.17)

    From (2.16) and (2.17), it follows that all \(t_{\mu }\) are real and satisfy (2.3); in fact, the inner nodes \(t_{\mu },\ \mu =2,3,\ldots ,n\), are all in the interval \([a,b]\). Furthermore, in view of (2.16), the end nodes \(t_{1}\) and \(t_{n+1}\) are in \([a,b]\) if and only if

    $$\begin{aligned} \pi _{n+1}^{AG}(b)\ge 0\quad \mathrm{and}\quad (-1)^{n+1}\pi _{n+1}^{AG}(a)\ge 0, \end{aligned}$$

    which, on account of (2.2), is equivalent to (2.4).

  2. (b)

    Setting \(f(t)=\frac{\pi _{n+1}^{AG}(t)}{t-t_{\mu }}\pi _{n}(t),\ \mu =1,2,\ldots ,n+1\), in formula (1.5), we get

    $$\begin{aligned} \int _{a}^{b}\frac{\pi _{n+1}^{AG}(t)}{t-t_{\mu }}\pi _{n}(t)d\sigma (t)=w_{\mu }\pi _{n+1}^{AG^{\large \prime }}(t_{\mu })\pi _{n}(t_{\mu })+R_{n+1}^{AG}\left( \frac{\pi _{n+1}^{AG}(t)}{t-t_{\mu }}\pi _{n}(t)\right) . \end{aligned}$$
    (2.18)

    As \(\frac{\pi _{n+1}^{AG}(t)}{t-t_{\mu }}\) is a monic polynomial of degree n, we have, by orthogonality,

    $$\begin{aligned} \int _{a}^{b}\frac{\pi _{n+1}^{AG}(t)}{t-t_{\mu }}\pi _{n}(t)d\sigma (t)=\int _{a}^{b}\pi _{n}^{2}(t)d\sigma (t)=\Vert \pi _{n}\Vert ^{2}. \end{aligned}$$
    (2.19)

    Also, by virtue of (1.6), which can be equivalently written as

    $$\begin{aligned} R_{n+1}^{AG}(p)=-R_{n}^{G}(p)\ \ \mathrm{for\ all}\ \ p\in \mathbb {P}_{2n+1}, \end{aligned}$$

    the fact that \(\pi _{n}\) is the nodal polynomial in formula (1.1), and (2.19), we find

    $$\begin{aligned} \displaystyle R_{n+1}^{AG}\left( \frac{\pi _{n+1}^{AG}(t)}{t-t_{\mu }}\pi _{n}(t)\right)= & {} \displaystyle -R_{n}^{G}\left( \frac{\pi _{n+1}^{AG}(t)}{t-t_{\mu }}\pi _{n}(t)\right) \nonumber \\= & {} \displaystyle -\int _{a}^{b}\frac{\pi _{n+1}^{AG}(t)}{t-t_{\mu }}\pi _{n}(t)d\sigma (t)=-\Vert \pi _{n}\Vert ^{2}. \end{aligned}$$
    (2.20)

    Now, inserting (2.19) and (2.20) into (2.18) and solving for \(w_{\mu }\), we obtain (2.5).       Furthermore, on account of the interlacing property (2.3), we have

    $$\begin{aligned} \mathrm{sign\,}\pi _{n}(t_{\mu })=\mathrm{sign\,}\pi _{n+1}^{AG^{\large \prime }}(t_{\mu })=(-1)^{\mu -1},\ \ \mu =1,2,\ldots ,n+1, \end{aligned}$$

    which shows that the denominator on the right-hand side of (2.5) is positive and so are the weights.

  3. (c)

    As already mentioned in (a), formula (1.5) has degree of exactness (at least) \(2n-1\) and, in view of (2.20), the degree of exactness is precisely \(2n-1\) (cf. [7, Sect. 1.3]). \(\square \)
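Theorem 2.1 can be checked numerically. A NumPy sketch for the Legendre measure (recurrence coefficients \(\alpha _{k}=0,\ \beta _{0}=2,\ \beta _{k}=k^{2}/(4k^{2}-1)\), standard but not stated here), which computes the nodes as the zeros of (2.2) and the weights from (2.5), using \(\Vert \pi _{n}\Vert ^{2}=\beta _{0}\beta _{1}\cdots \beta _{n}\) (a consequence of (2.11) and the convention \(\beta _{0}=\int _{a}^{b}d\sigma (t)\)):

```python
import numpy as np
from numpy.polynomial import polynomial as P

n = 4
kk = np.arange(1, n + 1)
beta = np.concatenate(([2.0], kk**2 / (4.0 * kk**2 - 1)))  # beta_0 .. beta_n

def monic(m):
    # monic orthogonal polynomials via the recurrence (2.1), alpha_k = 0
    if m == 0:
        return np.array([1.0])
    p_prev, p = np.array([1.0]), np.array([0.0, 1.0])
    for k in range(1, m):
        p_prev, p = p, P.polysub(P.polymulx(p), beta[k] * p_prev)
    return p

pi_ag = P.polysub(monic(n + 1), beta[n] * monic(n - 1))    # (2.2)
t = np.sort(np.roots(pi_ag[::-1]).real)                    # anti-Gaussian nodes
norm2 = np.prod(beta)                                      # ||pi_n||^2
w = 2 * norm2 / (P.polyval(t, monic(n)) * P.polyval(t, P.polyder(pi_ag)))  # (2.5)
print(np.all(w > 0), np.isclose(w.sum(), 2.0), np.isclose(w @ t**2, 2/3))  # True True True
```

The last two checks confirm exactness on \(\mathbb {P}_{2n-1}\) for the moments \(\int _{-1}^{1}dt=2\) and \(\int _{-1}^{1}t^{2}dt=2/3\).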

3 Anti-Gaussian quadrature formulae with Stieltjes abscissae

We assume that the monic orthogonal polynomials relative to the measure \(d\sigma \) satisfy a three-term recurrence relation of the following kind,

$$\begin{aligned} \pi _{n+1}(t)= & {} (t-\alpha _{n})\pi _{n}(t)-\beta _{n}\pi _{n-1}(t),\ \ n=0,1,2,\ldots ,\nonumber \\ \alpha _{n}= & {} \alpha ,\ \ \beta _{n}=\beta \ \ \mathrm{for\ all}\ \ n\ge \ell , \end{aligned}$$
(3.1)

where \(\alpha _{n}\in \mathbb {R},\ \beta _{n}>0,\ \ell \in \mathbb {N}\), and \(\pi _{0}(t)=1,\ \pi _{-1}(t)=0\). Thus, the coefficients \(\alpha _{n}\) and \(\beta _{n}\) are constant, equal, respectively, to some \(\alpha \in \mathbb {R}\) and \(\beta >0\), for all \(n\ge \ell \). Any such measure \(d\sigma \) is known to be supported on a finite interval, say \([a,b]\) (cf. [14, Theorem 10]), and we indicate this, together with property (3.1), by writing \(d\sigma \in {\mathscr {M}}_\ell ^{(\alpha ,\beta )}[a,b]\).

Among the many orthogonal polynomials satisfying a recurrence relation of this kind are the four Chebyshev-type polynomials, and their modifications discussed in Allaway’s thesis [1, Chapter 4], as well as the four Bernstein–Szegö-type polynomials.

We first recall some results from [11].

Theorem 3.1

([11, Theorem 2.3]) Consider a measure \(d\sigma \in {\mathscr {M}}_\ell ^{(\alpha ,\beta )}[a,b]\). Then the corresponding Stieltjes polynomials are given by

$$\begin{aligned} \pi _{n+1}^{*}(t)=\pi _{n+1}(t)-\beta \pi _{n-1}(t)\quad \mathrm{for\ all}\ \ n\ge 2\ell -1. \end{aligned}$$
(3.2)
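Relation (3.2) can be verified against the defining orthogonality \(\int _{a}^{b}\pi _{n+1}^{*}(t)t^{k}\pi _{n}(t)d\sigma (t)=0,\ k=0,1,\ldots ,n\). A NumPy sketch for the Chebyshev measure of the first kind, which belongs to \({\mathscr {M}}_{2}^{(0,1/4)}[-1,1]\) (its recurrence coefficients \(\beta _{1}=1/2,\ \beta _{k}=1/4\) for \(k\ge 2\) are standard, not stated in this excerpt):

```python
import numpy as np
from numpy.polynomial import polynomial as P

def monic_cheb(m):
    # monic Chebyshev polynomials of the first kind: alpha_k = 0,
    # beta_1 = 1/2, beta_k = 1/4 for k >= 2 (so ell = 2)
    if m == 0:
        return np.array([1.0])
    p_prev, p = np.array([1.0]), np.array([0.0, 1.0])
    for k in range(1, m):
        p_prev, p = p, P.polysub(P.polymulx(p), (0.5 if k == 1 else 0.25) * p_prev)
    return p

n = 5                                                           # any n >= 2*ell - 1 = 3
star = P.polysub(monic_cheb(n + 1), 0.25 * monic_cheb(n - 1))   # (3.2)

# high-order Gauss-Chebyshev rule, exact for the polynomial integrands below
N = 64
t = np.cos((2 * np.arange(1, N + 1) - 1) * np.pi / (2 * N))
wq = np.full(N, np.pi / N)
pin, st = P.polyval(t, monic_cheb(n)), P.polyval(t, star)
ok = all(abs(wq @ (st * t**k * pin)) < 1e-12 for k in range(n + 1))
print(ok)  # True
```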

Proposition 3.1

([11, Proposition 2.4]) Consider a measure \(d\sigma \in {\mathscr {M}}_\ell ^{(\alpha ,\beta )}[a,b]\) and let \(\tau _{\nu }\) be the zeros of the corresponding orthogonal polynomial \(\pi _{n}\). Then

$$\begin{aligned} \pi _{n+1}(\tau _{\nu })=\frac{1}{2}\pi _{n+1}^{*}(\tau _{\nu }),\ \ \nu =1,2,\ldots ,n, \end{aligned}$$
(3.3)

for all \(n\ge 2\ell -1\).

If \(d\sigma \in {\mathscr {M}}_\ell ^{(\alpha ,\beta )}[a,b]\), then trivially \(\alpha _n\rightarrow \alpha ,\ \beta _n\rightarrow \beta \) as \(n\rightarrow \infty \), and it follows (cf. [3, p. 121]) that

$$\begin{aligned} \left[ \alpha -2\sqrt{\beta },\alpha +2\sqrt{\beta }\,\right] \end{aligned}$$
(3.4)

is the “limiting spectral interval” of \(d\sigma \). Although it may well be that \(d\sigma \) has support points outside the interval (3.4) (cf. [3, Exercise 4.6, p. 128]), for inclusion results we will assume the following property.

Property A

The measure \(d\sigma \in {\mathscr {M}}_\ell ^{(\alpha ,\beta )}[a,b]\) is such that

$$\begin{aligned} a=\alpha -2\sqrt{\beta },\ \ b=\alpha +2\sqrt{\beta }. \end{aligned}$$

Theorem 3.2

([11, Theorem 3.2]) Consider a measure \(d\sigma \in {\mathscr {M}}_\ell ^{(\alpha ,\beta )}[a,b]\). Then the following holds:

  1. (a)

    The Gauss–Kronrod formula (1.3) has the interlacing property for all \(n\ge 2\ell -1\), that is,

    $$\begin{aligned} \tau _{n+1}^{*}<\tau _{n}<\tau _{n}^{*}<\cdots<\tau _{2}^{*}<\tau _{1}<\tau _{1}^{*}. \end{aligned}$$
    (3.5)
  2. (b)

    If \(d\sigma \) has Property A, then all \(\tau _{\mu }^{*}\) are in \([a,b]\) for all \(n\ge 2\ell -1\).

  3. (c)

    All weights \(\sigma _{\nu },\ \sigma _{\mu }^{*}\) in formula (1.3) are positive for each \(n\ge 2\ell -1\); in particular,

    $$\begin{aligned} \sigma _{\nu }=\frac{1}{2}\lambda _{\nu },\ \ \nu =1,2,\ldots ,n, \end{aligned}$$
    (3.6)

    where \(\lambda _{\nu },\ \nu =1,2,\ldots ,n\), are the weights in the Gauss formula (1.1).

  4. (d)

    Formula (1.3) has degree of exactness (at least) \(4n-2\ell +2\) if \(n\ge 2\ell -1\).

Remark 3.1

Property A allows one to prove that \(\frac{\pi _{n+1}(b)}{\pi _{n-1}(b)}\ge \beta \) and \(\frac{\pi _{n+1}(a)}{\pi _{n-1}(a)}\ge \beta \) (cf. the proof of Theorem 3.2(b) in [11]), which are equivalent to \(\tau _{1}^{*}\in [a,b]\) and \(\tau _{n+1}^{*}\in [a,b]\), respectively. Analogous conditions are required in the anti-Gaussian formula (1.5) (cf. Equation (2.4)) for \(t_{1}\in [a,b]\) and \(t_{n+1}\in [a,b]\).

We can now present our results.

Theorem 3.3

Consider a measure \(d\sigma \in {\mathscr {M}}_\ell ^{(\alpha ,\beta )}[a,b]\) and let \(n\ge 2\ell -1\). Then the following holds:

  1. (a)

    The anti-Gaussian formula (1.5) is given by

    $$\begin{aligned} t_{\mu }=\tau _{\mu }^{*},\ \ w_{\mu }=2\sigma _{\mu }^{*},\ \ \mu =1,2,\ldots ,n+1, \end{aligned}$$
    (3.7)

    where \(\tau _{\mu }^{*}\) are the Kronrod nodes, i.e., the zeros of the Stieltjes polynomial \(\pi _{n+1}^{*}\), and \(\sigma _{\mu }^{*}\) are the corresponding weights in the Gauss–Kronrod formula (1.3).

  2. (b)

    The averaged Gaussian formula obtained by the quadrature sum (1.7) is precisely the Gauss–Kronrod formula (1.3), having degree of exactness (at least) \(4n-2\ell +2\). Furthermore, the two formulae give the same error estimate for \(R_{n}^{G}(f)\), that is,

    $$\begin{aligned} \left| Q_{n}^{G}(f)-Q^{AvG}_{2n+1}(f)\right| =\frac{\left| Q_{n}^{G}(f)-Q_{n+1}^{AG}(f)\right| }{2}=\left| Q_{n}^{G}(f)-Q_{n}^{K}(f)\right| . \end{aligned}$$

Proof

  1. (a)

    From (2.2) and (3.2), in view of (3.1), we have

    $$\begin{aligned} \pi _{n+1}^{AG}(t)=\pi _{n+1}(t)-\beta _{n}\pi _{n-1}(t)=\pi _{n+1}(t)-\beta \pi _{n-1}(t)=\pi _{n+1}^{*}(t), \end{aligned}$$

    hence, the nodes \(t_{\mu }\) of the anti-Gaussian formula (1.5) are the zeros \(\tau _{\mu }^{*}\) of the Stieltjes polynomial \(\pi _{n+1}^{*}\). Moreover, as the weights \(\sigma _{\mu }^{*}\) of the Gauss–Kronrod formula (1.3) are given by

    $$\begin{aligned} \sigma _{\mu }^{*}=\displaystyle \frac{\Vert \pi _{n}\Vert ^{2}}{\pi _{n}(\tau _{\mu }^{*})\pi _{n+1}^{*^{\large \prime }}(\tau _{\mu }^{*})},\ \ \mu =1,2,\ldots ,n+1 \end{aligned}$$

    (cf. [16, Equation (2.4)]), (2.5) implies, in view of what has already been proved, that \(w_{\mu }=2\sigma _{\mu }^{*},\ \mu =1,2,\ldots ,n+1\).

  2. (b)

    From (1.7), (1.1), (1.5) and (1.3), in view of (3.6), (3.7) and Theorem 3.2(d), we have

    $$\begin{aligned} \begin{array}{rcl} Q^{AvG}_{2n+1}(f)&{}=&{}\displaystyle \frac{1}{2}\left[ Q_{n}^{G}(f)+Q_{n+1}^{AG}(f)\right] =\displaystyle \sum _{\nu =1}^{n}\frac{1}{2}\lambda _{\nu }f(\tau _{\nu })+\sum _{\mu =1}^{n+1}\frac{1}{2}w_{\mu }f(t_{\mu })\\ &{}=&{}\displaystyle \sum _{\nu =1}^{n}\sigma _{\nu }f(\tau _{\nu })+\sum _{\mu =1}^{n+1}\sigma _{\mu }^{*}f(\tau _{\mu }^{*})=Q_{n}^{K}(f), \end{array} \end{aligned}$$

    which proves our assertion. \(\square \)

Remark 3.2

In [18, Equation (11)], Spalević showed that, for the averaged Gaussian formula to have degree of exactness (at least) \(3n+1\), i.e., that of a typical Gauss–Kronrod formula, it is required that

$$\begin{aligned} \lim _{n\rightarrow \infty }\alpha _{n}=A\ \ \mathrm{and}\ \ \lim _{n\rightarrow \infty }\beta _{n}=B, \end{aligned}$$
(3.8)

where \(A\in \mathbb {R}\) and \(B\ge 0\); in this case, the averaged Gaussian formula is a good alternative to the Gauss–Kronrod formula for estimating the error of the Gauss formula, and for this reason Spalević called it the optimal averaged Gaussian formula.

Now, in Theorem 3.3, \(d\sigma \in {\mathscr {M}}_\ell ^{(\alpha ,\beta )}[a,b]\), hence, (3.8) is trivially satisfied, with \(A=\alpha \) and \(B=\beta \); moreover, as \(n\ge 2\ell -1\), the degree of exactness satisfies \(4n-2\ell +2\ge 3n+1\). Thus, the averaged Gaussian formula obtained in this case is an optimal one.

Remark 3.3

An interpolatory formula for a measure \(d\sigma \) on the interval \([a,b]\), having as nodes the zeros of the respective Stieltjes polynomial, was originally considered by Monegato for the Legendre measure (cf. [15, Sect. II.1]). For measures \(d\sigma \in {\mathscr {M}}_\ell ^{(\alpha ,\beta )}[a,b]\) and \(n\ge 2\ell -1\), this formula has been studied in [11, Sect. 4]; it was shown that the weights are given by (3.7), and the formula has degree of exactness \(2n-1\), although no connection was made with the anti-Gaussian formula.

As already mentioned in the introduction, the results of Theorem 3.3 have very recently been obtained, in a different way, by Spalević (cf. [19, Theorem 3.1]). Subsequently, in [5], Djukić, Reichel, Spalević and Tomanović considered Bernstein–Szegö measures of any one of the four kinds with a quadratic divisor polynomial, which all belong to the class \({\mathscr {M}}_\ell ^{(0,1/4)}[-1,1]\), with \(\ell \) depending on the particular measure; they applied Theorem 3.1 in [19] and investigated when the thus obtained averaged Gaussian formulae have all nodes in the interval of integration.
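For the Chebyshev measure of the first kind, the coincidence of nodes in Theorem 3.3(a) can be observed directly: by Theorem 3.1, \(\pi _{n+1}^{*}(t)=\pi _{n+1}(t)-\frac{1}{4}\pi _{n-1}(t)\propto (t^{2}-1)U_{n-1}(t)\), whose zeros are \(\pm 1\) and \(\cos (j\pi /n),\ j=1,\ldots ,n-1\), and these are reproduced by the doubled-\(\beta _{n}\) Jacobi matrix of Laurie's construction [13]. A NumPy sketch (recurrence coefficients standard, not stated in this excerpt):

```python
import numpy as np

# Chebyshev-1: alpha_k = 0, beta_0 = pi, beta_1 = 1/2, beta_k = 1/4 (k >= 2)
n = 6
beta = np.concatenate(([np.pi, 0.5], np.full(n - 1, 0.25)))  # beta_0 .. beta_n
beta[n] *= 2.0                          # anti-Gaussian: double beta_n
J = np.diag(np.sqrt(beta[1:]), 1)
J = J + J.T
t = np.linalg.eigvalsh(J)               # anti-Gaussian nodes t_mu

# zeros of pi*_{n+1} ~ (t^2 - 1) U_{n-1}(t): the Kronrod nodes tau*_mu
tau_star = np.sort(np.concatenate(([-1.0, 1.0], np.cos(np.arange(1, n) * np.pi / n))))
print(np.allclose(t, tau_star))  # True
```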

What has been said so far, and in particular Proposition 3.1, inspires the following generalization of Theorem 3.3.

Theorem 3.4

Consider a (nonnegative) measure \(d\sigma \) on the interval \([a,b]\), and assume that the respective (monic) orthogonal polynomial \(\pi _{n+1}(\cdot )=\pi _{n+1}(\cdot ;d\sigma )\) and (monic) Stieltjes polynomial \(\pi _{n+1}^{*}(\cdot )=\pi _{n+1}^{*}(\cdot ;d\sigma )\), both of degree \(n+1\), satisfy, for some \(\gamma >0\),

$$\begin{aligned} \pi _{n+1}(\tau _{\nu })=\frac{1}{1+\gamma }\pi _{n+1}^{*}(\tau _{\nu }),\ \ \nu =1,2,\ldots ,n, \end{aligned}$$
(3.9)

where \(\tau _{\nu }\) are the zeros of the corresponding nth degree (monic) orthogonal polynomial \(\pi _{n}(\cdot )=\pi _{n}(\cdot ;d\sigma )\). Then the following holds:

  1. (a)

    The Gauss–Kronrod formula (1.3) has the interlacing property (3.5), in which case the \(\tau _{\mu }^{*}\) are all real and, with the possible exception of the first and the last one, are contained in the interval \([a,b]\). Furthermore, all weights \(\sigma _{\nu },\ \sigma _{\mu }^{*}\) are positive.

  2. (b)

    The modified anti-Gaussian formula (1.5) given by the quadrature sum

    $$\begin{aligned} Q_{n+1}^{ MAG}(f)=\sum _{\mu =1}^{n+1}(1+\gamma )\sigma _{\mu }^{*}f(\tau _{\mu }^{*}), \end{aligned}$$
    (3.10)

    satisfies

    $$\begin{aligned} I(p)-Q_{n+1}^{ MAG}(p)=-\gamma \left[ I(p)-Q_{n}^{G}(p)\right] \ \ \mathrm{for\ all}\ \ p\in \mathbb {P}_{3n+1}. \end{aligned}$$
    (3.11)
  3. (c)

    The generalized averaged Gaussian formula obtained by the quadrature sum

    $$\begin{aligned} Q^{ GAvG}_{2n+1}(f)=\frac{1}{1+\gamma }\left[ \gamma Q_{n}^{G}(f)+Q_{n+1}^{{ MAG}}(f)\right] , \end{aligned}$$
    (3.12)

    is precisely the Gauss–Kronrod formula (1.3), having degree of exactness (at least) \(3n+1\). Furthermore, the two formulae give the same error estimate for \(R_{n}^{G}(f)\), that is,

    $$\begin{aligned} \left| Q_{n}^{G}(f)-Q^{\textit{GAvG}}_{2n+1}(f)\right| =\frac{1}{1+\gamma }\left| Q_{n}^{G}(f)-Q_{n+1}^{\textit{MAG}}(f)\right| =\left| Q_{n}^{G}(f)-Q_{n}^{K}(f)\right| . \end{aligned}$$

Proof

  1. (a)

    The proof of the interlacing property follows precisely the steps of the proof of that same property in Theorem 2.1(a), if (2.15) is replaced by

    $$\begin{aligned} \pi _{n+1}^{*}(\tau _{\nu })=(1+\gamma )\pi _{n+1}(\tau _{\nu }),\ \ \nu =1,2,\ldots ,n,\ \ \gamma >0 \end{aligned}$$
    (3.13)

    (cf. (3.9)), i.e., if we repeat the proof in Theorem 2.1(a) with \(\pi _{n+1}^{*}\) in place of \(\pi _{n+1}^{AG}\).       Moreover, it is known that

    $$\begin{aligned} \sigma _{\nu }=\lambda _{\nu }+\frac{\Vert \pi _{n}\Vert ^{2}}{\pi _{n}^{\large \prime }(\tau _{\nu }) \pi _{n+1}^{*}(\tau _{\nu })},\ \ \nu =1,2,\ldots ,n \end{aligned}$$
    (3.14)

    (cf. [16, Equation (2.4)]), with

    $$\begin{aligned} \lambda _{\nu }=-\frac{\Vert \pi _{n}\Vert ^{2}}{\pi _{n}^{\prime }(\tau _{\nu }) \pi _{n+1}(\tau _{\nu })},\ \ \nu =1,2,\ldots ,n \end{aligned}$$
    (3.15)

    (cf. [20, Equation (3.4.7)]), where \(\lambda _{\nu }\) are the weights in the corresponding Gauss formula (1.1). Now, inserting (3.13) into (3.14) yields, on account of (3.15),

    $$\begin{aligned} \sigma _{\nu }=\lambda _{\nu }-\frac{1}{1+\gamma }\lambda _{\nu } =\frac{\gamma }{1+\gamma }\lambda _{\nu },\ \ \nu =1,2,\ldots ,n. \end{aligned}$$
    (3.16)

    Since \(\gamma >0\) and the \(\lambda _{\nu }\) are all positive, the latter proves the positivity of the \(\sigma _{\nu }\), while the positivity of the \(\sigma _{\mu }^{*}\) is equivalent to the interlacing property (cf. [16, Sect. 2]).

  2. (b)

    Starting from (1.3),

    $$\begin{aligned} \int _{a}^{b}p(t)d\sigma (t)=\sum _{\nu =1}^{n}\sigma _{\nu }p(\tau _{\nu })+\sum _{\mu =1}^{n+1}\sigma _{\mu }^{*}p(\tau _{\mu }^{*})\quad \mathrm{for\ all}\ \ p\in \mathbb {P}_{3n+1}, \end{aligned}$$

    and inserting (3.16), we get

    $$\begin{aligned} \int _{a}^{b}p(t)d\sigma (t)=\sum _{\nu =1}^{n}\frac{\gamma }{1+\gamma }\lambda _{\nu }p(\tau _{\nu })+\sum _{\mu =1}^{n+1}\sigma _{\mu }^{*}p(\tau _{\mu }^{*})\quad \mathrm{for\ all}\ \ p\in \mathbb {P}_{3n+1}, \end{aligned}$$

    which, by a simple computation, can be written as

    $$\begin{aligned} \begin{array}{r} \displaystyle \int _{a}^{b}p(t)d\sigma (t)-\sum _{\mu =1}^{n+1}(1+\gamma )\sigma _{\mu }^{*}p(\tau _{\mu }^{*}) =-\gamma \left[ \int _{a}^{b}p(t)d\sigma (t)-\sum _{\nu =1}^{n}\lambda _{\nu }p(\tau _{\nu })\right] \\ \mathrm{for\ all}\ \ p\in \mathbb {P}_{3n+1}, \end{array} \end{aligned}$$

    which is precisely (3.11).

  3. (c)

    Starting from (3.12), and using (1.1), (3.10) and (1.3), we have, by means of (3.16),

    $$\begin{aligned} \begin{array}{rcl} Q^{\textit{GAvG}}_{2n+1}(f)&{}=&{}\displaystyle \sum _{\nu =1}^{n}\frac{\gamma }{1+\gamma }\lambda _{\nu }f(\tau _{\nu }) +\frac{1}{1+\gamma }\sum _{\mu =1}^{n+1}(1+\gamma )\sigma _{\mu }^{*}f(\tau _{\mu }^{*})\\ &{}=&{}\displaystyle \sum _{\nu =1}^{n}\sigma _{\nu }f(\tau _{\nu })+\sum _{\mu =1}^{n+1}\sigma _{\mu }^{*}f(\tau _{\mu }^{*}) =Q_{n}^{K}(f), \end{array} \end{aligned}$$

    which proves our assertion. \(\square \)

Remark 3.4

For \(\gamma =1\), (3.9) is precisely (3.3), which is satisfied by a measure \(d\sigma \) belonging to the class \({\mathscr {M}}_\ell ^{(\alpha ,\beta )}[a,b]\). Hence, \({\mathscr {M}}_\ell ^{(\alpha ,\beta )}[a,b]\) is a subclass of the class of measures considered in Theorem 3.4. The identification of this last class requires a thorough investigation.

4 Symmetric Gauss–Lobatto quadrature formulae

Let the (nonnegative) measure \(d\sigma \) be symmetric with respect to the origin on the interval \([-a,a]\). Then, the respective monic orthogonal polynomials \(\pi _{m}(\cdot )=\pi _{m}(\cdot ;d\sigma )\) satisfy a three-term recurrence relation (2.1) with \(\alpha _{n}=0,\ n=0,1,2,\ldots \), and the Gauss formula (1.1) has the form

$$\begin{aligned} \int _{-a}^{a}f(t)d\sigma (t)=\sum _{\nu =1}^{n}\lambda _{\nu }f(\tau _{\nu })+R_{n}^{G}(f), \end{aligned}$$
(4.1)

where the nodes \(\tau _{\nu }\) are symmetric with respect to the origin and the weights \(\lambda _{\nu }\) corresponding to symmetric nodes are equal.

We consider the \((n+1)\)-point Gauss–Lobatto quadrature formula for the measure \(d\sigma \) on the interval \([-a,a]\),

$$\begin{aligned} \int _{-a}^{a}f(t)d\sigma (t)=w_{1}^{L}f(a)+\sum _{\nu =2}^{n}w_{\nu }^{L}f(\tau _{\nu }^{L})+w_{n+1}^{L}f(-a)+R_{n+1}^{L}(f), \end{aligned}$$
(4.2)

where \(\tau _{\nu }^{L}=\tau _{\nu }^{L(n)}\) are the zeros of the \((n-1)\)st degree (monic) orthogonal polynomial \(\pi _{n-1}^{L}(\cdot )=\pi _{n-1}^{L}(\cdot ;d\sigma ^{L})\) relative to the measure \(d\sigma ^{L}(t)=(a^{2}-t^{2})d\sigma (t)\) on \([-a,a]\). The weights \(w_{\nu }^{L}=w_{\nu }^{L(n)}\) are all positive, and formula (4.2) has precise degree of exactness \(d_{n+1}^{L}=2n-1\) (cf. [7, Sect. 2.1.1]).

First of all, from Christoffel’s Theorem (cf. [20, Sect. 2.5 (2)]), one can derive a formula for the polynomial \(\pi _{n-1}^{L}\) in terms of the \(\pi _{m}\)’s,

$$\begin{aligned} (t^{2}-a^{2})\pi _{n-1}^{L}(t)=\pi _{n+1}(t)-\frac{\pi _{n+1}(a)}{\pi _{n-1}(a)}\pi _{n-1}(t). \end{aligned}$$
(4.3)
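As a concrete check of (4.3), take the Chebyshev measure \(d\sigma (t)=(1-t^{2})^{-1/2}dt\) on \([-1,1]\), so that \(a=1\) and \(d\sigma ^{L}(t)=(1-t^{2})^{1/2}dt\); both families of monic orthogonal polynomials can then be generated from their (standard, assumed known) recurrence coefficients and the two sides of (4.3) compared pointwise. A minimal Python sketch:

```python
import numpy as np

def monic_cheb(kind, m, t):
    """Monic orthogonal polynomial pi_m at t, via the symmetric three-term
    recurrence pi_{k+1} = t*pi_k - beta_k*pi_{k-1}, pi_0 = 1, pi_{-1} = 0.
    kind='T': d sigma  = (1-t^2)^(-1/2) dt, beta_1 = 1/2, beta_k = 1/4 (k >= 2);
    kind='U': d sigma^L = (1-t^2)^(+1/2) dt, beta_k = 1/4 (k >= 1)."""
    p_prev, p = np.zeros_like(t), np.ones_like(t)
    for k in range(m):
        beta = 0.0 if k == 0 else (0.5 if kind == 'T' and k == 1 else 0.25)
        p_prev, p = p, t * p - beta * p_prev
    return p

n, a = 5, 1.0
t = np.linspace(-0.9, 0.9, 9)

# left- and right-hand sides of (4.3)
lhs = (t**2 - a**2) * monic_cheb('U', n - 1, t)
ratio = monic_cheb('T', n + 1, np.array([a]))[0] / monic_cheb('T', n - 1, np.array([a]))[0]
rhs = monic_cheb('T', n + 1, t) - ratio * monic_cheb('T', n - 1, t)
print(np.max(np.abs(lhs - rhs)))  # agreement to machine precision
```

For this measure the ratio \(\pi _{n+1}(1)/\pi _{n-1}(1)\) equals 1/4 for every \(n\ge 2\), which the code reproduces.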

Moreover, the error term of each of formulae (4.1) and (4.2) can be expressed in terms of a higher-order derivative of f. As both of these formulae have degree of exactness \(2n-1\), we assume that \(f\in C^{2n}[-a,a]\). First,

$$\begin{aligned} R_{n}^{G}(f)=\frac{f^{(2n)}(\xi ^{G})}{(2n)!}\int _{-a}^{a}\pi _{n}^{2}(t)d\sigma (t) =\frac{\Vert \pi _{n}\Vert ^{2}}{(2n)!}f^{(2n)}(\xi ^{G}),\ \ -a<\xi ^{G}<a \end{aligned}$$
(4.4)

(cf. [7, Equation (1.18)]). Similarly, for the error term of formula (4.2), we have

$$\begin{aligned} R_{n+1}^{L}(f)=\frac{f^{(2n)}(\xi ^{L})}{(2n)!}\int _{-a}^{a}(t^{2}-a^{2})\left[ \pi _{n-1}^{L}(t)\right] ^{2}d\sigma (t),\ \ -a<\xi ^{L}<a, \end{aligned}$$

where, inserting (4.3) and using orthogonality, we get

$$\begin{aligned} \begin{array}{rcl} R_{n+1}^{L}(f)&{}=&{}\displaystyle -\frac{f^{(2n)}(\xi ^{L})}{(2n)!}\frac{\pi _{n+1}(a)}{\pi _{n-1}(a)} \int _{-a}^{a}\pi _{n-1}(t)\pi _{n-1}^{L}(t)d\sigma (t)\\ &{}=&{}\displaystyle -\frac{f^{(2n)}(\xi ^{L})}{(2n)!}\frac{\pi _{n+1}(a)}{\pi _{n-1}(a)} \int _{-a}^{a}\pi _{n-1}^{2}(t)d\sigma (t)\\ &{}=&{}\displaystyle -\frac{f^{(2n)}(\xi ^{L})}{(2n)!}\frac{\pi _{n+1}(a)}{\pi _{n-1}(a)}\Vert \pi _{n-1}\Vert ^{2}, \end{array} \end{aligned}$$

hence, in view of (2.11),

$$\begin{aligned} R_{n+1}^{L}(f)=-\frac{\pi _{n+1}(a)}{\beta _{n}\pi _{n-1}(a)}\frac{\Vert \pi _{n}\Vert ^{2}}{(2n)!}f^{(2n)}(\xi ^{L}),\ \ -a<\xi ^{L}<a. \end{aligned}$$
(4.5)

We can now show that the Gauss–Lobatto formula (4.2) is a modified anti-Gaussian formula. This was originally proved, in a different way, by Calvetti and Reichel in [2]. We then specialize this result to the measures of the previous section.

Theorem 4.1

The Gauss–Lobatto formula (4.2) for the symmetric measure \(d\sigma \) on the interval \([-a,a]\) is a modified anti-Gaussian formula of type (1.5), satisfying (1.8) with

$$\begin{aligned} \gamma =\frac{\pi _{n+1}(a)}{\beta _{n}\pi _{n-1}(a)}. \end{aligned}$$
(4.6)

Proof

The proof goes along the lines of the proof of the first part of Theorem 2.1(a). Setting \(Q_{n+1}^{L}(f)=w_{1}^{L}f(a)+\sum _{\nu =2}^{n}w_{\nu }^{L}f(\tau _{\nu }^{L})+w_{n+1}^{L}f(-a)\), it suffices, by linearity, to show that

$$\begin{aligned} I(t^{i})-Q_{n+1}^{L}(t^{i})=-\gamma \left[ I(t^{i})-Q_{n}^{G}(t^{i})\right] ,\ \ i=0,1,\ldots ,2n+1, \end{aligned}$$

or, equivalently,

$$\begin{aligned} R_{n+1}^{L}(t^{i})=-\gamma R_{n}^{G}(t^{i}),\ \ i=0,1,\ldots ,2n+1, \end{aligned}$$
(4.7)

with \(\gamma \) given by (4.6). First of all, as both formulae (4.1) and (4.2) have degree of exactness \(2n-1\), we have

$$\begin{aligned} R_{n}^{G}(t^{i})=R_{n+1}^{L}(t^{i})=0,\ \ i=0,1,\ldots ,2n-1, \end{aligned}$$

hence, (4.7) is satisfied for \(i=0,1,\ldots ,2n-1\). Furthermore, both formulae (4.1) and (4.2) are symmetric, thus, they integrate exactly all odd monomials; in particular,

$$\begin{aligned} R_{n}^{G}(t^{2n+1})=R_{n+1}^{L}(t^{2n+1})=0, \end{aligned}$$

showing the validity of (4.7) for \(i=2n+1\). Therefore, it only remains to verify (4.7) for \(i=2n\). Setting \(f(t)=t^{2n}\) in (4.4) and (4.5), we get

$$\begin{aligned} R_{n}^{G}(t^{2n})=\Vert \pi _{n}\Vert ^{2} \end{aligned}$$

and

$$\begin{aligned} R_{n+1}^{L}(t^{2n})=-\frac{\pi _{n+1}(a)}{\beta _{n}\pi _{n-1}(a)}\Vert \pi _{n}\Vert ^{2}, \end{aligned}$$

respectively, proving (4.7) for \(i=2n\), and concluding our proof. \(\square \)
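Theorem 4.1 can be illustrated numerically for the Legendre measure \(d\sigma (t)=dt\) on \([-1,1]\). The ingredients below are classical Legendre facts assumed for the illustration, not taken from the text above: the \((n+1)\)-point Lobatto rule has nodes \(\pm 1\) together with the zeros of \(P_{n}'\) and weights \(2/(n(n+1)[P_{n}(x)]^{2})\), the monic values are \(\pi _{k}(1)=2^{k}(k!)^{2}/(2k)!\), and \(\beta _{n}=n^{2}/(4n^{2}-1)\). With \(f(t)=t^{2n}\), relation (4.7) can then be checked directly:

```python
import numpy as np
from math import factorial
from numpy.polynomial import legendre as L

n = 6
f = lambda t: t ** (2 * n)
I = 2.0 / (2 * n + 1)                       # exact value of int t^(2n) dt on [-1,1]

# n-point Gauss-Legendre rule, as in (4.1)
xg, wg = L.leggauss(n)
RG = I - wg @ f(xg)

# (n+1)-point Gauss-Lobatto rule, as in (4.2): endpoints -1, 1 plus the
# zeros of P_n'; classical weights 2/(n(n+1) P_n(x)^2)
xi = L.Legendre.basis(n).deriv().roots()
Pn = L.legval(xi, [0.0] * n + [1.0])
QL = (2.0 / (n * (n + 1))) * (f(-1.0) + f(1.0)) + (2.0 / (n * (n + 1)) / Pn**2) @ f(xi)
RL = I - QL

# gamma from (4.6), with pi_k(1) = 2^k (k!)^2/(2k)! and beta_n = n^2/(4n^2-1)
pi_at_1 = lambda k: 2.0**k * factorial(k) ** 2 / factorial(2 * k)
gamma = pi_at_1(n + 1) / ((n**2 / (4.0 * n**2 - 1.0)) * pi_at_1(n - 1))

print(RL, -gamma * RG)                      # the two sides of (4.7) for i = 2n
```

For \(n=2\) this reduces to a hand check: the Lobatto rule is Simpson's rule, \(R_{2}^{G}(t^{4})=8/45\), \(R_{3}^{L}(t^{4})=-4/15\), and \(\gamma =3/2\).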

If we now start with a symmetric measure \(d\sigma \in {\mathscr {M}}_\ell ^{(\alpha ,\beta )}[a,b]\), satisfying Property A, then, necessarily, \(\alpha =0\) and \([a,b]=[-2\sqrt{\beta },2\sqrt{\beta }]\). We have the following

Corollary 4.1

Consider a symmetric measure \(d\sigma \in {\mathscr {M}}_\ell ^{(0,\beta )}[-2\sqrt{\beta },2\sqrt{\beta }]\) and \(n\ge \ell \). Then the Gauss–Lobatto formula (4.2) is a modified anti-Gaussian formula of type (1.5), satisfying (1.8) with

$$\begin{aligned} \gamma =\frac{\pi _{n+1}(2\sqrt{\beta })}{\beta \pi _{n-1}(2\sqrt{\beta })}. \end{aligned}$$

Proof

We apply Theorem 4.1 with \(\beta _{n}=\beta \) for all \(n\ge \ell \) and \(a=2\sqrt{\beta }\). \(\square \)

Remark 4.1

Clearly, the \(\gamma \) in (4.6) is a positive constant. Moreover, for a symmetric measure \(d\sigma \in {\mathscr {M}}_\ell ^{(0,\beta )}[-2\sqrt{\beta },2\sqrt{\beta }]\), the result of Theorem 4.1 holds for values of \(n<\ell \), but with \(\beta _{n}\) not necessarily equal to \(\beta \).

5 Numerical examples

Example 5.1

We approximate the integral

$$\begin{aligned} \int _{-1}^{1}\frac{e^{\omega t^{2}}\sqrt{1-t^{2}}}{1+8t^{2}}dt, \end{aligned}$$
(5.1)

using the Gauss formula (1.1) for the Bernstein–Szegö measure \(d\sigma (t)=\frac{(1-t^{2})^{1/2}}{1+8t^{2}}dt, -1\le t\le 1\), which is symmetric and belongs to the class \({\mathscr {M}}_{2}^{(0,1/4)}[-1,1]\). We want to estimate the error by means of either the Gauss–Kronrod formula (1.3) or the anti-Gaussian formula (1.5) or the averaged Gaussian formula obtained by the quadrature sum (1.7), all for the measure \(d\sigma \).

Gauss–Kronrod formulae for Bernstein–Szegö measures have been studied in [10, 12]. In particular, it was shown that, for the measure \(d\sigma \) and all \(n\ge 1\), the Gauss–Kronrod formula (1.3) has the interlacing property (3.5), all nodes are contained in \([-1,1]\), all weights are positive, and the formula has degree of exactness \(4n-1\) for \(n\ge 2\) and 5 for \(n=1\) (cf. [10, Theorem 5.2] and [12]).

Based on this and Theorem 3.3, the anti-Gaussian formula (1.5) for the Bernstein–Szegö measure \(d\sigma \) and \(n\ge 1\) has all nodes in \([-1,1]\) and all weights positive, while the corresponding averaged Gaussian formula obtained by the quadrature sum (1.7) is precisely the Gauss–Kronrod formula for \(d\sigma \).

Furthermore, we have the following estimates:

$$\begin{aligned} \left| R_{n}^{G}(f)\right| \simeq \left| Q_{n}^{G}(f)-Q_{n}^{K}(f)\right| =\left| Q_{n}^{G}(f)-Q^{AvG}_{2n+1}(f)\right| =\frac{\left| Q_{n}^{G}(f)-Q_{n+1}^{AG}(f)\right| }{2}, \end{aligned}$$
(5.2)

and

$$\begin{aligned} \left| R_{n}^{G}(f)\right| \simeq \left| Q_{n}^{G}(f)-Q_{n+1}^{AG}(f)\right| . \end{aligned}$$
(5.3)
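As a sketch of how these estimates can be produced without the software of [8, 9] (which we do not reproduce here), one can generate the recurrence coefficients of the Bernstein–Szegö measure by a discretized Stieltjes procedure, obtain the Gauss rule from the Jacobi matrix (Golub–Welsch), and obtain the anti-Gaussian rule from the same matrix with \(\beta _{n}\) doubled, as in Laurie's construction. The following Python code is an illustrative reimplementation under these assumptions, with \(\omega =1\) and \(n=5\); the Gauss–Chebyshev discretization of \(d\sigma \) and the choice of a 15-point reference rule are ours:

```python
import numpy as np

# Discretize d sigma(t) = (1-t^2)^(1/2)/(1+8t^2) dt by writing it as
# [(1-t^2)/(1+8t^2)] (1-t^2)^(-1/2) dt and using Gauss-Chebyshev nodes.
M = 200
tc = np.cos((2 * np.arange(1, M + 1) - 1) * np.pi / (2 * M))
dsig = (np.pi / M) * (1 - tc**2) / (1 + 8 * tc**2)

def ip(c1, c2):
    """(p, q) = int_{-1}^{1} p(t) q(t) d sigma(t), for coefficient arrays c1, c2."""
    return np.sum(np.polyval(c1, tc) * np.polyval(c2, tc) * dsig)

def betas(N):
    """beta_0, ..., beta_{N-1} by the Stieltjes procedure (alpha_k = 0 by symmetry)."""
    b = np.zeros(N)
    p_prev, p = np.array([0.0]), np.array([1.0])      # pi_{-1}, pi_0
    nrm = b[0] = ip(p, p)
    for k in range(1, N):
        # pi_k = t*pi_{k-1} - beta_{k-1}*pi_{k-2}
        p_prev, p = p, np.polysub(np.append(p, 0.0), b[k - 1] * p_prev)
        nrm_new = ip(p, p)
        b[k] = nrm_new / nrm
        nrm = nrm_new
    return b

def gauss_rule(b, m):
    """m-point Gauss rule from beta_0..beta_{m-1} (Golub-Welsch, symmetric measure)."""
    J = np.diag(np.sqrt(b[1:m]), 1)
    x, V = np.linalg.eigh(J + J.T)
    return x, b[0] * V[0] ** 2

n, omega = 5, 1.0
f = lambda t: np.exp(omega * t**2)
b = betas(15)
xg, wg = gauss_rule(b, n)                          # Q_n^G
ba = b[: n + 1].copy(); ba[n] *= 2.0               # anti-Gaussian: beta_n doubled
xa, wa = gauss_rule(ba, n + 1)                     # Q_{n+1}^{AG}
QG, QA = wg @ f(xg), wa @ f(xa)
xt, wt = gauss_rule(b, 15)                         # reference value of the integral
est52, est53 = abs(QG - QA) / 2.0, abs(QG - QA)    # estimates (5.2) and (5.3)
print(est52, est53, abs(QG - wt @ f(xt)))
```

In particular, the computed \(\beta _{k}\) for \(k\ge 2\) should come out equal to 1/4, consistent with \(d\sigma \in {\mathscr {M}}_{2}^{(0,1/4)}[-1,1]\).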

In Tables 1 and 2, we give the numerical results of estimates (5.2) and (5.3), respectively, each together with the modulus of the actual error. (Numbers in parentheses indicate decimal exponents.) All computations were performed on a SUN Ultra 5 computer in quad precision (machine precision \(1.93\cdot 10^{-34}\)). The true value of the integral was computed by means of Gaussian quadrature for the Bernstein–Szegö measure \(d\sigma \), using software from [8, 9].

Table 1 Estimate (5.2) and corresponding actual error in approximating the integral (5.1) using the Gauss formula (1.1)
Table 2 Estimate (5.3) and corresponding actual error in approximating the integral (5.1) using the Gauss formula (1.1)

As the Gauss–Kronrod formula (1.3) and the averaged Gaussian formula obtained by the quadrature sum (1.7) are identical, they give the same (cf. (5.2)) extremely accurate estimate for the error of the Gauss formula (1.1); in fact, for \(\omega =0.25\) or 0.5 with \(n\ge 10\), and for \(\omega =1.0\), 2.0 or 4.0 with \(n\ge 15\), estimate (5.2) is almost identical to the modulus of the actual error. Estimate (5.3), obtained by means of the anti-Gaussian formula (1.5), is not as accurate, as one expects on account of the lower degree of exactness of the formula; however, in almost all cases, it provides the correct order of magnitude of the actual error.

Example 5.2

We approximate the integral

$$\begin{aligned} \int _{-2}^{2}\frac{\cos {2t}}{a^{2}+t^{2}}d\sigma (t), \end{aligned}$$
(5.4)

using the Gauss formula (1.1) for the measure \(d\sigma \) on the interval \([-2,2]\), which is such that the corresponding orthogonal polynomials satisfy the three-term recurrence relation

$$\begin{aligned} \pi _{n+1}(t)= & {} t\pi _{n}(t)-\beta _{n}\pi _{n-1}(t),\ \ n=0,1,2,\ldots ,\\ \pi _{0}(t)= & {} 1,\ \pi _{-1}(t)=0, \end{aligned}$$

with

$$\begin{aligned} \beta _{0}=2\pi ,\ \ \beta _{1}=2,\ \ \beta _{n}=1,\ n\ge 2. \end{aligned}$$

Obviously, \(d\sigma \) is symmetric and belongs to the class \({\mathscr {M}}_{2}^{(0,1)}[-2,2]\). We want to estimate the error of the Gauss formula (1.1) by means of either the Gauss–Kronrod formula (1.3) or the anti-Gaussian formula (1.5) or the averaged Gaussian formula obtained by the quadrature sum (1.7), all for the measure \(d\sigma \).

From Theorem 3.2, the Gauss–Kronrod formula (1.3) for the measure \(d\sigma \) and all \(n\ge 3\) has the interlacing property (3.5), all nodes are contained in \([-2,2]\), all weights are positive and the formula has degree of exactness (at least) \(4n-2\).

Also, from Theorem 3.3, in view of the above, the anti-Gaussian formula (1.5) for the measure \(d\sigma \) and \(n\ge 3\) has all nodes in \([-2,2]\) and all weights positive, while the corresponding averaged Gaussian formula obtained by the quadrature sum (1.7) is precisely the Gauss–Kronrod formula for \(d\sigma \).
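Because the recurrence coefficients of \(d\sigma \) are given explicitly, the stated degree of exactness can be tested directly: build \(Q_{n}^{G}\) and the anti-Gaussian \(Q_{n+1}^{AG}\) (Jacobi matrix with \(\beta _{n}\) doubled, as in Laurie's construction), average them as in (1.7), and compare with a much larger Gauss rule for the same measure, whose quadrature sums for these monomials are exact. A Python sketch with \(n=5\):

```python
import numpy as np

def beta(k):
    """Recurrence coefficients of d sigma from Example 5.2."""
    return 2.0 * np.pi if k == 0 else (2.0 if k == 1 else 1.0)

def rule(m, double_last=False):
    """m-point Gauss rule for d sigma via the Jacobi matrix (Golub-Welsch);
    with double_last=True, beta_{m-1} is doubled, which yields the m-point
    anti-Gaussian companion of the (m-1)-point Gauss formula."""
    b = np.array([beta(k) for k in range(m)])
    if double_last:
        b[-1] *= 2.0
    J = np.diag(np.sqrt(b[1:]), 1)
    x, V = np.linalg.eigh(J + J.T)
    return x, b[0] * V[0] ** 2

n = 5
xg, wg = rule(n)                        # Q_n^G
xa, wa = rule(n + 1, double_last=True)  # Q_{n+1}^{AG}
xt, wt = rule(40)                       # reference rule, exact through degree 79

# relative deviation of the averaged quadrature sum (1.7) from the exact
# moments, over all monomials t^i of degree i <= 4n - 2
dev = max(abs(0.5 * (wg @ xg**i + wa @ xa**i) - wt @ xt**i)
          / max(1.0, abs(wt @ xt**i)) for i in range(4 * n - 1))
print("max relative deviation through degree 4n - 2:", dev)
```

The deviation comes out at roundoff level, and the anti-Gaussian nodes indeed lie in \([-2,2]\), in agreement with the theorems cited above.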

As in the previous example, the numerical results of estimates (5.2) and (5.3), together with the modulus of the actual error, are tabulated in Tables 3 and 4, respectively. The true value of the integral was again computed by means of Gaussian quadrature for the measure \(d\sigma \), using software from [8, 9].

Table 3 Estimate (5.2) and corresponding actual error in approximating the integral (5.4) using the Gauss formula (1.1)
Table 4 Estimate (5.3) and corresponding actual error in approximating the integral (5.4) using the Gauss formula (1.1)

It should be noted that, as in the previous example, the Gauss–Kronrod formula (1.3) and the averaged Gaussian formula obtained by the quadrature sum (1.7) are identical, and therefore they give the same (cf. (5.2)) very accurate estimate for the error of the Gauss formula (1.1); see, e.g., in Table 3 the cases \(a=1.0\) or 2.0 with \(n=20\), while, for \(a=4.0\) and \(n=15\) or 20, estimate (5.2) is almost identical to the modulus of the actual error. Estimate (5.3), obtained by means of the anti-Gaussian formula (1.5), is not as accurate, although, in most cases, it provides the correct order of magnitude of the actual error.