1 Introduction

While the theory of ordinary and basic hypergeometric functions goes back to the time of Euler and Gauß, the study of elliptic hypergeometric functions only started in the late 1990s, after the publication of a paper [2] by Frenkel and Turaev. As a testament to the usefulness of hypergeometric functions, and specifically also elliptic hypergeometric functions, Frenkel and Turaev introduced them not for their own sake, but while studying solutions to the Yang–Baxter equation. Since then, elliptic hypergeometric functions have appeared in many other applications. To get an impression of these applications you can browse the list of papers on elliptic hypergeometric functions on the website of Rosengren (see the note at the top of the bibliography).

Given the name, it might come as no surprise that to understand elliptic hypergeometric series and integrals it is important to understand both ordinary hypergeometric functions and elliptic functions. The theory of elliptic hypergeometric functions is really a reflection of the theory for ordinary hypergeometric functions, but usually twisted in a slightly more complicated way. One of the questions one should often ask is “I know hypergeometric functions satisfy this property, does it have an elliptic hypergeometric analogue?” and the other way around “What is the ordinary/basic hypergeometric version of this elliptic hypergeometric result?” On the other hand the basic building blocks for elliptic hypergeometric series are elliptic functions, so it is important to know the basic properties of elliptic functions as well.

This explains the order of these notes: First we briefly discuss the basics of (basic) hypergeometric series, then the basics of elliptic functions, before we consider the elliptic hypergeometric functions themselves. We then continue by showcasing a few of the most important identities satisfied by elliptic hypergeometric functions. In particular we will be focused on the generalizations of the classical orthogonal polynomials (Legendre, Jacobi, Wilson, etc.) to the elliptic level.

2 Hypergeometric Series

A general reference for ordinary hypergeometric functions is [1].

Definition 2.1

A hypergeometric series is a series \(\sum _{n}d_{n}\) for which the quotient of two subsequent terms \(r(n) = d_{n+1}/d_{n}\) is a rational function of n.

The name hypergeometric originates from the geometric series \(\sum _{k=0}^{\infty }x^{k} = 1/(1 - x)\), which is the special case of a hypergeometric series where the quotient r(n) is a constant function. Other examples of hypergeometric series are \(\mathrm{e}^{x} =\sum _{n=0}^{\infty }x^{n}/n!\) and the binomial \((1 + x)^{n} =\sum _{ k=0}^{n}\binom{n}{k}x^{k}\).

Notice that any rational function can be factored

$$\displaystyle{r(n) = \frac{(n + a_{1})(n + a_{2})\cdots (n + a_{r})} {(n + b_{1})(n + b_{2})\cdots (n + b_{s})} z.}$$

Thus the zeros are at \(n = -a_{j}\) and the poles at \(n = -b_{j}\). This means that

$$\displaystyle{d_{n} = d_{0}\prod _{k=0}^{n-1}r(k) = d_{ 0}\frac{(a_{1},a_{2},\ldots,a_{r})_{n}} {(b_{1},b_{2},\ldots,b_{s})_{n}} z^{n}}$$

where we use the notation

Definition 2.2

The Pochhammer symbol \((a)_{n}\) is defined for \(n \in \mathbb{Z}_{\geq 0}\) as

$$\displaystyle{(a)_{n} =\prod _{ k=0}^{n-1}(a + k)\;.}$$

Note that this implies that \((a)_{0} = 1\). We use an abbreviation for products of these Pochhammer symbols:

$$\displaystyle{(a_{1},a_{2},\ldots,a_{r})_{n} = (a_{1})_{n}(a_{2})_{n}\cdots (a_{r})_{n}}$$
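These definitions are easy to experiment with numerically. A small sketch in Python (the function names `poch` and `poch_multi` are illustrative choices, not standard notation):

```python
from math import factorial

def poch(a, n):
    """Pochhammer symbol (a)_n = a (a+1) ... (a+n-1), with (a)_0 = 1."""
    result = 1.0
    for k in range(n):
        result *= a + k
    return result

def poch_multi(params, n):
    """The abbreviation (a_1, ..., a_r)_n = (a_1)_n (a_2)_n ... (a_r)_n."""
    result = 1.0
    for a in params:
        result *= poch(a, n)
    return result

# (a)_0 = 1 (an empty product) and (1)_n = n!
assert poch(2.5, 0) == 1.0
assert poch(1, 6) == factorial(6)
assert poch_multi([2, 3], 2) == poch(2, 2) * poch(3, 2)
```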

Considering this general expression for the terms of a hypergeometric series we introduce the notation

Definition 2.3

The hypergeometric series are defined as

$$\displaystyle{_{r}F_{s}\bigg[\begin{array}{*{10}c} a_{1},\ldots,a_{r} \\ b_{1},\ldots,b_{s} \end{array};z\bigg] =\sum _{ n=0}^{\infty } \frac{(a_{1},\ldots,a_{r})_{n}} {(1,b_{1},\ldots,b_{s})_{n}}z^{n}\;.}$$

Notice the 1 which is always added as a denominator parameter. First of all \((1)_{n} = n!\), so it indeed appears in the series we gave before. Indeed we have

$$\displaystyle{\mathrm{e}^{x} = _{ 0}F_{0}\bigg[\begin{array}{*{10}c} -\\ -\end{array};x\bigg]\;,\quad (1+x)^{n} = _{ 1}F_{0}\bigg[\begin{array}{*{10}c} -n\\ - \end{array};-x\bigg]\;.}$$

Thus it often saves us some writing. But more importantly it means that the summand \(d_{n} = 0\) for n < 0 (in generic cases), so the starting point of our series is an inherent boundary. If you really do not want the 1 as a b-parameter, you can add it as an a-parameter, after which the \((1)_{n}\) in the numerator cancels against the \((1)_{n}\) in the denominator, so the geometric series is written as \(1/(1 - x) = \,_{1}F_{0}(1;-;x)\).

For positive integer n the series for \((1 + x)^{n}\) only contains a finite number of nonzero terms. Such a series is called a terminating hypergeometric series. You can easily spot them in the \(_{r}F_{s}\) notation, as one of the numerator arguments must be a negative integer.
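Since the terms satisfy \(d_{0} = 1\) and \(d_{n+1} = r(n)d_{n}\), hypergeometric series are easy to sum numerically. A minimal sketch in Python (function name and truncation level are illustrative), which also shows the terminating example \((1+x)^{n}\):

```python
import math

def rFs(a_params, b_params, z, terms=200):
    """Partial sum of rFs: start from d_0 = 1 and repeatedly multiply by
    r(n) = prod(n + a_i) / (prod(n + b_j) * (n + 1)) * z,
    where the (n + 1) comes from the implicit '1' denominator parameter."""
    total, d = 0.0, 1.0
    for n in range(terms):
        total += d
        num = 1.0
        for a in a_params:
            num *= n + a
        den = n + 1.0
        for b in b_params:
            den *= n + b
        d *= num / den * z
    return total

# 0F0(-;-;x) = e^x
assert abs(rFs([], [], 0.5) - math.exp(0.5)) < 1e-12
# 1F0(-n;-;-x) = (1 + x)^n: the numerator parameter -3 makes it terminate
assert abs(rFs([-3], [], -0.2) - 1.2**3) < 1e-12
```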

To consider a series as an analytic function, we need to consider whether it converges. Since hypergeometric series are defined by a simple property of the quotient of two subsequent terms, the ratio test is easily applied. Indeed we have to consider the limit \(\lim _{n\rightarrow \infty }r(n)\). Any rational function has a limit at infinity, and this gives the following result for the convergence of hypergeometric series:

Theorem 2.4

The series \(_{r}F_{s}(a_{1},\ldots,a_{r};b_{1},\ldots,b_{s};z)\) converges if one of the following holds

  • It is terminating

  • If r ≤ s

  • If r = s + 1 and \(\vert z\vert <1\)

and it diverges if one of the following holds

  • It is nonterminating and r > s + 1

  • It is nonterminating and r = s + 1 and \(\vert z\vert> 1\) .

Hypergeometric series are intricately linked to the Gamma function.

Definition 2.5

The Gamma function is defined as the unique meromorphic function which equals

$$\displaystyle{\varGamma (z) =\int _{ 0}^{\infty }t^{z-1}\mathrm{e}^{-t}\mathop{ }\!\mathrm{ d}t}$$

for \(\mathfrak{R}(z)> 0\).

Note that \(\varGamma (z + 1) = z\varGamma (z)\) (by integration by parts), so we can easily extend the domain of the function to \(\mathbb{C}\setminus \mathbb{Z}_{\leq 0}\) once we have defined it for \(\mathfrak{R}(z)> 0\). This difference equation also ensures that Γ has simple poles at \(\mathbb{Z}_{\leq 0}\), as it would imply for example that for \(\mathfrak{R}(z)> -1\) we have \(\varGamma (z) = (1/z)\int _{t=0}^{\infty }t^{z}\mathrm{e}^{-t}\mathop{ }\!\mathrm{ d}t\), which is an analytic function times 1∕z. The first relation to hypergeometric series comes from the fact that

$$\displaystyle{(a)_{n} = \frac{\varGamma (a + n)} {\varGamma (a)} }$$

which only uses the difference equation for the Gamma function. Thus hypergeometric series can be written as

$$\displaystyle{_{r}F_{s}\bigg[\begin{array}{*{10}c} a_{1},\ldots,a_{r} \\ b_{1},\ldots,b_{s} \end{array};z\bigg] = \frac{\varGamma (1,b_{1},\ldots,b_{s})} {\varGamma (a_{1},\ldots,a_{r})} \sum _{n=0}^{\infty } \frac{\varGamma (a_{1} + n,\ldots,a_{r} + n)} {\varGamma (1 + n,b_{1} + n,\ldots,b_{s} + n)}z^{n}}$$

using the same abbreviation for products of Gamma functions as we introduced before for pochhammer symbols. The second relation to hypergeometric series comes from integrals. Consider the integral (over a complex contour)

$$\displaystyle{\int _{-i\infty }^{i\infty }\frac{\varGamma (a + s,b + s,-s)} {\varGamma (c + s)} (-z)^{s}\,\frac{\mathrm{d}s} {2\pi \mathrm{i}} }$$

where the integration contour separates the poles of Γ(a + s) and Γ(b + s) from those of Γ(−s). (Notice that 1∕Γ(z) is an entire function). The contour, and the poles are pictured in Fig. 1.

Fig. 1

The contour for the Barnes’ integral weaves between the poles of the integrand

The \((-z)^{s}\) is defined using a branch cut along the negative real axis (for −z, so not for the integration variable s). Shifting the contour to the right we encounter all the poles of Γ(−s). If we move the contour over those poles, the remaining integral tends to zero in the limit (use Stirling’s formula to prove this). Thus the original integral equals the infinite series of residues we have to pick up. That is

$$\displaystyle\begin{array}{rcl} & & \int _{-i\infty }^{i\infty }\frac{\varGamma (a + s,b + s,-s)} {\varGamma (c + s)} (-z)^{s}\,\frac{\mathrm{d}s} {2\pi \mathrm{i}} {}\\ & & =\sum _{ n=0}^{\infty }\mathop{\mathrm{Res}}\bigg(\frac{\varGamma (a + s,b + s,-s)} {\varGamma (c + s)} (-z)^{s},s = n\bigg) {}\\ & & =\mathop{ \mathrm{Res}}\bigg(\frac{\varGamma (a + s,b + s,-s)} {\varGamma (c + s)} (-z)^{s},s = 0\bigg)\sum _{ n=0}^{\infty }\frac{\mathop{\mathrm{Res}}{\bigl ( \frac{\varGamma (a+s,b+s,-s)} {\varGamma (c+s)} (-z)^{s},s = n\bigr )}} {\mathop{\mathrm{Res}}{\bigl ( \frac{\varGamma (a+s,b+s,-s)} {\varGamma (c+s)} (-z)^{s},s = 0\bigr )}} {}\\ & & = \frac{\varGamma (a,b)} {\varGamma (c)} \sum _{n=0}^{\infty } \frac{(a,b)_{n}} {(c,-n)_{n}}(-z)^{n} = \frac{\varGamma (a,b)} {\varGamma (c)} \sum _{n=0}^{\infty }\frac{(a,b)_{n}} {(c,1)_{n}}z^{n} {}\\ \end{array}$$

which is the hypergeometric series \(_{2}F_{1}\). Thus integrals involving Gamma functions are related to hypergeometric series by picking up residues. A third relation is that a hypergeometric series can sometimes be evaluated, that is, written without an infinite sum in terms of simple functions, using Gamma functions. For example we have the famous evaluation (for convergent series)

$$\displaystyle{ _{2}F_{1}\bigg[\begin{array}{*{10}c} a,b\\ c \end{array};1\bigg] = \frac{\varGamma (c,c - a - b)} {\varGamma (c - b,c - a)}\;. }$$
(1)
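Evaluation (1) can be checked numerically with a simple partial sum; the following sketch (parameter values and tolerance are arbitrary illustrative choices) does this in Python:

```python
from math import gamma

def gauss_at_1(a, b, c, terms=4000):
    """Partial sum of 2F1(a, b; c; 1); the series converges when Re(c - a - b) > 0."""
    total, d = 0.0, 1.0
    for n in range(terms):
        total += d
        d *= (n + a) * (n + b) / ((n + c) * (n + 1))
    return total

a, b, c = 0.3, 0.5, 2.1          # here c - a - b = 1.3 > 0
lhs = gauss_at_1(a, b, c)
rhs = gamma(c) * gamma(c - a - b) / (gamma(c - b) * gamma(c - a))
assert abs(lhs - rhs) < 1e-4
```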

Exercise 2.6

Use Stirling’s formula to show that the Gauß hypergeometric series \(_{2}F_{1}(a,b;c;1)\) converges for \(\mathfrak{R}(c - a - b)> 0\). When does a hypergeometric series \(_{r+1}F_{r}\) evaluated at z = 1 converge?

Exercise 2.7

Fill in the details of the derivation that

$$\displaystyle{\int _{-i\infty }^{i\infty }\frac{\varGamma (a + s,b + s,-s)} {\varGamma (c + s)} (-z)^{s}\,\frac{\mathrm{d}s} {2\pi \mathrm{i}} = \frac{\varGamma (a,b)} {\varGamma (c)} _{2}F_{1}\bigg[\begin{array}{*{10}c} a,b\\ c \end{array};z\bigg]\;.}$$

That is, show that (when the series converges) the integrand converges to zero as \(\mathfrak{R}(s) \rightarrow \infty\).

3 Basic Hypergeometric Series

A generalization of the hypergeometric series are the basic hypergeometric, or q-hypergeometric series. The basic reference on this topic is [3].

Definition 3.1

A basic hypergeometric series is a series \(\sum _{n}d_{n}\) for which the quotient of two subsequent terms \(r(n) = d_{n+1}/d_{n}\) is a rational function of \(q^{n}\).

We will assume from now on that \(\vert q\vert <1\). Notice that this means that r(n) is periodic in n with period \(2\pi \mathrm{i}/\log (q)\). Many results from ordinary hypergeometric series generalize. There exists a q-Gamma function which generalizes the ordinary Gamma function, but it is actually often more convenient to work with the q-Pochhammer symbols:

Definition 3.2

The q-Pochhammer symbols are defined for \(n \in \mathbb{N} \cup \{\infty \}\) as

$$\displaystyle{(a;q)_{n} =\prod _{ k=0}^{n-1}(1 - aq^{k})\;,}$$

For \(n =\infty\) we often omit the subscript: \((a;q) = (a;q)_{\infty }\).

Notice that the infinite product \((a;q) = (a;q)_{\infty }\) converges for \(\vert q\vert <1\). If the rational function is factored as

$$\displaystyle{r(n) = \frac{(1 - a_{1}q^{n})\cdots (1 - a_{r}q^{n})} {(1 - b_{1}q^{n})\cdots (1 - b_{s}q^{n})} z}$$

then we can express the terms in the series using the q-pochhammer symbols as

$$\displaystyle{d_{n} = d_{0}\frac{(a_{1},\ldots,a_{r};q)_{n}} {(b_{1},\ldots,b_{s};q)_{n}} z^{n}\;.}$$

Notice that \(\lim _{n\rightarrow \infty }r(n) = z\), so the q-hypergeometric series converge for \(\vert z\vert <1\).
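A small numerical sketch of the q-Pochhammer symbols (the truncation level for the infinite product is an arbitrary choice; for \(\vert q\vert <1\) the neglected factors are exponentially close to 1):

```python
def qpoch(a, q, n=None, trunc=200):
    """(a;q)_n; n=None gives the infinite product (truncated), valid for |q| < 1."""
    if n is None:
        n = trunc
    result = 1.0
    for k in range(n):
        result *= 1 - a * q**k
    return result

q = 0.3
# (a;q)_0 = 1 and the defining recursion (a;q)_{n+1} = (1 - a)(aq;q)_n
assert qpoch(0.5, q, 0) == 1.0
assert abs(qpoch(0.5, q, 5) - (1 - 0.5) * qpoch(0.5 * q, q, 4)) < 1e-14
# the infinite product has converged: doubling the truncation changes nothing
assert abs(qpoch(0.5, q) - qpoch(0.5, q, trunc=400)) < 1e-15
```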

We want to keep this discussion brief, so we will end with mentioning that the limit q → 1 returns us to the theory of ordinary hypergeometric series, as

$$\displaystyle{\lim _{q\rightarrow 1} \frac{(q^{a};q)_{n}} {(1 - q)^{n}} =\prod _{ k=0}^{n-1}\lim _{ q\rightarrow 1}\frac{1 - q^{a+k}} {1 - q} =\prod _{ k=0}^{n-1}(a + k) = (a)_{ n}\;.}$$
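This limit is easy to observe numerically; a quick sketch (parameter values arbitrary):

```python
def qpoch_n(a, q, n):
    """(a;q)_n"""
    result = 1.0
    for k in range(n):
        result *= 1 - a * q**k
    return result

def poch(a, n):
    """(a)_n"""
    result = 1.0
    for k in range(n):
        result *= a + k
    return result

def q_limit_approx(a, n, q):
    """(q^a;q)_n / (1-q)^n, which approaches (a)_n as q -> 1."""
    return qpoch_n(q**a, q, n) / (1 - q)**n

a, n = 1.7, 4
exact = poch(a, n)
# the relative error shrinks as q -> 1
assert abs(q_limit_approx(a, n, 0.999) / exact - 1) < 1e-2
assert abs(q_limit_approx(a, n, 0.9999) / exact - 1) < 1e-3
```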

4 Elliptic Functions

For information about elliptic functions, I like the by now ancient Modern Analysis [12]. We just need a few basic results.

Definition 4.1

An elliptic function is a meromorphic function \(f: \mathbb{C} \rightarrow \mathbb{C}\) which is periodic in two directions. That is, there are \(\omega _{1},\omega _{2} \in \mathbb{C}\setminus \{0\}\) with \(\omega _{2}/\omega _{1}\notin \mathbb{R}\) such that f(z + ω i ) = f(z) for i = 1, 2.

We will often write elliptic functions as functions of \(x = q^{z}\) instead of as functions of z. One of the two periods is then equal to \(2\pi \mathrm{i}/\log (q)\). If we write the other period as \(\log (p)/\log (q)\), we see that \(f(x) = f(q^{z}) = f(q^{z+\log (p)/\log (q)}) = f(px)\). Thus we say a function is p-elliptic if it is invariant under multiplying the argument by a factor p. In this section we will not use this notation yet, but we will once we get to elliptic hypergeometric series.

It can be convenient to consider the group \(\omega _{1}\mathbb{Z} +\omega _{2}\mathbb{Z}\) (under addition) which acts on \(\mathbb{C}\). A fundamental domain of this action is the parallelogram with vertices 0, ω 1, ω 1 + ω 2 and ω 2.

It should be realized that the condition of ellipticity is a rather strict condition. For example

Theorem 4.2

An analytic, elliptic function is constant.

Proof

Let f be an analytic, elliptic function. On the closure of the fundamental domain f is a continuous function on a compact domain, hence it is bounded. But as it takes all its values on this fundamental domain, f is bounded on \(\mathbb{C}\). Liouville’s theorem now shows that f must be constant. □

Moreover it can have only as many zeros as poles

Theorem 4.3

If f is an elliptic function, which is not constant zero, then it has as many poles as zeros counted with multiplicity.

Proof

Assume no poles/zeros are located on the boundary of the fundamental domain D. If there are, you should shift the fundamental domain so that this is the case (which is possible since there are at most countably many poles/zeros). Then consider the integral \(\int _{\partial D}f^{{\prime}}(z)/f(z)\mathop{}\!\mathrm{d}z/(2\pi \mathrm{i})\) which gives the difference between the number of poles and the number of zeros. The contour consists of four parts: let’s call the line from 0 to ω 1 the bottom, the line from ω 1 to ω 1 + ω 2 the right side, the line from ω 1 + ω 2 to ω 2 the top, and the line from ω 2 back to 0 the left side. This is consistent with the following picture (note that if the orientation of the contour is different the argument remains the same)

This integral now evaluates to zero, as the part of the integral over the left edge of the fundamental domain equals the part of the integral over the right edge with opposite orientation. (One shift by a period does not change the integrand). Thus the left and right edge cancel each other. Likewise for the top and bottom edge. □

And you can even prove

Theorem 4.4

If an elliptic function f has poles in a fundamental domain at p 1, , p k and zeros at z 1, , z k (multiple poles /zeros listed multiple times ) then \(\sum _{n=1}^{k}p_{n} =\sum _{ n=1}^{k}z_{n}\bmod \omega _{1}\mathbb{Z} +\omega _{2}\mathbb{Z}\) .

Proof

Consider the contour integral \(\int _{\partial D}zf^{{\prime}}(z)/f(z)\mathop{}\!\mathrm{d}z/(2\pi \mathrm{i})\), with the contour as before; by the residue theorem it equals \(\sum _{n=1}^{k}z_{n} -\sum _{n=1}^{k}p_{n}\). Now the integral over the bottom edge plus the integral over the top edge equals the integral \(-\omega _{2}\int f^{{\prime}}(z)/f(z)\mathop{}\!\mathrm{d}z/(2\pi \mathrm{i})\) along the bottom edge from 0 to ω 1. However since f(ω 1) and f(0) are identical, the difference log(f)(ω 1) − log(f)(0) must be a multiple of 2πi. Thus the sum of the integrals over the bottom edge and the top edge is an integer multiple of ω 2. Likewise the sum of the integrals over the left and right edges equals an integer multiple of ω 1. □

Together the last two theorems imply that no elliptic function with just a single pole/zero in a fundamental domain exists. This fact can often be used to prove identities for elliptic functions:

If you can show an elliptic function has at most one pole in a fundamental domain, it must be constant!

5 Elliptic Hypergeometric Series

Elliptic hypergeometric series first appeared in [2], in which Frenkel and Turaev were considering solutions to the Yang–Baxter equation. In the second edition of [3] a chapter was added about elliptic hypergeometric series, which gives a nice overview, but in some respects it was written a bit prematurely, as the theory was at the time still in rapid development. A nice introduction, a little more expansive than this one, can be found in the recent lecture notes [7] by Rosengren.

The definition of elliptic hypergeometric series should not come as a surprise anymore:

Definition 5.1

An elliptic hypergeometric series is a series \(\sum _{n}d_{n}\) for which the quotient of two subsequent terms \(r(n) = d_{n+1}/d_{n}\) is an elliptic function of n.

We would like to proceed as before by factoring the quotient r(n) in elementary building blocks for elliptic functions, but we can’t do that using elliptic functions. Indeed nonconstant elliptic functions must always have both multiple zeros and multiple poles, so they won’t function well as bricks in our construction. Therefore we consider the theta functions

Definition 5.2

We define the theta functions as

$$\displaystyle{\theta (x;p) = (x,p/x;p)_{\infty } =\prod _{ k=0}^{\infty }(1 - p^{k}x)(1 - p^{k+1}/x)\;.}$$

The associated theta Pochhammer symbols are given by

$$\displaystyle{\theta (x;p;q)_{n} =\theta (x;p)_{n} =\prod _{ k=0}^{n-1}\theta (xq^{k};p)}$$

(where we often suppress the q-dependence).

These theta functions are a different way of writing the Jacobi theta function; the product form is related to the classical series form via the famous Jacobi triple product formula:

$$\displaystyle{(x,q/x,q;q)_{\infty } =\sum _{ k=-\infty }^{\infty }q^{k(k-1)/2}(-x)^{k}\;.}$$
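Both sides of the triple product identity can be compared numerically (written here with nome p, truncating the infinite products; the truncation levels are illustrative choices):

```python
def infprod(a, p, trunc=80):
    """(a;p)_infinity, truncated."""
    result = 1.0
    for k in range(trunc):
        result *= 1 - a * p**k
    return result

def theta(x, p, trunc=80):
    """theta(x;p) = (x;p)_inf (p/x;p)_inf."""
    return infprod(x, p, trunc) * infprod(p / x, p, trunc)

def triple_sum(x, p, terms=30):
    """Sum over k of p^(k(k-1)/2) (-x)^k; converges very fast for |p| < 1."""
    return sum(p**(k * (k - 1) // 2) * (-x)**k
               for k in range(-terms, terms + 1))

p, x = 0.2, 0.7
lhs = infprod(p, p) * theta(x, p)   # (x, p/x, p; p)_infinity
assert abs(lhs - triple_sum(x, p)) < 1e-12
```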

Now observe that these theta functions have a simple behavior if you multiply x by p:

$$\displaystyle{\theta (px;p) =\theta (1/x;p) = (1/x,px;p) = \frac{(1 - 1/x)} {1 - x} (p/x,x;p) = -\frac{1} {x}\theta (x;p)\;.}$$
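This quasi-periodicity, and the closely related identity \(\theta (1/x;p) = -(1/x)\theta (x;p)\) used in the computation above, can be checked numerically (truncated products, arbitrary test values):

```python
def theta(x, p, trunc=100):
    """theta(x;p) = prod_k (1 - x p^k)(1 - p^{k+1}/x), truncated."""
    result = 1.0
    for k in range(trunc):
        result *= (1 - x * p**k) * (1 - p**(k + 1) / x)
    return result

p, x = 0.15, 0.6
# theta(px;p) = theta(1/x;p) = -(1/x) theta(x;p)
assert abs(theta(p * x, p) - (-1 / x) * theta(x, p)) < 1e-12
assert abs(theta(1 / x, p) - (-1 / x) * theta(x, p)) < 1e-12
```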

So if we now write

$$\displaystyle{r(n) = \frac{\theta (a_{1}q^{n},\ldots,a_{r}q^{n};p)} {\theta (b_{1}q^{n},\ldots,b_{r}q^{n};p)} z}$$

we have a function which has one period \(2\pi \mathrm{i}/\log (q)\) and a second period \(\log (p)/\log (q)\) if the balancing condition \(\prod _{k=1}^{r}a_{k} =\prod _{k=1}^{r}b_{k}\) holds (and observe that we have to take as many numerator terms as denominator terms). Note that \(\theta (aq^{n};p)\) is an entire function of n and in a fundamental domain of \(2\pi \mathrm{i}/\log (q)\mathbb{Z} +\log (p)/\log (q)\mathbb{Z}\) it has a single zero. Thus these theta functions allow us to write elliptic functions with given zeros and poles as a simple product. Like the \(_{r}F_{s}\) notation we now define

Definition 5.3

If the balancing condition \(a_{1}a_{2}\cdots a_{r} = qb_{1}b_{2}\cdots b_{r-1}\) holds then we define

$$\displaystyle{_{r}E_{r-1}\bigg[\begin{array}{*{10}c} a_{1},\ldots,a_{r} \\ b_{1},\ldots,b_{r-1} \end{array};z\bigg] =\sum _{ k=0}^{\infty } \frac{\theta (a_{1},\ldots,a_{r};p)_{k}} {\theta (q,b_{1},\ldots,b_{r-1};p)_{k}}z^{k}\;.}$$

Let us consider the convergence of this series. The ratio test does not work, as the limit of an elliptic function as its argument goes to infinity does not exist. While nonterminating elliptic hypergeometric series can converge, in practice everybody works only with terminating series. That is, series with a \(\theta (q^{-n};p)_{k}\) in the numerator, which becomes 0 for k > n.

It turns out that most results, and all the series we will encounter throughout these lecture notes, concern the so-called very-well-poised series, which are a special case:

Definition 5.4

Assuming r is even and the balancing condition

$$\displaystyle{b_{1}b_{2}\cdots b_{r-6} = a^{r/2-3}q^{r/2+n-4}}$$

holds, the terminating very-well-poised series is given by

$$\displaystyle\begin{array}{rcl} & & _{r}V _{r-1}(a;b_{1},\ldots,b_{r-6},q^{-n};q,p) {}\\ & & \qquad \qquad \qquad \qquad \quad = _{r}E_{r-1}\bigg[\begin{array}{*{10}c} a,\pm q\sqrt{a},\pm q\sqrt{ap},b_{1},\ldots,b_{r-6},q^{-n} \\ \pm \sqrt{a},\pm \sqrt{ap},aq/b_{1},\ldots,aq/b_{r-6},aq^{n+1} \end{array};q\bigg] {}\\ & & \qquad \qquad \qquad \qquad \quad =\sum _{ k=0}^{n}\frac{\theta (aq^{2k};p)} {\theta (a;p)} \frac{\theta (a,b_{1},\ldots,b_{r-6},q^{-n};p)_{k}} {\theta (q,aq/b_{1},\ldots,aq/b_{r-6},aq^{n+1};p)_{k}}q^{k}\;. {}\\ \end{array}$$

Proof of the equivalence of the two expressions above

Here we use

$$\displaystyle{\begin{array}{rl} \theta (x;p)_{2k}& =\prod _{ r=0}^{2k-1}(xq^{r},pq^{-r}/x;p) =\prod _{ r=0}^{2k-1}\prod _{s=0}^{\infty }(1 - xq^{r}p^{s})(1 - p^{s+1}q^{-r}/x) \\ & = \prod _{r=0}^{2k-1}\prod _{s=0}^{\infty }(1 -\sqrt{x}q^{r/2}p^{s/2})(1 + \sqrt{x}q^{r/2}p^{s/2}) \\ &\qquad \qquad \qquad \qquad \quad \times (1 - p^{(s+1)/2}q^{-r/2}/\sqrt{x})(1 + p^{(s+1)/2}q^{-r/2}/\sqrt{x}) \\ & =\prod _{ r=0}^{2k-1}(\pm \sqrt{x}q^{r/2},\pm \sqrt{px}q^{r/2},\pm q^{-r/2}\sqrt{p/x},\pm pq^{-r/2}/\sqrt{x};p) \\ & =\prod _{ r=0}^{2k-1}\theta (\pm \sqrt{x}q^{r/2},\pm \sqrt{px}q^{r/2};p) \\ & =\theta (\pm \sqrt{x},\pm \sqrt{qx},\pm \sqrt{px},\pm \sqrt{pqx};p)_{k}\end{array} }$$

which implies

$$\displaystyle{\begin{array}{rl} \frac{\theta (aq^{2k};p)} {\theta (a;p)} = \frac{\theta (aq;p)_{2k}} {\theta (a;p)_{2k}} & = \frac{\theta (\pm \sqrt{aq},\pm q\sqrt{a},\pm \sqrt{pqa},\pm q\sqrt{pa};p)_{k}} {\theta (\pm \sqrt{a},\pm \sqrt{aq},\pm \sqrt{ap},\pm \sqrt{apq};p)_{k}} \\ & = \frac{\theta (\pm q\sqrt{a},\pm q\sqrt{ap};p)_{k}} {\theta (\pm \sqrt{a},\pm \sqrt{ap};p)_{k}} \;.\end{array} }$$
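As a sanity check of this equivalence, we can compare the two expressions for a terminating \(_{10}V_{9}\) numerically (truncated theta products; all parameter values are arbitrary test values chosen to satisfy the balancing condition):

```python
import math

def theta(x, p, trunc=80):
    result = 1.0
    for k in range(trunc):
        result *= (1 - x * p**k) * (1 - p**(k + 1) / x)
    return result

def tpoch(x, q, p, n):
    """Theta Pochhammer symbol theta(x;p)_n with base q."""
    result = 1.0
    for k in range(n):
        result *= theta(x * q**k, p)
    return result

p, q, n = 0.1, 0.2, 3
a, b1, b2, b3 = 1.3, 0.5, 0.7, 0.9
b4 = a**2 * q**(n + 1) / (b1 * b2 * b3)   # balancing: b1 b2 b3 b4 = a^2 q^{n+1}
bs = [b1, b2, b3, b4]
ra, rap = math.sqrt(a), math.sqrt(a * p)

# the explicit very-well-poised sum (third expression)
vwp = sum(theta(a * q**(2 * k), p) / theta(a, p)
          * tpoch(a, q, p, k)
          * math.prod(tpoch(b, q, p, k) for b in bs) * tpoch(q**-n, q, p, k)
          / (tpoch(q, q, p, k)
             * math.prod(tpoch(a * q / b, q, p, k) for b in bs)
             * tpoch(a * q**(n + 1), q, p, k))
          * q**k
          for k in range(n + 1))

# the 10E9 form with the +-sqrt parameters (second expression), argument z = q
num = [a, q * ra, -q * ra, q * rap, -q * rap] + bs + [q**-n]
den = [ra, -ra, rap, -rap] + [a * q / b for b in bs] + [a * q**(n + 1)]
e_form = sum(math.prod(tpoch(u, q, p, k) for u in num)
             / (tpoch(q, q, p, k) * math.prod(tpoch(v, q, p, k) for v in den))
             * q**k
             for k in range(n + 1))

assert abs(vwp - e_form) < 1e-9 * (1 + abs(vwp))
```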

It should be noted that for ordinary and basic hypergeometric series we have

$$\displaystyle{\frac{a + 2k} {a} = \frac{(a/2 + 1)_{k}} {(a/2)_{k}} \;,\quad \frac{1 - aq^{2k}} {1 - a} = \frac{(\pm q\sqrt{a};q)_{k}} {(\pm \sqrt{a};q)_{k}} }$$

thus we only need one or two parameters for this first factor instead of four. So a very-well-poised \(_{10}V_{9}\) corresponds to the basic hypergeometric very-well-poised \(_{8}W_{7}\), and the ordinary hypergeometric very-well-poised \(_{7}F_{6}\).

As far as I know, all identities for elliptic hypergeometric series (and integrals) are based on a single identity:

Theorem 5.5

For \(w,x,y,z \in \mathbb{C}\setminus \{0\}\) we have

$$\displaystyle{\frac{1} {y}\theta (wx^{\pm 1},yz^{\pm 1};p) + \frac{1} {z}\theta (wy^{\pm 1},zx^{\pm 1};p) + \frac{1} {x}\theta (wz^{\pm 1},xy^{\pm 1};p) = 0\;.}$$

Proof

One of the most common techniques for proving identities involving theta functions is to use the argument that an elliptic function with more zeros than poles must be constant zero. Let us consider what happens if we replace \(w\rightarrow pw\). Then the first term becomes

$$\displaystyle{\begin{array}{rl} \frac{1} {y}\theta (pwx^{\pm 1},yz^{\pm 1};p)& = \frac{1} {y}\theta (wx^{\pm 1},yz^{\pm 1};p)\bigg(- \frac{1} {wx}\bigg)\bigg(-\frac{x} {w}\bigg) \\ & = \frac{1} {y}\theta (wx^{\pm 1},yz^{\pm 1};p) \frac{1} {w^{2}} \;. \end{array} }$$

By symmetry the other two terms are also multiplied by \(1/w^{2}\) upon replacing \(w\rightarrow pw\). So we do not have an elliptic function (nor could it really be one, because it is analytic and has zeros). However if we divide by the first term the left-hand side does become an elliptic function of w, which is even (invariant under \(w\rightarrow 1/w\))

$$\displaystyle{f(w) = 1 + \frac{y} {z} \frac{\theta (wy^{\pm 1},zx^{\pm 1};p)} {\theta (wx^{\pm 1},yz^{\pm 1};p)} + \frac{y} {x} \frac{\theta (wz^{\pm 1},xy^{\pm 1};p)} {\theta (wx^{\pm 1},yz^{\pm 1};p)}\;.}$$

In a fundamental domain, this function has at most simple poles at \(w = x\) and \(w = x^{-1}\). If we set w = z we obtain that the function vanishes:

$$\displaystyle{f(z) = 1 + \frac{y} {z} \frac{\theta (zy^{\pm 1},zx^{\pm 1};p)} {\theta (zx^{\pm 1},yz^{\pm 1};p)} + 0 = 1 + \frac{y} {z} \frac{\theta (z/y;p)} {\theta (y/z;p)} = 0\;,}$$

using the identity \(\theta (1/x;p) = -(1/x)\theta (x;p)\). Likewise there are zeros at \(w = y^{\pm 1}\) and \(w = z^{-1}\). As a nonconstant elliptic function with at most two poles can have at most two zeros, the function f(w) must therefore be constant zero. □
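Theorem 5.5 can also be verified numerically at random points, which is a useful check when working with such identities (truncated theta products; test values arbitrary):

```python
def theta(x, p, trunc=100):
    result = 1.0
    for k in range(trunc):
        result *= (1 - x * p**k) * (1 - p**(k + 1) / x)
    return result

def theta_pm(w, u, p):
    """theta(w u^{+-1}; p) = theta(wu; p) theta(w/u; p)."""
    return theta(w * u, p) * theta(w / u, p)

p = 0.1
w, x, y, z = 0.9, 1.3, 0.7, 2.1
total = (theta_pm(w, x, p) * theta_pm(y, z, p) / y
         + theta_pm(w, y, p) * theta_pm(z, x, p) / z
         + theta_pm(w, z, p) * theta_pm(x, y, p) / x)
assert abs(total) < 1e-10
```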

As a generalization of the ordinary Gamma function, there is an elliptic Gamma function [9] which satisfies the simple difference equation

$$\displaystyle{\varGamma _{\mathrm{e}}(qx) =\theta (x;p)\varGamma _{\mathrm{e}}(x)\;,}$$

ensuring that

$$\displaystyle{\theta (x;p)_{k} = \frac{\varGamma _{\mathrm{e}}(xq^{k})} {\varGamma _{\mathrm{e}}(x)} \;.}$$

This is a reflection of the difference equation \(\varGamma (z + 1) = z\varGamma (z)\) and the relation \((a)_{k} =\varGamma (a + k)/\varGamma (a)\) for the ordinary Gamma function.

Definition 5.6

The elliptic Gamma function is defined as

$$\displaystyle{\varGamma _{\mathrm{e}}(x) =\varGamma _{\mathrm{e}}(x;p,q) =\prod _{r,s\geq 0}\frac{1 - p^{r+1}q^{s+1}/x} {1 - p^{r}q^{s}x} \;.}$$

Notice that the elliptic Gamma function is symmetric under interchanging p and q. In a precise sense you can consider it to be the simplest function satisfying the difference equations \(\varGamma _{\mathrm{e}}(qx) =\theta (x;p)\varGamma _{\mathrm{e}}(x)\) and \(\varGamma _{\mathrm{e}}(px) =\theta (x;q)\varGamma _{\mathrm{e}}(x)\). Where the ordinary Gamma function has poles at the nonpositive integers, the elliptic Gamma function has (generically simple) poles at \(x = p^{\mathbb{Z}_{\leq 0}}q^{\mathbb{Z}_{\leq 0}}\), as any function satisfying the two difference equations must. Just to be explicit: the elliptic Gamma function is itself not an elliptic function; however, it can be used to express elliptic hypergeometric series simply.
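The two difference equations and the p ↔ q symmetry can be checked numerically from the defining double product (truncation levels and test values are illustrative):

```python
def gamma_e(x, p, q, trunc=50):
    """Elliptic Gamma function, truncating the double product."""
    result = 1.0
    for r in range(trunc):
        for s in range(trunc):
            result *= (1 - p**(r + 1) * q**(s + 1) / x) / (1 - p**r * q**s * x)
    return result

def theta(x, p, trunc=50):
    result = 1.0
    for k in range(trunc):
        result *= (1 - x * p**k) * (1 - p**(k + 1) / x)
    return result

p, q, x = 0.2, 0.3, 0.55
assert abs(gamma_e(q * x, p, q) - theta(x, p) * gamma_e(x, p, q)) < 1e-10
assert abs(gamma_e(p * x, p, q) - theta(x, q) * gamma_e(x, p, q)) < 1e-10
assert abs(gamma_e(x, p, q) - gamma_e(x, q, p)) < 1e-12
```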

Exercise 5.7

Prove the following basic relations for theta functions:

  (a) θ(px; p) = θ(1∕x; p)

  (b) \(\theta (p^{n}x;p) = (-1/x)^{n}p^{-\binom{n}{2}}\theta (x;p)\)

Exercise 5.8

Prove the following basic relations for the elliptic Gamma function:

  (a) Reflection identity: \(\varGamma _{\mathrm{e}}(x,pq/x) = 1\)

  (b) Limit: \(\lim _{p\rightarrow 0}\varGamma _{\mathrm{e}}(x;p,q) = 1/(x;q)\)

  (c) Quadratic transformation: \(\varGamma _{\mathrm{e}}(x^{2};p,q) =\varGamma _{\mathrm{e}}(\pm x;\sqrt{p},\sqrt{q})\)

  (d) Quadratic transformation: \(\varGamma _{\mathrm{e}}(x;p,q) =\varGamma _{\mathrm{e}}(x,qx;p,q^{2})\)

Exercise 5.9

Calculate the residue

$$\displaystyle{\mathop{\mathrm{Res}}\bigg(\frac{1} {z}\varGamma _{\mathrm{e}}(az);z = 1/a\bigg) = \frac{1} {(p;p)(q;q)}\;.}$$

Exercise 5.10

Show that an \(_{r}V_{r-1}\) is a p-elliptic function of its arguments, as long as the balancing condition \(b_{1}b_{2}\cdots b_{r-6} = a^{r/2-3}q^{r/2+n-4}\) remains satisfied. Thus for example

$$\displaystyle{_{r}V _{r-1}(a;b_{1},\ldots,b_{r-6},q^{-n};q,p) = _{ r}V _{r-1}(a;pb_{1},b_{2}/p,b_{3},\ldots,b_{r-6},q^{-n};q,p)\;.}$$

6 6j-Symbols and Spiridonov–Zhedanov Biorthogonal Functions

This section introduces the biorthogonal functions of Spiridonov and Zhedanov [11].

The Askey scheme and q-Askey scheme [4] contain many families of orthogonal polynomials. We will not reprint them here, because the schemes take up a lot of space. Basically they contain all classical families of orthogonal polynomials. The (q-)Askey scheme can even be generalized to include pairs of families of biorthogonal rational functions. That is: we consider two families of rational functions \(\mathcal{F} =\{ f_{0},f_{1},\mathop{\ldots }\}\) and \(\mathcal{G} =\{ g_{0},g_{1},\mathop{\ldots }\}\) and a bilinear form such that \(\langle f_{n},g_{m}\rangle =\delta _{nm}\mu _{n}\). Generalizing in this way you can make the schemes a few times larger. On the elliptic level the scheme for pairs of families of biorthogonal functions becomes very simple: there is just one family.

However, everything in the (q-)Askey scheme is a limit of this family. To be honest, this is not completely fair, as there are biorthogonal functions with respect to a continuous measure, and specializing the product of two parameters to \(q^{N}\) you obtain biorthogonality with respect to a finite point measure (for a finite set of functions). This difference is similar to the relation of Wilson to Racah polynomials. So in a way you might consider there to be two pairs of families of biorthogonal functions. In this section we will focus on the discrete biorthogonality with respect to a measure with finite support, while in Sect. 8 we consider the continuous measure. In Exercise 8.6 you can verify the relation between these two measures yourself.

I feel that Rosengren’s [8] elementary derivation of the biorthogonality using 6j-symbols is a nice exposition, so I will follow that here. We first introduce the functions

$$\displaystyle{h_{k}(x;a) =\theta (a\xi,a\xi ^{-1};p)_{ k}\;,\quad \xi +\xi ^{-1} = x\;.}$$

which are entire functions of x for a ≠ 0. Then we can consider (for given N, a and b) the set of functions

$$\displaystyle{\mathcal{B} =\{ h_{k}(x;a)h_{N-k}(x;b)\mid 0 \leq k \leq N\}\;.}$$

Then these N + 1 functions are all of the form \(f(x) =\prod _{j=1}^{N}\theta (a_{j}\xi,a_{j}\xi ^{-1};p)\). That is, writing \(\xi = \mathrm{e}^{2\pi \mathrm{i}z}\) and \(F(z) = f(\mathrm{e}^{2\pi \mathrm{i}z} + \mathrm{e}^{-2\pi \mathrm{i}z})\), we have even theta functions of degree 2N with characteristic 0, which means these functions satisfy \(F(-z) = F(z)\), \(F(z + 1) = F(z)\) and \(F(z+\tau ) = \mathrm{e}^{-2\pi \mathrm{i}N(2z+\tau )}F(z)\), where \(p = \mathrm{e}^{2\pi \mathrm{i}\tau }\). These functions form a space of dimension N + 1. It turns out that if

$$\displaystyle{p^{m}a/b\notin \{q^{k}\mid 1 - N \leq k \leq N - 1\}\;,\quad p^{m}ab\notin \{q^{k}\mid 0 \leq k \leq N - 1\}}$$

the functions in \(\mathcal{B}\) form a basis of this space. But if we replace the parameters a and b by c and d, we obtain another basis for this same space. As such there is a basis transformation

$$\displaystyle{h_{k}(x;a)h_{N-k}(x;b) =\sum _{ l=0}^{N}R_{ k}^{l}(a,b,c,d;N;q,p)h_{ l}(x;c)h_{N-l}(x;d)\;.}$$

Let us calculate these coefficients \(R_{k}^{l}\). First we prove a binomial theorem

Theorem 6.1

We have

$$\displaystyle{h_{N}(x;a) =\sum _{ k=0}^{N}C_{ k}^{N}(a,b,c)h_{ k}(x;b)h_{N-k}(x;c)}$$

with

$$\displaystyle{C_{k}^{N} = q^{k(k-N)} \frac{\theta (q;p)_{N}\theta (a/c,q^{N-k}ac;p)_{ k}\theta (a/b,abq^{k};p)_{ N-k}} {\theta (bc;p)_{N}\theta (q,(b/c)q^{k-N};p)_{k}\theta (q,(c/b)q^{-k};p)_{N-k}}\;.}$$

Compare this theorem to \((x + y)^{N} =\sum _{ k=0}^{N}\binom{N}{k}x^{k}y^{N-k}\).

Proof

We will prove this using recurrence relations for the binomial coefficients, so let us see what happens if we increase N by one: First of all

$$\displaystyle{h_{N+1}(x;a) = h_{N}(x;a)\theta (a\xi q^{N},a\xi ^{-1}q^{N};p)\;.}$$

To get a nice expression on the right-hand side we have to somehow write

$$\displaystyle\begin{array}{rcl} & & h_{k}(x;b)h_{N-k}(x;c)\theta (a\xi q^{N},a\xi ^{-1}q^{N};p) {}\\ & & \qquad \qquad \qquad \qquad \qquad \qquad = c_{1}h_{k}(x;b)h_{N-k+1}(x;c) + c_{2}h_{k+1}(x;b)h_{N-k}(x;c)\;, {}\\ \end{array}$$

that is

$$\displaystyle{\theta (a\xi q^{N},a\xi ^{-1}q^{N};p) = c_{ 1}\theta (c\xi q^{N-k},c\xi ^{-1}q^{N-k};p) + c_{ 2}\theta (b\xi q^{k},b\xi ^{-1}q^{k};p)\;.}$$

Now we need to find an identity between three terms involving theta functions, so we hope we can apply Theorem 5.5. We have three terms of the form \(\theta (s\xi ^{\pm 1};p)\) for \(s = aq^{N}\), \(s = cq^{N-k}\) and \(s = bq^{k}\), so we know what the parameters in Theorem 5.5 should be. Taking \(w =\xi\), \(x = aq^{N}\), \(y = bq^{k}\) and \(z = cq^{N-k}\) the identity from the theorem becomes

$$\displaystyle\begin{array}{rcl} & & \frac{1} {bq^{k}}\theta (\xi aq^{N},\xi a^{-1}q^{-N},bcq^{N}, \frac{b} {c}q^{2k-N};p) {}\\ & & \qquad \qquad \qquad \qquad \qquad + \frac{1} {cq^{N-k}}\theta \bigg(\xi bq^{k},\xi b^{-1}q^{-k},acq^{2N-k}, \frac{c} {a}q^{-k};p\bigg) {}\\ & & \qquad \qquad \qquad \qquad \qquad + \frac{1} {aq^{N}}\theta \bigg(\xi cq^{N-k},\xi c^{-1}q^{k-N},abq^{N+k}, \frac{a} {b}q^{N-k};p\bigg) = 0\;. {}\\ \end{array}$$

Using θ(1∕x; p) = −(1∕x)θ(x; p) we can clean this up to

$$\displaystyle\begin{array}{rcl} & & \theta \bigg(a\xi q^{N},a\xi ^{-1}q^{N},bcq^{N}, \frac{b} {c}q^{2k-N};p\bigg) {}\\ & & \qquad \qquad \qquad \qquad \quad + \frac{aq^{k}} {c} \theta \bigg(b\xi q^{k},b\xi ^{-1}q^{k},acq^{2N-k}, \frac{c} {a}q^{-k};p\bigg) {}\\ & & \qquad \qquad \qquad \qquad \quad + \frac{b} {cq^{N-2k}}\theta \bigg(c\xi q^{N-k},c\xi ^{-1}q^{N-k},abq^{N+k}, \frac{a} {b}q^{N-k};p\bigg) = 0 {}\\ \end{array}$$

and then find

$$\displaystyle\begin{array}{rcl} & & \theta (a\xi q^{N},a\xi ^{-1}q^{N};p) = \frac{\theta (acq^{2N-k},(a/c)q^{k};p)} {\theta (bcq^{N},(b/c)q^{2k-N};p)}\theta (b\xi q^{k},b\xi ^{-1}q^{k};p) {}\\ & & \qquad \qquad \qquad \qquad \qquad + \frac{\theta (abq^{N+k},(a/b)q^{N-k};p)} {\theta (bcq^{N},(c/b)q^{N-2k};p)} \theta (c\xi q^{N-k},c\xi ^{-1}q^{N-k};p)\;. {}\\ \end{array}$$

Therefore we find the recurrence relation

$$\displaystyle{C_{k}^{N+1} = \frac{\theta (acq^{2N+1-k},(a/c)q^{k-1};p)} {\theta (bcq^{N},(b/c)q^{2k-N-2};p)} C_{k-1}^{N} + \frac{\theta (abq^{N+k},(a/b)q^{N-k};p)} {\theta (bcq^{N},(c/b)q^{N-2k};p)} C_{k}^{N}\;.}$$

With the initial conditions C_0^0 = 1 and C_{−1}^N = C_{N+1}^N = 0 the coefficients are determined uniquely. Indeed we can now prove the formula for C_k^N by induction. Note first that 1∕θ(q; p)_{−1} = θ(q ⋅ q^{−1}; p) = θ(1; p) = 0, so the expression indeed satisfies the initial conditions C_{−1}^N = C_{N+1}^N = 0. Next, using induction and applying Theorem 5.5 once more in a tedious calculation, we find that our formula for C_k^N is correct. □
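As a numerical sanity check, both the inversion rule θ(1∕x; p) = −(1∕x)θ(x; p) and the three-term identity of Theorem 5.5 (in the form used above, with w = ξ, x = aq^N, y = bq^k, z = cq^{N−k}) can be tested with truncated theta products. This sketch is our own; the function names, truncation order and sample values are ad hoc.

```python
# theta(x; p) = prod_{j >= 0} (1 - x p^j)(1 - p^{j+1}/x), truncated
def theta(x, p, terms=60):
    out = 1.0
    for j in range(terms):
        out *= (1 - x * p**j) * (1 - p**(j + 1) / x)
    return out

def theta_prod(args, p):
    """Shorthand theta(x_1, ..., x_m; p) = theta(x_1; p) ... theta(x_m; p)."""
    out = 1.0
    for x in args:
        out *= theta(x, p)
    return out

p = 0.05
w, x, y, z = 1.3, 0.4, 0.55, 0.85  # generic sample points

# inversion rule: theta(1/x; p) = -(1/x) theta(x; p)
inv_err = abs(theta(1 / x, p) + (1 / x) * theta(x, p))

# three-term identity: these three terms sum to zero
terms3 = [
    theta_prod([w * x, w / x, y * z, y / z], p) / y,
    theta_prod([w * y, w / y, x * z, z / x], p) / z,
    theta_prod([w * z, w / z, x * y, x / y], p) / x,
]
rel_err = abs(sum(terms3)) / max(abs(t) for t in terms3)
```

Substituting x = aq^N, y = bq^k and z = cq^{N−k} turns the vanishing sum into precisely the three-term relation displayed in the proof.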

Now we can determine the coefficients R_k^l explicitly by applying this binomial theorem twice:

Theorem 6.2

We have

$$\displaystyle\begin{array}{rcl} & & R_{k}^{l}(a,b,c,d;N;q,p) = q^{l(l-N)} \frac{\theta (q;p)_{N}} {\theta (q;p)_{l}\theta (q;p)_{N-l}} {}\\ & & \qquad \qquad \qquad \quad \times \frac{\theta (ac^{\pm 1};p)_{k}\theta \bigg(bdq^{N-l}, \dfrac{b} {d};p\bigg)_{l}\theta \bigg(\dfrac{b} {c},bc;p\bigg)_{N-k}\theta \bigg(\dfrac{b} {c};p\bigg)_{N-l}} {\theta \bigg(\dfrac{c} {d}q^{l-N};p\bigg)_{l}\theta \bigg(\dfrac{d} {c}q^{-l};p\bigg)_{N-l}\theta \bigg(cd, \dfrac{b} {c};p\bigg)_{N}\theta (bc;p)_{l}} {}\\ & & \qquad \qquad \qquad \quad \quad \times _{12}V _{11}\bigg(\dfrac{c} {b}q^{-N};q^{-k},q^{-l}, \frac{a} {b}q^{k-N}, \frac{c} {d}q^{l-N},cd, \frac{1} {ab}q^{1-N}, \frac{qc} {b} \bigg)\;. {}\\ \end{array}$$

Proof

Indeed we have

$$\displaystyle\begin{array}{rcl} & & h_{k}(x;a)h_{N-k}(x;b) {}\\ & & \qquad \qquad \qquad \quad =\sum _{ j=0}^{k}C_{ j}^{k}(a,c,bq^{N-k})h_{ j}(x;c)h_{N-j}(x;b) {}\\ & & \qquad \qquad \qquad \quad =\sum _{ j=0}^{k}\sum _{ m=0}^{N-j}C_{ j}^{k}(a,c,bq^{N-k})C_{ m}^{N-j}(b,cq^{j},d)h_{ j+m}(x;c)h_{N-j-m}(x;d) {}\\ & & \qquad \qquad \qquad \quad =\sum _{ l=0}^{N}\sum _{ j=0}^{\min (k,l)}C_{ j}^{k}(a,c,bq^{N-k})C_{ l-j}^{N-j}(b,cq^{j},d)h_{ l}(x;c)h_{N-l}(x;d)\;. {}\\ \end{array}$$

Thus we obtain

$$\displaystyle{R_{k}^{l}(a,b,c,d;N;q,p) =\sum _{ j=0}^{\min (k,l)}C_{ j}^{k}(a,c,bq^{N-k})C_{ l-j}^{N-j}(b,cq^{j},d)}$$

which is exactly the series from the statement of the theorem. □

Note that we can find several different expressions for R_k^l by varying the proof, for instance by passing through a different intermediate basis. Comparing these expressions gives transformation formulas for the ₁₂V₁₁.

Having proved that these elliptic hypergeometric series form the coefficients in a base transformation, we can derive some properties. First of all

Theorem 6.3 (Biorthogonality)

We have

$$\displaystyle{\sum _{l=0}^{N}R_{ n}^{l}(a,b,c,d;N;q,p)R_{ l}^{m}(c,d,a,b;N;q,p) =\delta _{ nm}\;.}$$

You can view this theorem as ⟨R_n^⋅, R_⋅^m⟩ = δ_{nm}, where ⋅ denotes the parameter, and the bilinear form has support on the set of integers {0, 1, …, N}.

Proof

Perform the basis transformation first from the basis h_k(x; a)h_{N−k}(x; b) to the basis h_l(x; c)h_{N−l}(x; d) and then back again. This gives

$$\displaystyle\begin{array}{rcl} & & h_{k}(x;a)h_{N-k}(x;b) {}\\ & & =\sum _{ l=0}^{N}R_{ k}^{l}(a,b,c,d;N;q,p)h_{ l}(x;c)h_{N-l}(x;d) {}\\ & & =\sum _{ l=0}^{N}\sum _{ j=0}^{N}R_{ k}^{l}(a,b,c,d;N;q,p)R_{ l}^{j}(c,d,a,b;N;q,p)h_{ j}(x;a)h_{N-j}(x;b) {}\\ & & =\sum _{ j=0}^{N}\sum _{ l=0}^{N}R_{ k}^{l}(a,b,c,d;N;q,p)R_{ l}^{j}(c,d,a,b;N;q,p)h_{ j}(x;a)h_{N-j}(x;b)\;. {}\\ \end{array}$$

Then we see that the final series only has a nonzero term for the j = k case, and the corresponding coefficient must be 1. □
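In matrix language the argument just says that the matrix of base-change coefficients and the matrix of the reverse base change are mutually inverse, so their product is the identity. A toy illustration (ours, with a random matrix standing in for the R_k^l):

```python
import numpy as np

rng = np.random.default_rng(1)

# A plays the role of the coefficients R_k^l of the base change; the way
# back is then necessarily given by the inverse matrix B.
A = rng.normal(size=(5, 5)) + 5.0 * np.eye(5)  # comfortably invertible
B = np.linalg.inv(A)

# sum_l A[n, l] * B[l, m] = delta_{nm}: the matrix form of Theorem 6.3
max_dev = np.abs(A @ B - np.eye(5)).max()
```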

If we consider the special case of the biorthogonality where n = m = 0, the series in the functions R_n^l and R_l^m both contain just a single term, and thus the resulting identity becomes the evaluation of a single sum. This summation is the original Frenkel–Turaev summation formula [2].

Theorem 6.4

We have

$$\displaystyle{_{10}V _{9}\bigg(a;b_{1},b_{2},b_{3}, \frac{q^{n+1}a^{2}} {b_{1}b_{2}b_{3}},q^{-n}\bigg) = \frac{\theta \bigg( \dfrac{aq} {b_{1}b_{2}}, \dfrac{aq} {b_{1}b_{3}}, \dfrac{aq} {b_{2}b_{3}},aq;p\bigg)_{n}} {\theta \bigg(\dfrac{aq} {b_{1}}, \dfrac{aq} {b_{2}}, \dfrac{aq} {b_{3}}, \dfrac{aq} {b_{1}b_{2}b_{3}};p\bigg)_{n}} \;.}$$

Proof

For n = m = 0 the result simplifies to

$$\displaystyle\begin{array}{rcl} 1& =& \sum _{k=0}^{N}R_{ 0}^{k}(a,b,c,d;N;q,p)R_{ k}^{0}(c,d,a,b;N;q,p) {}\\ & =& \frac{\theta (q,bc;p)_{N}} {\theta (cd,b/a,ab;p)_{N}} {}\\ & & \qquad \times \sum _{k=0}^{N}q^{k(k-N)}\frac{\theta (bdq^{N-k},b/d,ca^{\pm 1};p)_{ k}\theta (b/c,da^{\pm 1};p)_{ N-k}} {\theta (q,(c/d)q^{k-N},bc;p)_{k}\theta (q,(d/c)q^{-k};p)_{N-k}}\;. {}\\ \end{array}$$

Next we use the elementary identities

$$\displaystyle\begin{array}{rcl} & \begin{array}{rcll} \theta (xq^{-k};p)_{k}& =\theta (pq/x;p)_{k}&\theta (x;p)_{N-k} & = \frac{\theta (x;p)_{N}} {\theta (pq^{1-N}/x;p)_{k}} \\ \theta (xq^{k};p)_{k}& = \frac{\theta (x;p)_{2k}} {\theta (x;p)_{k}} & \theta (xq^{-k};p)_{N-k}& = \frac{\theta (x;p)_{N}\theta (pq/x;p)_{k}} {\theta (pq^{1-N}/x;p)_{2k}}\end{array} & {}\\ & \theta (px;p)_{k} =\bigg (-\frac{1} {x}\bigg)^{k}q^{-\binom{k}{2}}\theta (x;p)_{ k} & {}\\ \end{array}$$

to obtain

$$\displaystyle{\begin{array}{rl} 1& = \frac{\theta (bc^{\pm 1},da^{\pm 1};p)_{ N}} {\theta (dc^{\pm 1},ba^{\pm 1};p)_{N}} \\ & \quad \times \sum _{k=0}^{N}\frac{\theta ((c/d)q^{1-N};p)_{ 2k}} {\theta ((c/d)q^{-N};p)_{2k}} \frac{\theta (q^{1-N}/(bd),b/d,ca^{\pm 1},(c/d)q^{-N},q^{-N};p)_{ k}} {\theta (q,bc,qc/d,q^{1-N}c/b,q^{1-N}(1/d)a^{\pm 1};p)_{k}} q^{k} \\ & = \frac{\theta (bc^{\pm 1},da^{\pm 1};p)_{ N}} {\theta (dc^{\pm 1},ba^{\pm 1};p)_{N}}_{10}V _{9}\bigg(\frac{c} {d}q^{-N}; \frac{q^{1-N}} {bd}, \frac{b} {d},ca^{\pm 1},q^{-N}\bigg)\;. \end{array} }$$

Renaming the parameters gives the desired result. □
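Theorem 6.4 is easy to test numerically for small n, using the standard definition of the terminating very-well-poised series ₁₀V₉. The implementation below, including function names, truncation orders and parameter values, is an ad hoc sketch of ours:

```python
def theta(x, p, terms=60):
    out = 1.0
    for j in range(terms):
        out *= (1 - x * p**j) * (1 - p**(j + 1) / x)
    return out

def theta_poch(x, p, q, k):
    """Theta Pochhammer symbol theta(x; p)_k = prod_{j < k} theta(x q^j; p)."""
    out = 1.0
    for j in range(k):
        out *= theta(x * q**j, p)
    return out

def V10_9(a, bs, n, p, q):
    """Terminating 10V9(a; b_1, ..., b_4, q^{-n}; p, q), standard convention."""
    total = 0.0
    params = list(bs) + [q**(-n)]
    for k in range(n + 1):
        term = theta(a * q**(2 * k), p) / theta(a, p) * q**k
        term *= theta_poch(a, p, q, k) / theta_poch(q, p, q, k)
        for b in params:
            term *= theta_poch(b, p, q, k) / theta_poch(a * q / b, p, q, k)
        total += term
    return total

p, q, n = 0.05, 0.15, 3
a, b1, b2, b3 = 0.3, 0.4, 0.5, 0.6
b4 = a**2 * q**(n + 1) / (b1 * b2 * b3)  # balancing condition

lhs = V10_9(a, [b1, b2, b3, b4], n, p, q)
rhs = (theta_poch(a * q, p, q, n)
       * theta_poch(a * q / (b1 * b2), p, q, n)
       * theta_poch(a * q / (b1 * b3), p, q, n)
       * theta_poch(a * q / (b2 * b3), p, q, n)
       / (theta_poch(a * q / b1, p, q, n)
          * theta_poch(a * q / b2, p, q, n)
          * theta_poch(a * q / b3, p, q, n)
          * theta_poch(a * q / (b1 * b2 * b3), p, q, n)))
rel_err = abs(lhs - rhs) / abs(rhs)
```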

Exercise 6.5

Prove the addition formula

$$\displaystyle{R_{n}^{m}(a,b,e,f;N;q,p) =\sum _{ k=0}^{N}R_{ n}^{k}(a,b,c,d;N;q,p)R_{ k}^{m}(c,d,e,f;N;q,p)\;.}$$

Exercise 6.6

The two functions which are biorthogonal are quite similar. Find a relation

$$\displaystyle{R_{k}^{l}(a,b,c,d;N;q,p) =\mathop{ \mathrm{Prefactor}}\nolimits R_{ l}^{k}(?;N;q,p)\;.}$$

You can do this by choosing a new set of parameters such that the arguments in the ₁₂V₁₁ on both sides are equal.

7 Elliptic Beta Integral

In contrast to the ordinary and basic hypergeometric theory, we do not want to consider nonterminating elliptic hypergeometric series. Thus, to find proper generalizations of nonterminating identities, we consider integrals. The elliptic beta integral below was proven by Spiridonov [10]. Taking the proper limit p → 0 and then q → 1 (proper meaning that we let the parameters behave in a certain way as p → 0 and q → 1) you can reduce it to the classical beta integral, but many other famous integrals and nonterminating series of hypergeometric type are possible limits as well. For example the identity (1) is also a limit.

Theorem 7.1

For parameters satisfying the balancing condition ∏_{r=1}^{6} t_r = pq we have

$$\displaystyle{\frac{(p;p)(q;q)} {2} \int _{\mathcal{C}}\frac{\prod _{r=1}^{6}\varGamma _{\mathrm{e}}(t_{r}z^{\pm 1})} {\varGamma _{\mathrm{e}}(z^{\pm 2})} \frac{\mathrm{d}z} {2\pi \mathrm{i}z} =\prod _{ \begin{array}{c}r,s=1 \\ r<s \end{array}}^{6}\varGamma _{ \mathrm{e}}(t_{r}t_{s})\;.}$$

Here the contour \(\mathcal{C}\) is a deformation of the unit circle traversed in positive direction which contains the poles at \(z = t_{r}p^{\mathbb{Z}_{\geq 0}}q^{\mathbb{Z}_{\geq 0}}\) and excludes the poles at \(z = t_{r}^{-1}p^{\mathbb{Z}_{\leq 0}}q^{\mathbb{Z}_{\leq 0}}\) . In particular for \(\vert t_{r}\vert <1\) you can take the unit circle itself.

There are several different proofs, but I prefer the one below. The bilinear form returns later as the form with respect to which we obtain biorthogonal functions.

Proof

Let us define the bilinear form

$$\displaystyle{ \langle f,g\rangle _{t_{1},\ldots,t_{6}} = \frac{(p;p)(q;q)} {2\prod _{\begin{array}{c}r,s=1 \\ r<s \end{array}}^{6}\varGamma _{\mathrm{e}}(t_{r}t_{s})}\int _{\mathcal{C}}f(z)g(z)\frac{\prod _{r=1}^{6}\varGamma _{\mathrm{e}}(t_{r}z^{\pm 1})} {\varGamma _{\mathrm{e}}(z^{\pm 2})} \frac{\mathrm{d}z} {2\pi \mathrm{i}z}\;. }$$
(2)

Here we want f(z) and g(z) to be even (that is, z ↔ z^{−1} symmetric) meromorphic functions such that f(z)Γ_e(t₆z^{±1})∕Γ_e(q^m t₆z^{±1}) is an analytic function (which restricts the poles of f) and likewise g(z)Γ_e(t₅z^{±1})∕Γ_e(q^l t₅z^{±1}) is analytic. The contour then has to be adjusted to take into account the poles of Γ_e(q^m t₆z^{±1}) and Γ_e(q^l t₅z^{±1}).

Let us also consider the difference operator

$$\displaystyle{ (D(u_{1},u_{2},u_{3})f)(z) =\sum _{\sigma =\pm 1}\frac{\theta (u_{1}z^{\sigma },u_{2}z^{\sigma },u_{3}z^{\sigma },u_{1}u_{2}u_{3}z^{-\sigma };p)} {\theta (u_{1}u_{2},u_{1}u_{3},u_{2}u_{3},z^{2\sigma };p)} f(q^{\sigma /2}z)\;. }$$
(3)

First observe that D maps even functions to even functions, and a direct calculation shows that p-elliptic functions are mapped to p-elliptic functions. The result of applying the difference operator to the constant function 1 is therefore an even p-elliptic function, with poles at most at the zeros of θ(z^{±2}; p). But such a function must be constant. Plugging in the value z = u₁, the term with σ = −1 vanishes and we obtain

$$\displaystyle{(D(u_{1},u_{2},u_{3})1)(z) = \frac{\theta (u_{1}^{2},u_{2}u_{1},u_{3}u_{1},u_{2}u_{3};p)} {\theta (u_{1}u_{2},u_{1}u_{3},u_{2}u_{3},u_{1}^{2};p)} = 1\;.}$$
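Since the constant function is unaffected by the shift z ↦ q^{σ∕2}z, the claim (D(u₁, u₂, u₃)1)(z) = 1 only involves theta functions and can be checked numerically at an arbitrary point (our own sketch with ad hoc sample values):

```python
def theta(x, p, terms=60):
    out = 1.0
    for j in range(terms):
        out *= (1 - x * p**j) * (1 - p**(j + 1) / x)
    return out

def theta_prod(args, p):
    out = 1.0
    for x in args:
        out *= theta(x, p)
    return out

p = 0.1
u1, u2, u3 = 0.5, 0.7, 1.3
z = 0.9  # any point where theta(z^{+-2}; p) != 0

# the two sigma = +1, -1 terms of (D(u1, u2, u3) 1)(z)
val = 0.0
for sigma in (+1, -1):
    zs = z**sigma
    val += (theta_prod([u1 * zs, u2 * zs, u3 * zs, u1 * u2 * u3 / zs], p)
            / theta_prod([u1 * u2, u1 * u3, u2 * u3, z**(2 * sigma)], p))
dev = abs(val - 1.0)
```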

This identity can also be viewed as a form of the fundamental theta-function identity from Theorem 5.5. Moreover we can calculate that

$$\displaystyle\begin{array}{rcl} & & \langle D(t_{1},t_{2},t_{6})f,g\rangle _{t_{1},\ldots,t_{6}} {}\\ & & \qquad \qquad \quad =\langle f,D(q^{-1/2}t_{ 3},q^{-1/2}t_{ 4},q^{-1/2}t_{ 5})g\rangle _{q^{1/2}t_{1},q^{1/2}t_{2},q^{-1/2}t_{3},q^{-1/2}t_{4},q^{-1/2}t_{5},q^{1/2}t_{6}} {}\\ \end{array}$$

where we apply the difference equation θ(z; p)Γ_e(z) = Γ_e(qz) several times, split the sum into its two constituent parts, shift the integration variable z ↦ zq^{−σ∕2} in each part, and then recombine. Notice that we do not have to worry about the contour in the shift: we defined the contour as separating several families of poles, and that definition simply shifts the contour along with z. We also have to use the balancing condition ∏_{r=1}^{6} t_r = pq to equate θ(q^{1∕2}t₁t₂t₆z^σ; p) = θ(t₃t₄t₅∕(q^{3∕2}z^σ); p).

Specializing this to f = g = 1 we obtain for the constant term

$$\displaystyle{\langle 1,1\rangle _{t_{1},\ldots,t_{6}} =\langle 1,1\rangle _{q^{1/2}t_{1},q^{1/2}t_{2},q^{-1/2}t_{3},q^{-1/2}t_{4},q^{-1/2}t_{5},q^{1/2}t_{6}}\;.}$$

In particular we can shift three parameters by some half-integer power of q upwards, and three others downwards. Applying this twice we obtain

$$\displaystyle{\langle 1,1\rangle _{t_{1},\ldots,t_{6}} =\langle 1,1\rangle _{qt_{1},t_{2},t_{3},t_{4},t_{5},q^{-1}t_{6}}\;.}$$

Thus if we consider the constant term as a function of t₁ up to t₅ (with t₆ determined by the balancing condition), it is invariant under multiplying one of the parameters by an integer power of q. Due to p ↔ q symmetry it is also invariant under multiplication by integer powers of p.

Since the constant term is a meromorphic function of the parameters, and since for generic values of p and q the set \(p^{\mathbb{Z}}q^{\mathbb{Z}}\) has an accumulation point (other than 0 or infinity), we can conclude that the constant term is a constant function. It remains to see what the constant is.

Therefore we want to evaluate it at a single point. For t₁t₂ = 1 there is no contour of the desired shape, as the pole at z = t₁ should be included whereas the pole at z = 1∕t₂ should be excluded (and in this specialization they are at the same point). The same holds for the poles at z = t₂ and z = 1∕t₁. This problem can be resolved by first shifting the contour over the poles at z = t₁ and z = 1∕t₁, picking up the associated residues, and then specializing to t₁t₂ = 1 (which is then perfectly possible). Due to symmetry the residue at z = t₁ is minus the residue at z = 1∕t₁ (and we have to add the one at z = t₁ and subtract the one at z = 1∕t₁), so this gives a factor 2. The prefactor of the remaining integral contains the factor 1∕Γ_e(t₁t₂) = 0 at t₁t₂ = 1, so the remaining integral vanishes. The constant is thus equal to twice the residue at z = t₁, evaluated at t₁t₂ = 1. Thus we find

$$\displaystyle\begin{array}{rcl} & & \langle 1,1\rangle _{t_{1},\ldots,t_{6}} {}\\ & & \qquad \qquad \quad = 2\mathop{\mathrm{Res}}\bigg( \frac{(p;p)(q;q)} {2\prod _{\begin{array}{c}r,s=1 \\ r<s \end{array}}^{6}\varGamma _{\mathrm{e}}(t_{r}t_{s})} \frac{\prod _{r=1}^{6}\varGamma _{\mathrm{e}}(t_{r}z^{\pm 1})} {\varGamma _{\mathrm{e}}(z^{\pm 2})} \frac{1} {z},z = t_{1}\bigg)\bigg\vert _{t_{1}t_{2}=1} {}\\ & & \qquad \qquad \quad = \frac{(p;p)(q;q)} {\prod _{\begin{array}{c}r,s=1 \\ r<s \end{array}}^{6}\varGamma _{\mathrm{e}}(t_{r}t_{s})} \frac{\varGamma _{\mathrm{e}}(t_{1}^{2})\prod _{r=2}^{6}\varGamma _{\mathrm{e}}(t_{r}t_{1}^{\pm 1})} {\varGamma _{\mathrm{e}}(t_{1}^{\pm 2})} \mathop{\mathrm{Res}}\bigg(\frac{\varGamma _{\mathrm{e}}(t_{1}/z)} {z},z = t_{1}\bigg)\bigg\vert _{t_{1}t_{2}=1} {}\\ & & \qquad \qquad \quad = \frac{\prod _{r=2}^{6}\varGamma _{\mathrm{e}}(t_{r}/t_{1})} {\prod _{\begin{array}{c}r,s=2 \\ r<s \end{array}}^{6}\varGamma _{\mathrm{e}}(t_{r}t_{s})\varGamma _{\mathrm{e}}(t_{1}^{-2})}\bigg\vert _{t_{1}t_{2}=1} = \frac{1} {\prod _{\begin{array}{c}r,s=3 \\ r<s \end{array}}^{6}\varGamma _{\mathrm{e}}(t_{r}t_{s})}\bigg\vert _{t_{1}t_{2}=1} = 1 {}\\ \end{array}$$

where in the last step we use that Γ_e(x)Γ_e(pq∕x) = 1 and that t₃t₄ = pq∕(t₁t₂t₅t₆) = pq∕(t₅t₆) at t₁t₂ = 1 (and similar identities for the other pairs). □

To make the connection from integrals to elliptic hypergeometric series we can again use the technique of picking up residues (as we did for ordinary hypergeometric series). In order to specialize the elliptic beta integral to t₁t₂ = q^{−n} we have to change the contour by moving it over the poles at z = t₁q^k for 0 ≤ k ≤ n, and their inverses. After picking up these residues we can make the specialization. Due to the prefactor 1∕Γ_e(t₁t₂) of the integral, the remaining integral is multiplied by zero. Thus we are left with a sum of residues: \(\sum _{k=0}^{n}\mathop{ \mathrm{Res}}(\cdot,z = t_{1}q^{k})\). This will be a terminating elliptic hypergeometric series. In fact, in this way we will recover the original Frenkel–Turaev summation formula (Theorem 6.4).

A series obtained in this way will be an elliptic hypergeometric series for any integral \(\int \varDelta (z)\mathop{}\!\mathrm{d}z/(2\pi \mathrm{i}z)\) for which the integrand satisfies

$$\displaystyle{ \frac{\varDelta (qz)} {\varDelta (z)} = \frac{\varDelta (pqz)} {\varDelta (pz)} \;. }$$
(4)

In particular we will only consider integrals which satisfy this condition. For the elliptic beta integral it can be checked directly by using the difference equations of the elliptic Gamma function and some elementary identities of the theta function.
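Both checks are easy to carry out numerically: first the difference equation Γ_e(qz) = θ(z; p)Γ_e(z) of the elliptic gamma function, and then property (4) for the six-parameter elliptic beta integrand. As before, the function names, truncation orders and sample values in this sketch are ad hoc choices of ours.

```python
def theta(x, p, terms=40):
    out = 1.0
    for j in range(terms):
        out *= (1 - x * p**j) * (1 - p**(j + 1) / x)
    return out

def egamma(x, p, q, terms=40):
    """Elliptic gamma function Gamma_e(x; p, q), truncated double product."""
    out = 1.0
    for j in range(terms):
        for k in range(terms):
            out *= (1 - p**(j + 1) * q**(k + 1) / x) / (1 - x * p**j * q**k)
    return out

p, q = 0.2, 0.3

# difference equation Gamma_e(q z) = theta(z; p) Gamma_e(z)
z0 = 0.8
diff_err = abs(egamma(q * z0, p, q) / (theta(z0, p) * egamma(z0, p, q)) - 1)

# condition (4) for the elliptic beta integrand, with prod t_r = pq
t = [0.45, 0.55, 0.65, 0.75, 0.85]
t.append(p * q / (t[0] * t[1] * t[2] * t[3] * t[4]))

def delta(z):
    out = 1.0
    for tr in t:
        out *= egamma(tr * z, p, q) * egamma(tr / z, p, q)
    return out / (egamma(z * z, p, q) * egamma(1 / (z * z), p, q))

z = 1.1
cond_err = abs((delta(q * z) / delta(z)) / (delta(p * q * z) / delta(p * z)) - 1)
```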

Transformations for more complicated integrals can now be easily derived. For an elliptic beta integral with 8 parameters (satisfying a balancing condition) the symmetries can be obtained by iterating the identity below.

Theorem 7.2

For parameters satisfying ∏_{r=1}^{8} t_r = (pq)² define the integral

$$\displaystyle{ I(t_{1},\ldots,t_{8};p,q) = \frac{(p;p)(q;q)} {2} \int _{\mathcal{C}}\frac{\prod _{r=1}^{8}\varGamma _{\mathrm{e}}(t_{r}z^{\pm 1})} {\varGamma _{\mathrm{e}}(z^{\pm 2})} \frac{\mathrm{d}z} {2\pi \mathrm{i}z} }$$

where the contour circles the origin in positive direction, separating the poles at \(z = t_{r}p^{\mathbb{Z}_{\geq 0}}q^{\mathbb{Z}_{\geq 0}}\) from those at \(z = t_{r}^{-1}p^{\mathbb{Z}_{\leq 0}}q^{\mathbb{Z}_{\leq 0}}\). If \(\vert t_{r}\vert <1\) then we can take the unit circle. Then we have

$$\displaystyle{I(t_{1},\ldots,t_{8};p,q) =\prod _{ \begin{array}{c}r,s=1 \\ r<s \end{array}}^{4}\varGamma _{ \mathrm{e}}(t_{r}t_{s})\prod _{\begin{array}{c}r,s=5 \\ r<s \end{array}}^{8}\varGamma _{ \mathrm{e}}(t_{r}t_{s})I\bigg(\frac{t_{1}} {\sigma },\ldots, \frac{t_{4}} {\sigma },t_{5}\sigma,\ldots,t_{8}\sigma;p,q\bigg)}$$

where σ² = t₁t₂t₃t₄∕(pq) = pq∕(t₅t₆t₇t₈).

Proof

The equation is obtained by evaluating the following double integral in two different orders (integrating first over z or first over w):

$$\displaystyle{\iint _{\mathcal{C}\times \mathcal{C}}\frac{\varGamma _{\mathrm{e}}(\sigma z^{\pm 1}w^{\pm 1})\prod _{r=1}^{4}\varGamma _{\mathrm{e}}((t_{r}/\sigma )z^{\pm 1})\prod _{r=5}^{8}\varGamma _{\mathrm{e}}(t_{r}w^{\pm 1})} {\varGamma _{\mathrm{e}}(z^{\pm 2},w^{\pm 2})} \frac{\mathrm{d}z} {2\pi \mathrm{i}z} \frac{\mathrm{d}w} {2\pi \mathrm{i}w}\;.}$$

If you turn this integral identity into a series identity by specializing two parameters to (for example) t₁t₈ = q^{−N}, you obtain a ₁₂V₁₁ series on both sides of the equation. As the same specialization can be applied to both sides of the integral transformation at once, you can derive transformation formulas for the ₁₂V₁₁ in this way.

Exercise 7.3

Show that an elliptic beta integral

$$\displaystyle{\int _{\mathcal{C}}\frac{\prod _{r=1}^{2m+4}\varGamma _{\mathrm{e}}(t_{r}z^{\pm 1})} {\varGamma _{\mathrm{e}}(z^{\pm 2})} \frac{\mathrm{d}z} {2\pi \mathrm{i}z}}$$

satisfies (4) if and only if \({\bigl (\prod _{r=1}^{2m+4}t_{r}\bigr )}^{2} = (pq)^{2m}\).

Exercise 7.4

Show that the beta integral

$$\displaystyle{ \frac{(p;p)(q;q)} {2\prod _{1\leq r<s\leq 2m+4}\varGamma _{\mathrm{e}}(t_{r}t_{s})}\int _{\mathcal{C}}\frac{\prod _{r=1}^{2m+4}\varGamma _{\mathrm{e}}(t_{r}z^{\pm 1})} {\varGamma _{\mathrm{e}}(z^{\pm 2})} \frac{dz} {2\pi iz}\;,\quad \prod _{r=1}^{2m+4}t_{ r} = (pq)^{m}}$$

evaluates at t₁t₂ = q^{−n} to the series

$$\displaystyle{\mathop{\mathrm{prefactor}}\nolimits _{2m+8}V _{2m+7}\bigg(t_{1}^{2};t_{ 1}t_{3},\ldots,t_{1}t_{2m+3}, \frac{q^{n+m}t_{1}} {t_{3}\cdots t_{2m+3}},q^{-n}\bigg)\;.}$$

Also determine the prefactor explicitly. Hint: Move the integration contour over an appropriate sequence of poles and pick up the associated residues.

Exercise 7.5

Obtain Frenkel–Turaev’s summation formula for a 10 V 9 as a special case of the elliptic beta integral evaluation.

Exercise 7.6

  1. (a)

    Assume the balancing condition a³q^{n+2} = b₁b₂b₃b₄b₅b₆ holds. Obtain the transformation formula

    $$\displaystyle\begin{array}{rcl} & & _{12}V _{11}(a;b_{1},b_{2},b_{3},b_{4},b_{5},b_{6},q^{-n};p,q) {}\\ & & = \frac{\theta \bigg(aq, \dfrac{aq} {b_{4}b_{5}}, \dfrac{aq} {b_{4}b_{6}}, \dfrac{aq} {b_{5}b_{6}};p\bigg)_{n}} {\theta \bigg(\dfrac{aq} {b_{4}}, \dfrac{aq} {b_{5}}, \dfrac{aq} {b_{6}}, \dfrac{aq} {b_{4}b_{5}b_{6}};p\bigg)_{n}} {}\\ & & \quad \times _{12}V _{11}\bigg( \frac{a^{2}q} {b_{1}b_{2}b_{3}}; \frac{aq} {b_{2}b_{3}}, \frac{aq} {b_{1}b_{3}}, \frac{aq} {b_{1}b_{2}},b_{4},b_{5},b_{6},q^{-n};p,q\bigg) {}\\ \end{array}$$

    as a special case of Theorem 7.2. Alternatively you can derive this directly by copying the proof of the transformation for the integrals, using a double sum and Frenkel–Turaev's summation formula.

  2. (b)

    Iterate the above formula to obtain

    $$\displaystyle\begin{array}{rcl} & & _{12}V _{11}(a;b_{1},b_{2},b_{3},b_{4},b_{5},b_{6},q^{-n};p,q) {}\\ & & = \frac{\theta \bigg(aq, \dfrac{b_{2}} {p}, \dfrac{aq} {b_{1}b_{3}}, \dfrac{aq} {b_{1}b_{4}}, \dfrac{aq} {b_{1}b_{5}}, \dfrac{aq} {b_{1}b_{6}};p\bigg)_{n}} {\theta \bigg(\dfrac{aq} {b_{1}}, \dfrac{b_{2}} {pb_{1}}, \dfrac{aq} {b_{3}}, \dfrac{aq} {b_{4}}, \dfrac{aq} {b_{5}}, \dfrac{aq} {b_{6}};p\bigg)_{n}} {}\\ & & \quad \times _{12}V _{11}\bigg( \frac{b_{1}} {q^{n}b_{2}};b_{1}, \frac{b_{1}} {aq^{n}}, \frac{aq} {b_{2}b_{3}}, \frac{aq} {b_{2}b_{4}}, \frac{aq} {b_{2}b_{5}}, \frac{aq} {b_{2}b_{6}},q^{-n}\bigg)\;. {}\\ \end{array}$$
  3. (c)

    Can you find more transformations satisfied by the ₁₂V₁₁ by iterating the transformation from part (a)? (You should be able to find two more, one of which is the inversion-of-summation transformation, which sends the summation index k ↦ n − k.)

8 Continuous Biorthogonality

This section discusses a slightly more general pair of families of biorthogonal functions than the one we considered before. The difference is similar to the difference between Wilson polynomials and Racah polynomials: a specialization of parameters in the Wilson polynomials gives the Racah polynomials, for which you obtain only a finite set of orthogonal polynomials (which then are orthogonal with respect to a discrete measure). The discussion here follows the work of Eric Rains [5, 6], who actually generalized the theory to multivariate biorthogonality (such as how Koornwinder polynomials generalize Askey–Wilson polynomials). In these notes we restrict ourselves to the univariate case, though, which makes some formulas more explicit.

The relevant functions are defined as

Definition 8.1

Suppose t₁t₂t₃t₄u₁u₂ = pq; then we define

$$\displaystyle\begin{array}{rcl} & & R_{n}(z;t_{1}: t_{2},t_{3},t_{4};u_{1},u_{2}) {}\\ & & \qquad \qquad \qquad \qquad \qquad \quad = _{12}V _{11}\bigg( \frac{t_{1}} {u_{1}}; \frac{pq^{n}} {u_{1}u_{2}},q^{-n},t_{ 1}z^{\pm 1}, \frac{q} {u_{1}t_{2}}, \frac{q} {u_{1}t_{3}}, \frac{q} {u_{1}t_{4}};p,q\bigg)\:. {}\\ \end{array}$$

Recall the bilinear form introduced to prove the elliptic beta integral (2). These functions are biorthogonal with respect to this form. To be precise we have

Theorem 8.2

We have

$$\displaystyle\begin{array}{rcl} & & \langle R_{n}( \cdot;t_{1}: t_{2},t_{3},t_{4};u_{1},u_{2}),R_{m}( \cdot;t_{1}: t_{2},t_{3},t_{4};u_{2},u_{1})\rangle _{t_{1},t_{2},t_{3},t_{4},u_{1},u_{2}} {}\\ & & \qquad \qquad \qquad =\delta _{n,m} \frac{\theta \bigg( \dfrac{p} {u_{1}u_{2}};p\bigg)_{2n}\theta \bigg(q,t_{2}t_{3},t_{2}t_{4},t_{3}t_{4}, \dfrac{qt_{1}} {u_{1}}, \dfrac{pqt_{1}} {u_{2}};p\bigg)_{n}} {\theta \bigg( \dfrac{pq} {u_{1}u_{2}};p\bigg)_{2n}\theta \bigg( \dfrac{p} {u_{1}u_{2}},t_{1}t_{2},t_{1}t_{3},t_{1}t_{4}, \dfrac{p} {t_{1}u_{2}}, \dfrac{1} {t_{1}u_{1}};p\bigg)_{n}}q^{-n}\;. {}\\ \end{array}$$

Observe that u₁ and u₂ are interchanged in the second function; hence we have biorthogonality and not plain orthogonality. After specializing t₁t₂ = q^{−N} this biorthogonality reduces to the biorthogonality from Theorem 6.3 (see Exercise 8.6). The proof of this biorthogonality follows at the end of these notes.

The biorthogonal functions are self-dual:

Theorem 8.3

We have

$$\displaystyle{R_{n}(t_{1}q^{k};t_{ 1}: t_{2},t_{3},t_{4};u_{1},u_{2}) = R_{k}(\hat{t}_{1}q^{n};\hat{t}_{ 1}:\hat{t}_{2},\hat{t}_{3},\hat{t}_{4};\hat{u}_{1},\hat{u}_{2})}$$

for dual parameters

$$\displaystyle{\hat{t}_{1} = \sqrt{\frac{t_{1 } t_{2 } t_{3 } t_{4 } } {pq}} \;,\quad \hat{t}_{1}\hat{t}_{r} = t_{1}t_{r}\ (r = 2,3,4)\;,\quad \frac{\hat{t}_{1}} {\hat{u}_{r}} = \frac{t_{1}} {u_{r}}\ (r = 1,2)\;.}$$

Proof

This can be seen by plugging the values directly into the definition. □
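The self-duality can also be tested numerically by implementing the terminating ₁₂V₁₁ in its standard form. The sketch below (function names, the convention spelled out in the docstrings, and all sample values) is our own:

```python
import math

def theta(x, p, terms=60):
    out = 1.0
    for j in range(terms):
        out *= (1 - x * p**j) * (1 - p**(j + 1) / x)
    return out

def theta_poch(x, p, q, k):
    """theta(x; p)_k = prod_{j < k} theta(x q^j; p)."""
    out = 1.0
    for j in range(k):
        out *= theta(x * q**j, p)
    return out

def V12_11(a, bs, n, p, q):
    """Terminating 12V11(a; b_1, ..., b_6, q^{-n}; p, q), standard convention."""
    total = 0.0
    params = list(bs) + [q**(-n)]
    for k in range(n + 1):
        term = theta(a * q**(2 * k), p) / theta(a, p) * q**k
        term *= theta_poch(a, p, q, k) / theta_poch(q, p, q, k)
        for b in params:
            term *= theta_poch(b, p, q, k) / theta_poch(a * q / b, p, q, k)
        total += term
    return total

def R(n, z, t, u, p, q):
    """R_n(z; t_1 : t_2, t_3, t_4; u_1, u_2) of Definition 8.1."""
    t1, t2, t3, t4 = t
    u1, u2 = u
    bs = [p * q**n / (u1 * u2), t1 * z, t1 / z,
          q / (u1 * t2), q / (u1 * t3), q / (u1 * t4)]
    return V12_11(t1 / u1, bs, n, p, q)

p, q = 0.1, 0.2
t = [0.9, 0.8, 0.7, 0.6]
u1 = 0.3
u = [u1, p * q / (math.prod(t) * u1)]  # balancing t1 t2 t3 t4 u1 u2 = pq

# dual parameters of Theorem 8.3
t1h = math.sqrt(math.prod(t) / (p * q))
th = [t1h] + [t[0] * tr / t1h for tr in t[1:]]
uh = [ur * t1h / t[0] for ur in u]

n, k = 2, 1
lhs = R(n, t[0] * q**k, t, u, p, q)
rhs = R(k, t1h * q**n, th, uh, p, q)
rel_err = abs(lhs - rhs) / abs(lhs)
```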

The biorthogonal functions are “eigenfunctions” of the difference operator from (3) (with “eigenvalue” 1).

Theorem 8.4

We have

$$\displaystyle\begin{array}{rcl} & & D(u_{1},t_{1},t_{2})R_{n}( \cdot;q^{1/2}t_{ 1}: q^{1/2}t_{ 2},q^{-1/2}t_{ 3},q^{-1/2}t_{ 4};q^{1/2}u_{ 1},q^{-1/2}u_{ 2}) {}\\ & & \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad = R_{n}( \cdot;t_{1}: t_{2},t_{3},t_{4};u_{1},u_{2})\;. {}\\ \end{array}$$

Proof

You can apply the difference operator to the individual terms on the left-hand side and equate the results term by term. The required theta function identity is (what else) Theorem 5.5. □

On the level of elliptic hypergeometric series the previous result is an example of a contiguous relation. It relates three series whose parameters are almost equal: they differ only by some (integer) powers of q. There is an identity between any three ₁₂V₁₁'s whose parameters differ by some integer powers of q, though making it explicit is often quite a lot of work.

A less trivial result is the fact that the biorthogonal functions are symmetric under permutations of t₁, t₂, t₃ and t₄ (up to a factor independent of z). From the definition as a series you can only read off the permutation symmetry of t₂, t₃, and t₄. Indeed we have

Theorem 8.5

We have

$$\displaystyle{R_{n}(z;t_{2}: t_{1},t_{3},t_{4};u_{1},u_{2}) = \frac{R_{n}(z;t_{1}: t_{2},t_{3},t_{4};u_{1},u_{2})} {R_{n}(t_{2};t_{1}: t_{2},t_{3},t_{4};u_{1},u_{2})}}$$

and

$$\displaystyle{R_{n}(t_{2};t_{1}: t_{2},t_{3},t_{4};u_{1},u_{2}) = \frac{\theta (t_{2}t_{3},t_{2}t_{4},p/(u_{2}t_{2}),qt_{1}/u_{1};p)_{n}} {\theta (qt_{2}/u_{1},t_{1}t_{3},t_{1}t_{4},p/(t_{1}u_{2});p)_{n}}\;.}$$

Proof

The evaluation of R_n(t₂; t₁: t₂, t₃, t₄; u₁, u₂) is Frenkel–Turaev's summation formula, Theorem 6.4. We can apply it after cancelling identical parameters in the numerator and the denominator of the elliptic hypergeometric series.

The t₁ ↔ t₂ symmetry of the biorthogonal functions is given by a transformation formula for ₁₂V₁₁'s. This formula can be derived as a discrete version of the transformation for elliptic beta integrals from Theorem 7.2 by setting the product of two parameters equal to q^{−n}, see Exercise 7.6. □
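Since the evaluation in Theorem 8.5 is Frenkel–Turaev in disguise, it provides another quick numerical consistency check on the formulas of this section (the sketch is ours, with the same ad hoc standard-convention implementation and arbitrary admissible parameters):

```python
def theta(x, p, terms=60):
    out = 1.0
    for j in range(terms):
        out *= (1 - x * p**j) * (1 - p**(j + 1) / x)
    return out

def theta_poch(x, p, q, k):
    out = 1.0
    for j in range(k):
        out *= theta(x * q**j, p)
    return out

def V12_11(a, bs, n, p, q):
    """Terminating 12V11(a; b_1, ..., b_6, q^{-n}; p, q), standard convention."""
    total = 0.0
    params = list(bs) + [q**(-n)]
    for k in range(n + 1):
        term = theta(a * q**(2 * k), p) / theta(a, p) * q**k
        term *= theta_poch(a, p, q, k) / theta_poch(q, p, q, k)
        for b in params:
            term *= theta_poch(b, p, q, k) / theta_poch(a * q / b, p, q, k)
        total += term
    return total

def R(n, z, t, u, p, q):
    """R_n(z; t_1 : t_2, t_3, t_4; u_1, u_2) of Definition 8.1."""
    t1, t2, t3, t4 = t
    u1, u2 = u
    bs = [p * q**n / (u1 * u2), t1 * z, t1 / z,
          q / (u1 * t2), q / (u1 * t3), q / (u1 * t4)]
    return V12_11(t1 / u1, bs, n, p, q)

p, q, n = 0.1, 0.2, 2
t1, t2, t3, t4 = 0.9, 0.8, 0.7, 0.6
u1 = 0.3
u2 = p * q / (t1 * t2 * t3 * t4 * u1)  # balancing condition

# evaluation of Theorem 8.5 at z = t2
lhs = R(n, t2, [t1, t2, t3, t4], [u1, u2], p, q)
rhs = (theta_poch(t2 * t3, p, q, n) * theta_poch(t2 * t4, p, q, n)
       * theta_poch(p / (u2 * t2), p, q, n) * theta_poch(q * t1 / u1, p, q, n)
       / (theta_poch(q * t2 / u1, p, q, n) * theta_poch(t1 * t3, p, q, n)
          * theta_poch(t1 * t4, p, q, n) * theta_poch(p / (t1 * u2), p, q, n)))
rel_err = abs(lhs - rhs) / abs(rhs)
```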

Given the symmetry above and the difference operators, we find that the R_n are "eigenfunctions" of D(u₁, t₂, t₃) (etc.) with generically different "eigenvalues" for different n (as we will see shortly). Since the D's are "self-adjoint," we would expect biorthogonality to follow directly. Unfortunately the fact that the parameters change after applying the difference operators or taking the adjoint means that the standard proof does not apply anymore. Thus we have to resort to a slightly different proof.

Proof of biorthogonality

Using the symmetry of the biorthogonal functions and the effect of the difference operator D(u₁, t₁, t₂), we find that

$$\displaystyle\begin{array}{rcl} & & D(u_{1},t_{3},t_{4})R_{n}( \cdot;q^{-1/2}t_{ 1}: q^{-1/2}t_{ 2},q^{1/2}t_{ 3},q^{1/2}t_{ 4};q^{1/2}u_{ 1},q^{-1/2}u_{ 2}) {}\\ & & \qquad \qquad \qquad = \frac{\theta (t_{1}t_{2}q^{n-1},q^{n}t_{3}t_{4},u_{2}t_{1},t_{1}/u_{1};p)} {\theta (t_{1}t_{2}/q,t_{3}t_{4},u_{2}t_{1}q^{-n},(t_{1}/u_{1})q^{n};p)}R_{n}( \cdot;t_{1}: t_{2},t_{3},t_{4};u_{1},u_{2})\;. {}\\ \end{array}$$

Thus, as the result of applying two difference operators in succession, we obtain

$$\displaystyle\begin{array}{rcl} & & D(u_{1},t_{1},t_{2})D(q^{1/2}u_{ 1},q^{-1/2}t_{ 3},q^{-1/2}t_{ 4})R_{n}( \cdot;t_{1}: t_{2},t_{3},t_{4};qu_{1},u_{2}/q) {}\\ & & \qquad \qquad \qquad = \frac{\theta (t_{1}t_{2}q^{n},q^{n-1}t_{3}t_{4},u_{2}t_{1},t_{1}/u_{1};p)} {\theta (t_{1}t_{2},t_{3}t_{4}/q,u_{2}t_{1}q^{-n},(t_{1}/u_{1})q^{n};p)}R_{n}( \cdot;t_{1}: t_{2},t_{3},t_{4};u_{1},u_{2}) {}\\ \end{array}$$

and likewise

$$\displaystyle\begin{array}{rcl} & & D(u_{1},t_{1},t_{3})D(q^{1/2}u_{ 1},q^{-1/2}t_{ 2},q^{-1/2}t_{ 4})R_{n}( \cdot;t_{1}: t_{2},t_{3},t_{4};qu_{1},u_{2}/q) {}\\ & & \qquad \qquad \qquad = \frac{\theta \bigg(t_{1}t_{3}q^{n},q^{n-1}t_{2}t_{4},u_{2}t_{1},t_{1}/u_{1};p\bigg)} {\theta \bigg(t_{1}t_{3}, \dfrac{t_{2}t_{4}} {q},u_{2}t_{1}q^{-n},(t_{1}/u_{1})q^{n};p\bigg)}R_{n}( \cdot;t_{1}: t_{2},t_{3},t_{4};u_{1},u_{2})\:. {}\\ \end{array}$$

Now notice that we have two different (second order) difference operators, which map the same old R_n to the same new R_n, but with different factors. In particular we find

$$\displaystyle\begin{array}{rcl} & & D(u_{1},t_{1},t_{2})D(q^{1/2}u_{ 1},q^{-1/2}t_{ 3},q^{-1/2}t_{ 4})R_{n}( \cdot;t_{1}: t_{2},t_{3},t_{4};qu_{1},u_{2}/q) {}\\ & & \quad = \frac{\theta (t_{1}t_{2}q^{n},q^{n-1}t_{3}t_{4},t_{1}t_{3},t_{2}t_{4}/q;p)} {\theta (t_{1}t_{2},t_{3}t_{4}/q,t_{1}t_{3}q^{n},q^{n-1}t_{2}t_{4};p)} {}\\ & & \quad \times D(u_{1},t_{1},t_{3})D(q^{1/2}u_{ 1},q^{-1/2}t_{ 2},q^{-1/2}t_{ 4})R_{n}( \cdot;t_{1}: t_{2},t_{3},t_{4};qu_{1},u_{2}/q)\;. {}\\ \end{array}$$

Now we observe that these generalized eigenvalues are different for different choices of n (for generic values of the t_r). Let us write

$$\displaystyle\begin{array}{rcl} L_{1}& =& D(u_{1},t_{1},t_{2})D(q^{1/2}u_{ 1},q^{-1/2}t_{ 3},q^{-1/2}t_{ 4})\;, {}\\ L_{2}& =& D(u_{1},t_{1},t_{3})D(q^{1/2}u_{ 1},q^{-1/2}t_{ 2},q^{-1/2}t_{ 4})\;, {}\\ L_{1}^{{\ast}}& =& D(u_{ 2}/q,t_{1},t_{2})D(q^{-1/2}u_{ 2},q^{-1/2}t_{ 3},q^{-1/2}t_{ 4})\;, {}\\ L_{2}^{{\ast}}& =& D(u_{ 2}/q,t_{1},t_{3})D(q^{-1/2}u_{ 2},q^{-1/2}t_{ 2},q^{-1/2}t_{ 4})\;, {}\\ \end{array}$$

and then re-express the difference equations as

$$\displaystyle\begin{array}{rcl} & & L_{i}R_{n}( \cdot;t_{1}: t_{2},t_{3},t_{4};qu_{1},u_{2}/q) {}\\ & & \qquad \qquad \qquad \qquad \qquad \qquad =\lambda _{i,n}(t_{1},t_{2},t_{3},t_{4},u_{1},u_{2})R_{n}( \cdot;t_{1}: t_{2},t_{3},t_{4};u_{1},u_{2})\;. {}\\ \end{array}$$

Note that L_i^∗ is the same as L_i with u₁ ↦ u₂∕q, so we also have the difference equations

$$\displaystyle\begin{array}{rcl} & & L_{i}^{{\ast}}R_{ n}( \cdot;t_{1}: t_{2},t_{3},t_{4};u_{2},u_{1}) {}\\ & & \qquad \qquad \qquad \qquad \qquad =\lambda _{ i,n}^{{\ast}}(t_{ 1},t_{2},t_{3},t_{4},u_{1},u_{2})R_{n}( \cdot;t_{1}: t_{2},t_{3},t_{4};u_{2}/q,qu_{1})\;. {}\\ \end{array}$$

Now we can calculate

$$\displaystyle\begin{array}{rcl} & & \langle R_{n}( \cdot;t_{1}: t_{2},t_{3},t_{4};u_{1},u_{2}),R_{m}( \cdot;t_{1}: t_{2},t_{3},t_{4};u_{2},u_{1})\rangle _{t_{r},u_{r}} {}\\ & & \quad = \frac{1} {\lambda _{1,n}}\langle L_{1}R_{n}( \cdot;t_{1}: t_{2},t_{3},t_{4};qu_{1},u_{2}/q),R_{m}( \cdot;t_{1}: t_{2},t_{3},t_{4};u_{2},u_{1})\rangle _{t_{r},u_{r}} {}\\ & & \quad = \frac{1} {\lambda _{1,n}}\langle R_{n}( \cdot;t_{1}: t_{2},t_{3},t_{4};qu_{1},u_{2}/q),L_{1}^{{\ast}}R_{ m}( \cdot;t_{1}: t_{2},t_{3},t_{4};u_{2},u_{1})\rangle _{t_{r},qu_{1},u_{2}/q} {}\\ & & \quad =\frac{\lambda _{1,m}^{{\ast}}} {\lambda _{1,n}} \langle R_{n}( \cdot;t_{1}: t_{2},t_{3},t_{4};qu_{1},u_{2}/q),R_{m}( \cdot;t_{1}: t_{2},t_{3},t_{4};u_{2}/q,qu_{1})\rangle _{t_{r},qu_{1},u_{2}/q} {}\\ & & \quad =\frac{\lambda _{2,n}\lambda _{1,m}^{{\ast}}} {\lambda _{1,n}\lambda _{2,m}^{{\ast}}}\langle R_{n}( \cdot;t_{1}: t_{2},t_{3},t_{4};u_{1},u_{2}),R_{m}( \cdot;t_{1}: t_{2},t_{3},t_{4};u_{2},u_{1})\rangle _{t_{r},u_{r}}\;. {}\\ \end{array}$$

Thus we see that the bilinear form vanishes unless the prefactor equals 1. However we have

$$\displaystyle\begin{array}{rcl} & & \frac{\lambda _{2,n}\lambda _{1,m}^{{\ast}}} {\lambda _{1,n}\lambda _{2,m}^{{\ast}}} {}\\ & &\qquad = \frac{\theta \bigg(t_{1}t_{2}, \dfrac{t_{3}t_{4}} {q},u_{2}t_{1}q^{-n}, \dfrac{t_{1}} {u_{1}}q^{n};p\bigg)} {\theta \bigg(t_{1}t_{2}q^{n},q^{n-1}t_{3}t_{4},u_{2}t_{1}, \dfrac{t_{1}} {u_{1}};p\bigg)} \frac{\theta \bigg(t_{1}t_{3}q^{n},q^{n-1}t_{2}t_{4},u_{2}t_{1}, \dfrac{t_{1}} {u_{1}};p\bigg)} {\theta \bigg(t_{1}t_{3}, \dfrac{t_{2}t_{4}} {q},u_{2}t_{1}q^{-n}, \dfrac{t_{1}} {u_{1}}q^{n};p\bigg)} {}\\ & & \qquad \qquad \times \frac{\theta \bigg(t_{1}t_{2}q^{m},q^{m-1}t_{3}t_{4},qu_{1}t_{1}, \dfrac{qt_{1}} {u_{2}};p\bigg)} {\theta \bigg(t_{1}t_{2}, \dfrac{t_{3}t_{4}} {q},qu_{1}t_{1}q^{-m}, \dfrac{qt_{1}} {u_{2}} q^{m};p\bigg)} \frac{\theta \bigg(t_{1}t_{3}, \dfrac{t_{2}t_{4}} {q},qu_{1}t_{1}q^{-m}, \dfrac{qt_{1}} {u_{2}} q^{m};p\bigg)} {\theta \bigg(t_{1}t_{3}q^{m},q^{m-1}t_{2}t_{4},qu_{1}t_{1}, \dfrac{qt_{1}} {u_{2}};p\bigg)} {}\\ & & \qquad = \frac{\theta (t_{1}t_{2}q^{m},q^{m-1}t_{3}t_{4},t_{1}t_{3}q^{n},q^{n-1}t_{2}t_{4};p)} {\theta (t_{1}t_{2}q^{n},q^{n-1}t_{3}t_{4},t_{1}t_{3}q^{m},q^{m-1}t_{2}t_{4};p)}\;. {}\\ \end{array}$$

And thus the prefactor is generically not equal to 1 (except when n = m). □

We will not prove the squared-norm formula for the biorthogonal functions with respect to the continuous measure. There are several proofs in the quoted literature. The proof in [6] proceeds by induction using a raising operator, and you can explore this in the exercises. This raising operator also gives rise to a Rodrigues-type formula for the biorthogonal functions: If you define

$$\displaystyle\begin{array}{rcl} & & D_{+}(u_{1}: u_{2}: u_{3},u_{4},u_{5})f(z) {}\\ & & \qquad \qquad \qquad \quad = \frac{\theta (pqu_{2}/u_{1};p)} {\theta (u_{2}u_{3},u_{2}u_{4},u_{2}u_{5},u_{2}u_{6};p)}\sum _{\sigma =\pm 1} \frac{\prod _{r=2}^{6}\theta (u_{r}z^{\sigma };p)} {\theta (pqz^{\sigma }/u_{1},z^{2\sigma };p)}f(q^{\sigma /2}z) {}\\ \end{array}$$

where

$$\displaystyle{u_{6} = \frac{p^{2}q} {u_{1}u_{2}u_{3}u_{4}u_{5}}\;,}$$

then the following identity holds:

$$\displaystyle\begin{array}{rcl} & & D_{+}(u_{1}: t_{1}: t_{2},t_{3},t_{4})R_{n}( \cdot;q^{1/2}t_{ 1}: q^{1/2}t_{ 2},q^{1/2}t_{ 3},q^{1/2}t_{ 4};q^{-1/2}u_{ 1},q^{-3/2}u_{ 2}) \\ & & \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad = R_{n+1}( \cdot;t_{1}: t_{2},t_{3},t_{4};u_{1},u_{2}) {}\end{array}$$
(5)

Indeed, you can now write $R_n$ as the result of applying $n$ difference operators consecutively to the constant function 1, which is a Rodrigues-type formula.
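The operator $D_+$ can be transcribed into code almost verbatim, which is convenient for experimenting with the Rodrigues-type formula. The sketch below is only a numerical aid (the truncation depth and all parameter values are arbitrary test data, and `D_plus`, `theta_prod` are hypothetical helper names); as a structural check it verifies that $D_+$ maps functions invariant under $z \mapsto 1/z$ to functions with the same invariance, which follows directly from the $\sigma = \pm 1$ sum in the definition.

```python
def theta(x, p, terms=60):
    # theta(x; p) = prod_{j>=0} (1 - x p^j)(1 - p^{j+1}/x), truncated
    out = 1.0
    for j in range(terms):
        out *= (1 - x * p**j) * (1 - p**(j + 1) / x)
    return out

def theta_prod(args, p):
    # shorthand theta(a_1, ..., a_k; p) = theta(a_1; p) ... theta(a_k; p)
    out = 1.0
    for a in args:
        out *= theta(a, p)
    return out

def D_plus(u1, u2, u3, u4, u5, p, q, f):
    # u6 is fixed by the balancing condition u6 = p^2 q / (u1 u2 u3 u4 u5)
    u6 = p**2 * q / (u1 * u2 * u3 * u4 * u5)
    us = (u2, u3, u4, u5, u6)
    pref = theta(p * q * u2 / u1, p) / theta_prod(
        [u2 * u3, u2 * u4, u2 * u5, u2 * u6], p)

    def Df(z):
        total = 0.0
        for sigma in (+1, -1):
            zs = z**sigma
            num = theta_prod([ur * zs for ur in us], p)
            den = theta_prod([p * q * zs / u1, zs**2], p)
            total += num / den * f(q**(sigma / 2) * z)
        return pref * total
    return Df

# D_+ preserves invariance under z -> 1/z: if f(z) = f(1/z), so is D_+ f.
p, q = 0.05, 0.3
f = lambda w: w + 1 / w              # a symmetric test function
Df = D_plus(1.1, 1.2, 1.3, 1.4, 1.5, p, q, f)
z = 0.8
assert abs(Df(z) - Df(1 / z)) < 1e-10 * (1 + abs(Df(z)))
```

Iterating such an operator on the constant function 1, with the parameter shifts of (5) applied at each step, would implement the Rodrigues-type construction numerically.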

Exercise 8.6

  (a)

    Specialize the biorthogonal functions of this section at $t_1t_2 = q^{-N}$ and $z = t_1q^k$. Now show that there is a change of parameters which turns the resulting functions into the biorthogonal functions of Sect. 6. That is, you want an identity

    $$\displaystyle{R_{l}(t_{1}q^{k};t_{ 1}: 1/(t_{1}q^{N}),t_{ 3},t_{4};u_{1},u_{2}) =\mathop{ \mathrm{prefactor}}\nolimits R_{l}^{k}(a,b,c,d;N)\;,}$$

    where $t_r = t_r(a,b,c,d,N)$ and $u_r = u_r(a,b,c,d,N)$.

  (b)

    Check that interchanging $u_1$ and $u_2$ corresponds to changing $R_l^k(a,b,c,d;N)$ to $R_k^l(c,d,a,b;N)$.

You can now check that the biorthogonality for the $R_k^l$ of Sect. 6 is indeed a special case of the biorthogonality of the $R_l$ from this section. Note that in the special case $t_1t_2 = q^{-N}$ there is no proper contour for the integral defining the biorthogonality. In order to make sense of the integral, you have to move the contour over the poles at $z = t_1, t_1q, \ldots, t_1q^N$ before taking this specialization, thus creating discrete point masses in the measure at these values.

In the final exercises you can explore difference equations satisfied by the biorthogonal functions.

Exercise 8.7

In this exercise you will derive the elliptic hypergeometric equation: the second-order difference equation satisfied by the biorthogonal functions, generalizing the Askey–Wilson difference equation.

  (a)

    Show that

    $$\displaystyle\begin{array}{rcl} & & _{12}V _{11}(a;b_{1},b_{2},b_{3},b_{4},b_{5},b_{6},q^{-n};p,q) {}\\ & & \quad = \frac{\theta \bigg(b_{2}, \dfrac{b_{2}} {a}, \dfrac{b_{1}} {b_{3}q}, \dfrac{b_{1}b_{3}} {aq};p\bigg)} {\theta \bigg(\dfrac{b_{1}} {q}, \dfrac{b_{1}} {aq}, \dfrac{b_{2}} {b_{3}}, \dfrac{b_{2}b_{3}} {a};p\bigg)} _{12}V _{11}(a;b_{1}/q,b_{2}q,b_{3},b_{4},b_{5},b_{6},q^{-n};p,q) {}\\ & & \qquad \quad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad + (b_{2} \leftrightarrow b_{3}) {}\\ \end{array}$$

    by considering this identity term by term. The notation means that there is a second term on the right-hand side, identical to the first except with $b_2$ and $b_3$ interchanged.

  (b)

    Use the transformation formula from Exercise 7.6(a) on all three terms of the relation above to obtain other contiguous relations, in particular the relation

    $$\displaystyle\begin{array}{rcl} & & _{12}V _{11}(a;b_{1},b_{2},b_{3},b_{4},b_{5},b_{6},q^{-n};p,q) {}\\ & & \qquad \quad = \frac{\theta \bigg( \dfrac{a} {b_{1}}, \dfrac{b_{2}} {aq^{n+1}}, \dfrac{b_{3}} {b_{1}q};p\bigg)} {\theta \bigg( \dfrac{b_{1}} {aq^{n}}, \dfrac{aq} {b_{2}}, \dfrac{b_{3}} {b_{2}};p\bigg)} \prod _{r=4}^{6}\frac{\theta \bigg( \dfrac{aq} {b_{2}b_{r}};p\bigg)} {\theta \bigg( \dfrac{a} {b_{1}b_{r}};p\bigg)} {}\\ & & \qquad \qquad \qquad \qquad \qquad \times _{12}V _{11}(a;b_{1}q,b_{2}/q,b_{3},b_{4},b_{5},b_{6},q^{-n};p,q) {}\\ & & \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad + (b_{2} \leftrightarrow b_{3}) {}\\ \end{array}$$
  (c)

    Combine the contiguous relations above to obtain a relation

    $$\displaystyle\begin{array}{rcl} \begin{array}{rcl} &&\left (\frac{\theta \bigg( \dfrac{q} {b_{1}}, \dfrac{b_{1}} {aq^{n+1}},\dfrac{b_{1}q} {b_{2}},\dfrac{b_{2}} {b_{3}},\dfrac{b_{2}b_{3}} {a}, \dfrac{aq} {b_{1}b_{4}}, \dfrac{aq} {b_{1}b_{5}}, \dfrac{aq} {b_{1}b_{6}};p\bigg)} {\theta \bigg(\dfrac{b_{2}} {b_{1}},\dfrac{b_{3}q} {b_{1}};p\bigg)} \right. \\ && + \frac{\theta \bigg( \dfrac{q} {b_{2}}, \dfrac{b_{2}} {aq^{n+1}},\dfrac{b_{2}q} {b_{1}},\dfrac{b_{1}} {b_{3}},\dfrac{b_{1}b_{3}} {a}, \dfrac{aq} {b_{2}b_{4}}, \dfrac{aq} {b_{2}b_{5}}, \dfrac{aq} {b_{2}b_{6}};p\bigg)} {\theta \bigg(\dfrac{b_{1}} {b_{2}},\dfrac{b_{3}q} {b_{2}};p\bigg)} \\ & & \left. + \frac{\theta \bigg(\dfrac{b_{1}q} {b_{2}},\dfrac{b_{2}q} {b_{1}},\dfrac{b_{1}b_{2}} {aq}, \dfrac{1} {b_{3}}, \dfrac{b_{3}} {aq^{n}}, \dfrac{a} {b_{3}b_{4}}, \dfrac{a} {b_{3}b_{5}}, \dfrac{a} {b_{3}b_{6}};p\bigg)} {\theta \bigg( \dfrac{b_{1}} {qb_{3}}, \dfrac{b_{2}} {qb_{3}};p\bigg)} \right ) \\ &&\qquad \qquad \qquad \times _{12}V _{11}(a;b_{1},b_{2},b_{3},b_{4},b_{5},b_{6},q^{-n};p,q) \\ &&\qquad \qquad = \frac{\theta \bigg(b_{1},\dfrac{b_{1}} {a}, \dfrac{b_{2}} {aq^{n+1}},\dfrac{b_{2}q} {b_{1}};p\bigg)} {\theta \bigg(\dfrac{aq} {b_{2}},\dfrac{b_{1}} {b_{2}};p\bigg)} \prod _{r=3}^{6}\theta \bigg( \frac{aq} {b_{2}b_{r}};p\bigg) \\ &&\qquad \qquad \quad \times _{12}V _{11}(a;b_{1}q,b_{2}/q,b_{3},b_{4},b_{5},b_{6},q^{-n};p,q) \\ &&\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad + (b_{1} \leftrightarrow b_{2})\hspace{-12.0pt}\end{array} & & {}\\ \end{array}$$
Hint:

You need to use one of the two relations above twice. Recall that the ${}_{12}V_{11}$ is symmetric under permutations of its six $b_r$-parameters; use this symmetry to obtain different versions of the same contiguous relation.

The coefficient on the left-hand side is of course unique, but it can be written in many ways as a sum of three products of theta functions. For example, the left-hand side of this equation is symmetric under interchanging $b_3$ and $b_4$, which is not apparent from its explicit expression. It might well be that you found a different expression; to show equality between any pair of these different expressions you need to use Theorem 5.5.

  (d)

    Derive a difference equation of the form

    $$\displaystyle{A(z)R_{n}(qz) + B(z)R_{n}(z/q) = C(z)R_{n}(z)}$$

    using the above contiguous relation.
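For numerical experiments with such contiguous relations it helps to have the terminating series itself in code. The sketch below assumes the conventions of the earlier sections ($\theta(x;p) = \prod_{j\ge0}(1-xp^j)(1-p^{j+1}/x)$, the elliptic shifted factorial $(a;q,p)_k = \prod_{j=0}^{k-1}\theta(aq^j;p)$, and the standard very-well-poised sum); all numerical parameter values are arbitrary test data, and `V`, `qfac` are hypothetical helper names. As a sanity check of the implementation it verifies the Frenkel–Turaev summation for the ${}_{10}V_9$, and it also exhibits the permutation symmetry in the $b_r$-parameters used in the hint above.

```python
# Terminating very-well-poised elliptic hypergeometric series, e.g.
# 12V11(a; b1,...,b6, q^{-n}; p, q)
#   = sum_{k=0}^{n} theta(a q^{2k};p)/theta(a;p) * (a;q,p)_k/(q;q,p)_k
#                   * q^k * prod_r (b_r;q,p)_k / (aq/b_r;q,p)_k .
def theta(x, p, terms=60):
    out = 1.0
    for j in range(terms):
        out *= (1 - x * p**j) * (1 - p**(j + 1) / x)
    return out

def qfac(a, k, p, q):
    # elliptic shifted factorial (a; q, p)_k = prod_{j<k} theta(a q^j; p)
    out = 1.0
    for j in range(k):
        out *= theta(a * q**j, p)
    return out

def V(a, bs, n, p, q):
    # terminating series with numerator parameters bs together with q^{-n}
    params = list(bs) + [q**(-n)]
    total = 0.0
    for k in range(n + 1):
        term = theta(a * q**(2 * k), p) / theta(a, p)
        term *= qfac(a, k, p, q) / qfac(q, k, p, q) * q**k
        for b in params:
            term *= qfac(b, k, p, q) / qfac(a * q / b, k, p, q)
        total += term
    return total

p, q, n = 0.05, 0.3, 2
a, b, c, d = 1.1, 0.7, 0.8, 0.9
e = a**2 * q**(n + 1) / (b * c * d)       # balancing: b c d e = a^2 q^{n+1}

# Sanity check: the Frenkel-Turaev summation for the 10V9
lhs = V(a, [b, c, d, e], n, p, q)
rhs = (qfac(a * q, n, p, q) * qfac(a * q / (b * c), n, p, q)
       * qfac(a * q / (b * d), n, p, q) * qfac(a * q / (c * d), n, p, q)) / (
      qfac(a * q / b, n, p, q) * qfac(a * q / c, n, p, q)
      * qfac(a * q / d, n, p, q) * qfac(a * q / (b * c * d), n, p, q))
assert abs(lhs - rhs) < 1e-10 * abs(rhs)

# Permutation symmetry in the b_r (manifest term by term)
b6 = a**3 * q**(n + 2) / (0.7 * 0.8 * 0.9 * 1.2 * 0.6)  # balancing for 12V11
bs = [0.7, 0.8, 0.9, 1.2, 0.6, b6]
assert abs(V(a, bs, n, p, q) - V(a, bs[::-1], n, p, q)) \
    < 1e-12 * (1 + abs(V(a, bs, n, p, q)))
```

With such a routine one can test candidate coefficients in the contiguous relations numerically before attempting a theta-function proof.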

Exercise 8.8

In this exercise you will prove the Rodrigues-type formula for the biorthogonal functions.

  (a)

    Prove the following contiguous relation by equating both sides term by term. You have to shift the summation index of one of the two series on the right-hand side first.

    $$\displaystyle\begin{array}{rcl} & & _{12}V _{11}(a;b_{1},b_{2},b_{3},b_{4},b_{5},b_{6},q^{-n-1};p,q) {}\\ & & \qquad = _{12}V _{11}(a;b_{1}/q,b_{2},b_{3},b_{4},b_{5},b_{6},q^{-n};p,q) {}\\ & & - \frac{\theta \bigg(aq,aq^{2}, \dfrac{b_{1}} {aq^{n+2}},b_{1}q^{n};p\bigg)} {\theta \bigg(aq^{n+1},aq^{n+2}, \dfrac{aq} {b_{1}}, \dfrac{b_{1}} {aq^{2}};p\bigg)}\prod _{r=2}^{6} \frac{\theta (b_{r};p)} {\theta (aq/b_{r};p)} {}\\ & & \quad \times _{12}V _{11}(aq^{2};b_{ 1},b_{2}q,b_{3}q,b_{4}q,b_{5}q,b_{6}q,q^{-n};p,q) {}\\ \end{array}$$
  (b)

    Now apply the symmetry of the ${}_{12}V_{11}$ from Exercise 7.6(b) to all three terms of the previous relation to derive the relation

    $$\displaystyle\begin{array}{rcl} \begin{array}{rl} &_{12}V _{11}(a;b_{1},b_{2},b_{3},b_{4},b_{5},b_{6},q^{-n-1};p,q) \\ &\qquad \qquad = \frac{\theta \bigg(aq, \dfrac{b_{1}b_{3}} {aq^{n+1}},b_{2}, \dfrac{aq} {b_{3}b_{4}}, \dfrac{aq} {b_{3}b_{5}}, \dfrac{aq} {b_{3}b_{6}};p\bigg)} {\theta \bigg( \dfrac{b_{1}} {aq^{n+1}},\dfrac{b_{2}} {b_{3}},\dfrac{aq} {b_{3}},\dfrac{aq} {b_{4}},\dfrac{aq} {b_{5}},\dfrac{aq} {b_{6}};p\bigg)} \\ & \qquad \qquad \times _{12}V _{11}(aq;b_{1}q,b_{2}q,b_{3},b_{4},b_{5},b_{6},q^{-n};p,q) \\ &\qquad \qquad \qquad \qquad \qquad \quad \qquad \qquad \qquad + (b_{2} \leftrightarrow b_{3}).\end{array} & & {}\\ \end{array}$$
  (c)

    Use this relation to prove (5).

Exercise 8.9

In this final exercise you will prove the squared norm formula for the biorthogonal functions. Define the difference operator

$$\displaystyle{(D_{-}(u_{0})f)(z) =\sum _{\sigma =\pm 1}\frac{\theta \bigg(u_{0}z^{\sigma },u_{0}qz^{\sigma }, \dfrac{p} {u_{0}}z^{\sigma }, \dfrac{1} {u_{0}q}z^{\sigma };p\bigg)} {\theta (z^{2\sigma };p)} f(q^{\sigma /2}z)}$$
  (a)

    Show that $D_-$ is “adjoint” to the $D_+$ operator in the sense that

    $$\displaystyle\begin{array}{rcl} & & \langle D_{+}(u_{1}: t_{1}: t_{2},t_{3},t_{4})f(z),g(z)\rangle _{t_{r},u_{r}} {}\\ & & \qquad \qquad \qquad \qquad \qquad \qquad \quad =\langle f(z),D_{-}(q^{-3/2}u_{ 2})g(z)\rangle _{q^{1/2}t_{r},q^{-1/2}u_{1},q^{-3/2}u_{2}}\:. {}\\ \end{array}$$
  (b)

    Show that the following relation holds term by term:

    $$\displaystyle\begin{array}{rcl} & & _{12}V _{11}(a;b_{1},\ldots,b_{6},q^{1-n};p,q) {}\\ & & \qquad \qquad \quad = \frac{\theta \bigg(b_{1}, \dfrac{b_{1}} {a}, \dfrac{aq^{n}} {b_{2}},q^{n}b_{2};p\bigg)} {\theta \bigg(q^{n},aq^{n}, \dfrac{b_{1}} {b_{2}}, \dfrac{b_{1}b_{2}} {a};p\bigg)}_{12}V _{11}(a;b_{1}q,b_{2},b_{3},b_{4},b_{5},b_{6},q^{-n};p,q) {}\\ & & \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad + (b_{1} \leftrightarrow b_{2})\:. {}\\ \end{array}$$
  (c)

    Apply symmetries of the ${}_{12}V_{11}$ to all terms in the above relation to derive

    $$\displaystyle\begin{array}{rcl} \begin{array}{rl} &_{12}V _{11}(a;b_{1},\ldots,b_{6},q^{1-n};p,q) \\ &\qquad \qquad = \frac{\theta \bigg(\dfrac{b_{3}} {a}, \dfrac{b_{1}b_{3}} {aq^{n+1}},\dfrac{aq^{n}} {b_{2}},\dfrac{b_{1}b_{2}} {aq}, \dfrac{a} {b_{4}}, \dfrac{a} {b_{5}}, \dfrac{a} {b_{6}};p\bigg)} {\theta \bigg(a,q^{n},\dfrac{b_{1}} {q},\dfrac{b_{3}} {b_{2}}, \dfrac{a} {b_{4}b_{5}}, \dfrac{a} {b_{4}b_{6}}, \dfrac{a} {b_{5}b_{6}};p\bigg)} \\ & \qquad \qquad \times _{12}V _{11}\bigg(\frac{a} {q}; \frac{b_1} {q}, \frac{b_2} {q},b_{3},b_{4},b_{5},b_{6},q^{-n};p,q\bigg) \\ &\qquad \qquad \qquad \qquad \qquad \quad \qquad \qquad \qquad + (b_{2} \leftrightarrow b_{3}).\end{array} & & {}\\ \end{array}$$
  (d)

    Use the above relation to show that

    $$\displaystyle\begin{array}{rcl} & & D_{-}(q^{-3/2}u_{ 2})R_{n}( \cdot;t_{1}: t_{2},t_{3},t_{4};u_{1},u_{2}) {}\\ & & \qquad \qquad \qquad \quad =\lambda _{n}R_{n-1}( \cdot;q^{1/2}t_{ 1}: q^{1/2}t_{ 2},q^{1/2}t_{ 3},q^{1/2}t_{ 4};q^{-1/2}u_{ 1},q^{-3/2}u_{ 2}) {}\\ \end{array}$$

    for some coefficients $\lambda_n$.

  (e)

    Using induction, prove the squared norm formula $\langle R_n, R_n\rangle = \cdots$ from Theorem 8.2.
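The lowering operator $D_-$ admits the same kind of direct transcription as $D_+$. The sketch below is again only a numerical aid (the parameter values are arbitrary test data and `D_minus` is a hypothetical name); it checks that $D_-$ preserves invariance under $z \mapsto 1/z$, as one expects if the $R_n$ are invariant under $z \mapsto 1/z$.

```python
def theta(x, p, terms=60):
    # theta(x; p) = prod_{j>=0} (1 - x p^j)(1 - p^{j+1}/x), truncated
    out = 1.0
    for j in range(terms):
        out *= (1 - x * p**j) * (1 - p**(j + 1) / x)
    return out

def D_minus(u0, p, q, f):
    # transcription of the operator D_-(u0) defined in Exercise 8.9
    def Df(z):
        total = 0.0
        for sigma in (+1, -1):
            zs = z**sigma
            num = (theta(u0 * zs, p) * theta(u0 * q * zs, p)
                   * theta(p * zs / u0, p) * theta(zs / (u0 * q), p))
            total += num / theta(zs**2, p) * f(q**(sigma / 2) * z)
        return total
    return Df

# D_- preserves invariance under z -> 1/z, just like D_+.
p, q, u0 = 0.05, 0.3, 1.3
f = lambda w: w + 1 / w              # a symmetric test function
Df = D_minus(u0, p, q, f)
z = 0.8
assert abs(Df(z) - Df(1 / z)) < 1e-10 * (1 + abs(Df(z)))
```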