1 Introduction

In 1813, Gauss [14] introduced a general continued fraction that represents the ratio of two \({}_2F_1\) hypergeometric functions. It is of interest because it contains continued fraction expansions of several important elementary functions and of some more transcendental ones. In 1901, Van Vleck [26] established a general result on its convergence. Gauss’s continued fraction is derived from a three-term contiguous relation for \({}_2F_1\). In 1956, using other contiguous relations, Frank [13] constructed several more (eight or so) continued fractions of a similar sort and discussed their convergence. In 2005, Borwein et al. [7] obtained an explicit bound for the error term in certain special cases of Gauss’s continued fraction. In 2011, based on Gauss’s continued fraction and other means, Colman et al. [8] developed an efficient algorithm for the validated high-precision computation of certain \({}_2F_1\) functions.

The generalized hypergeometric series of unit argument \({}_3F_2(1)\) also admits three-term contiguous relations, among which the basic twelve relations were found by Wilson [27]; see also Bailey [5]. Thus it is feasible and interesting to discuss or utilize allied continued fractions for \({}_3F_2(1)\). For instance, Zhang [29] used contiguous relations for \({}_3F_2(1)\) to give new proofs of three of Ramanujan’s elegant continued fractions for products and quotients of gamma functions, namely, entries 34, 36, and 39 in Ramanujan’s second notebook [24, Chapter 12], or in its corrected version by Berndt, Lamphere, and Wilson [6]. In a similar vein, Denis and Singh [9] dealt with entries 25 and 33 of the same notebook.

To give a further motivation for \({}_3F_2(1)\) continued fractions, we look at the special case in which one of the numerator parameters, say \(a_0\), is equal to one:

$$\begin{aligned} {}_3F_2\! \begin{pmatrix} 1, &{} a_1, &{} a_2 \\ &{} b_1, &{} b_2 \end{pmatrix} := \sum _{j=0}^{\infty } \dfrac{(a_1; \, j) \, (a_2; \, j)}{(b_1; \, j) \, (b_2; \, j)}, \qquad (a; \, j) := \dfrac{\varGamma ( a+j )}{ \varGamma ( a )}, \end{aligned}$$
(1)

where \(\varGamma ( a )\) is Euler’s gamma function. This series is well defined and non-terminating if

Table 1 Some special evaluations of the series \({}_3F_2(1, a_1, a_2; b_1, b_2)\)
$$\begin{aligned} a_1, \, a_2, \, b_1, \, b_2 \, \not \in \, \mathbb {Z}_{\le 0}, \end{aligned}$$
(2)

in which case the series is absolutely convergent if and only if

$$\begin{aligned} \mathrm {Re}\, s > 0, \qquad s := b_1+b_2-a_1-a_2-1. \end{aligned}$$
(3)

This class of infinite sums is interesting because it contains a wealth of special evaluations, some of which are presented in Table 1. It is therefore important to establish a general framework for the precise and efficient computation of the series (1). Naturally, our approach here is based on three-term contiguous relations and the allied continued fractions. As an illustration of the more general story to be developed in this article, we shall present a continued fraction expansion of the series (1) with an exact error term estimate for its approximants that exhibits exponentially fast convergence (see Theorem 1.1).
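Before any continued fraction enters the picture, the series (1) can also be evaluated directly; the following minimal sketch (in Python, using the mpmath library; the parameter values are ours, chosen only to satisfy (2) and (3)) provides a reference value against which the algorithms of this article can be tested.

```python
# Direct evaluation of the series (1); a reference point only, not the
# continued fraction algorithm of this article. Here
# s = b1 + b2 - a1 - a2 - 1 = 0.5 > 0, so the series converges (slowly).
from mpmath import mp, hyp3f2, rf, nsum, inf

mp.dps = 25

def series_1(a1, a2, b1, b2):
    """Sum over j of (a1; j)(a2; j) / ((b1; j)(b2; j)), cf. (1)."""
    return nsum(lambda j: rf(a1, j) * rf(a2, j) / (rf(b1, j) * rf(b2, j)),
                [0, inf])

a1, a2, b1, b2 = 0.5, 0.75, 1.25, 1.5
print(series_1(a1, a2, b1, b2))
print(hyp3f2(1, a1, a2, b1, b2, 1))   # the same value via mpmath's built-in
```

The built-in hyp3f2 is the more robust route; the direct summation merely illustrates definition (1).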

To state Theorem 1.1, let \(\{q(n)\}_{n=0}^{\infty }\) and \(\{r(n)\}_{n=0}^{\infty }\) be infinite sequences defined by

$$\begin{aligned} q(n) := q_i((n-i)/3), \quad r(n) := r_i((n-i)/3), \quad \hbox {for} \quad n \equiv i \mod 3, \quad i = 0, 1, 2, \end{aligned}$$
(4)
Table 2 Partial denominators and numerators of the continued fraction (6)

where \(q_i(n)\) and \(r_i(n)\) are given by the formulas in Table 2 and \(q_0(0) := 1\), \(r_0(0) := 1\), \(r_1(0) := -1\). The modulo 3 structure in (4) reflects a \(\mathbb {Z}_3\)-symmetry in the relevant contiguous relations (see Sect. 2.1). Under condition (2), all the q(n) and r(n) have non-vanishing denominators, while all the r(n) have non-vanishing numerators if and only if the parameters satisfy

$$\begin{aligned} b_i - a_j \, \not \in \, \mathbb {Z}_{\le 0}, \qquad i, j = 1, 2. \end{aligned}$$
(5)

Thus the (formal) infinite continued fraction

$$\begin{aligned} \mathop {\mathbf {K}}_{j=0}^{\infty } \, \frac{r(j)}{q(j)} = \cfrac{r(0)}{q(0) + \cfrac{r(1)}{q(1) + \cfrac{r(2)}{q(2) + \ddots }}} \end{aligned}$$
(6)

makes sense, provided that the conditions (2) and (5) are satisfied.

Theorem 1.1

If conditions (2), (3), and (5) are fulfilled, then the continued fraction (6) converges to the series (1) exponentially fast, and there exists an exact error term estimate for its approximants

as \(n \rightarrow +\infty \), where the constant \(C(a_1, a_2; b_1, b_2)\) is given by

$$\begin{aligned} C\! \begin{pmatrix} a_1, &{} a_2 \\ b_1, &{} b_2 \end{pmatrix} := \dfrac{ \pi ^{\frac{3}{2}} \, \varGamma (b_1) \, \varGamma (b_2) \, \varGamma ^2(s) }{ \varGamma (a_1) \, \varGamma (a_2) \, \varGamma ( b_1-a_1 ) \, \varGamma ( b_1 -a_2 ) \, \varGamma ( b_2 -a_1 ) \, \varGamma ( b_2 -a_2 ) }. \end{aligned}$$
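Since the constant is an explicit product of gamma values, it is trivial to evaluate; a sketch (Python/mpmath, with the same sample parameters as before, which are ours):

```python
# The constant C(a1, a2; b1, b2) of Theorem 1.1, evaluated literally from
# the closed form above; s is the parametric excess (3).
from mpmath import mp, gamma, pi

mp.dps = 25

def C(a1, a2, b1, b2):
    s = b1 + b2 - a1 - a2 - 1
    num = pi**1.5 * gamma(b1) * gamma(b2) * gamma(s)**2
    den = (gamma(a1) * gamma(a2) * gamma(b1 - a1) * gamma(b1 - a2) *
           gamma(b2 - a1) * gamma(b2 - a2))
    return num / den

print(C(0.5, 0.75, 1.25, 1.5))
```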

Theorem 1.1 is only a corollary of a specific example among the infinitely many continued fractions with exact error estimates which we shall establish in Theorems 3.2 and 3.3 (see Example 9.1). To generate infinitely many continued fractions we naturally need infinitely many contiguous relations, and hence a general theory, beyond the scope of Bailey [5] and Wilson [27], that presides over all contiguous relations for \({}_3F_2(1)\). Our previous paper [10] develops such a theory, and the present article relies substantially on its main results.

2 Contiguous and recurrence relations

The hypergeometric series of unit argument \({}_3F_2(1)\) with full five parameters is defined by

$$\begin{aligned} {}_3F_2\! \begin{pmatrix} a_0, &{} a_1, &{} a_2 \\ &{} b_1, &{} b_2 \end{pmatrix} := \sum _{j=0}^{\infty } \dfrac{(a_0; \, j) \, (a_1; \, j) \, (a_2; \, j)}{(1; \, j) \, (b_1; \, j) \, (b_2; \, j)}. \end{aligned}$$

With the notation \({\varvec{a}}= (a_0, a_1, a_2; a_3, a_4) = (a_0, a_1, a_2; b_1, b_2)\), this series is often denoted by \({}_3F_2({\varvec{a}})\). It is well defined and non-terminating as a formal sum if \({\varvec{a}}\) satisfies

$$\begin{aligned} a_0, \, a_1, \, a_2, \, b_1, \, b_2 \, \not \in \, \mathbb {Z}_{\le 0}, \end{aligned}$$
(7)

in which case \({}_3F_2({\varvec{a}})\) is absolutely convergent if and only if

$$\begin{aligned} \mathrm {Re}\, s({\varvec{a}}) > 0, \qquad s({\varvec{a}}) := b_1+b_2-a_0-a_1-a_2, \end{aligned}$$
(8)

where \(s({\varvec{a}})\) is called the parametric excess for \({}_3F_2({\varvec{a}})\). We say that \({\varvec{a}}\) is balanced if \(s({\varvec{a}}) = 0\).

In order to discuss contiguous relations, however, we find it more convenient in many respects to replace \({}_3F_2({\varvec{a}})\) by the renormalized hypergeometric series defined by

$$\begin{aligned} {}_3f_2({\varvec{a}}) := \sum _{j=0}^{\infty } \dfrac{\varGamma (a_0+j) \varGamma (a_1+j) \varGamma (a_2+j) }{\varGamma (1+j) \varGamma (b_1+j) \varGamma (b_2+j)}. \end{aligned}$$

This latter series is well defined and non-terminating as a formal sum, whenever

$$\begin{aligned} a_0, \, a_1, \, a_2 \, \not \in \, \mathbb {Z}_{\le 0} \qquad \hbox {(compare this with condition}\,(7)), \end{aligned}$$

in which case series \({}_3f_2({\varvec{a}})\) is absolutely convergent if and only if (8) is satisfied. Note that

$$\begin{aligned} {}_3f_2({\varvec{a}}) = \dfrac{\varGamma ( a_0 ) \, \varGamma ( a_1 ) \, \varGamma ( a_2 )}{\varGamma ( b_1 ) \, \varGamma ( b_2 )} \, {}_3F_2({\varvec{a}}), \end{aligned}$$
(9)

as long as both sides of Eq. (9) make sense.
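In computations it is convenient to pass between the two normalizations through (9); a minimal sketch (Python/mpmath; the sample parameters are ours and satisfy (8)):

```python
# The renormalized series 3f2(a), obtained from 3F2(a) via the relation (9).
from mpmath import mp, gamma, hyp3f2

mp.dps = 25

def f32(a0, a1, a2, b1, b2):
    pref = gamma(a0) * gamma(a1) * gamma(a2) / (gamma(b1) * gamma(b2))
    return pref * hyp3f2(a0, a1, a2, b1, b2, 1)

print(f32(0.5, 0.6, 0.7, 1.2, 1.4))   # s(a) = 0.8 > 0, so the series converges
```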

2.1 Contiguous relations

It follows from [10, Theorem 1.1] that for any distinct integer vectors \(\varvec{k}\), \(\varvec{l}\in \mathbb {Z}^5\) different from \(\varvec{0}\) there exist unique rational functions \(u({\varvec{a}})\), \(v({\varvec{a}}) \in \mathbb {Q}({\varvec{a}})\) such that

$$\begin{aligned} {}_3f_2({\varvec{a}}) = u({\varvec{a}}) \cdot {}_3f_2({\varvec{a}}+ \varvec{k}) + v({\varvec{a}}) \cdot {}_3f_2({\varvec{a}}+ \varvec{l}). \end{aligned}$$
(10)

An identity of the form (10) is called a contiguous relation for \({}_3f_2(1)\). An algorithm to calculate \(u({\varvec{a}})\) and \(v({\varvec{a}})\) explicitly is given in [10, Recipe 5.4]. According to it, one calculates the connection matrix \(A({\varvec{a}}; \varvec{k})\) as in [10, Formula (30)] and defines \(r({\varvec{a}}; \varvec{k}) \in \mathbb {Q}({\varvec{a}})\) to be its (1, 2)-entry as in [10, Formula (33)]. One calculates \(r({\varvec{a}}; \varvec{l})\) and \(r({\varvec{a}}; \varvec{l}-\varvec{k})\) in the same manner. If \(\varvec{k}\) and \(\varvec{l}\) are distinct then \(r({\varvec{a}}; \varvec{l}-\varvec{k})\) is non-zero in \(\mathbb {Q}({\varvec{a}})\) and the coefficients in (10) are represented as

$$\begin{aligned} u({\varvec{a}}) = \dfrac{r({\varvec{a}}; \varvec{l})}{ \det A({\varvec{a}};\varvec{k}) \cdot r({\varvec{a}}+\varvec{k}; \varvec{l}-\varvec{k})}, \qquad v({\varvec{a}}) = - \dfrac{r({\varvec{a}}; \varvec{k})}{ \det A({\varvec{a}}; \varvec{k}) \cdot r({\varvec{a}}+\varvec{k}; \varvec{l}-\varvec{k})}, \end{aligned}$$
(11)

as in [10, Proposition 5.3], where according to [10, Formula (32)] one has

$$\begin{aligned} \det A({\varvec{a}}; \varvec{k}) = \dfrac{(-1)^{k_0+k_1+k_2} (s({\varvec{a}})-1; \, s(\varvec{k})) \prod _{i=0}^2 (a_i; \, k_i) }{ \prod _{i=0}^2 \prod _{j=1}^2 (b_j-a_i; \, l_j-k_i) }. \end{aligned}$$
(12)

In order to formulate our main results in Sect. 3.2, we need one more fact about the structure of \(r({\varvec{a}}; \varvec{k})\) which is not discussed in [10]. Given a vector \(\varvec{k}= (k_0,k_1,k_2; l_1, l_2) \in \mathbb {Z}^5\), let

$$\begin{aligned} \langle {\varvec{a}}; \, \varvec{k}\rangle _{\pm } := \prod _{i=0}^2 \prod _{j=1}^2 (b_j-a_i; \, (l_j-k_i)_{\pm }), \qquad |\!| \varvec{k}|\!|_+ := \sum _{i=0}^2 \sum _{j=1}^2 (l_j-k_i)_+, \end{aligned}$$

where \(m_{\pm } := \max \{ \pm m, 0\}\). Note that \(\prod _{i=0}^2 \prod _{j=1}^2 (b_j-a_i; \, l_j-k_i) = \langle {\varvec{a}}; \varvec{k}\rangle _+/\langle {\varvec{a}}+ \varvec{k}; \varvec{k}\rangle _-\).
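Formula (12) is completely explicit, so the determinant of the connection matrix can be evaluated numerically; a sketch (Python/mpmath), with the Pochhammer symbol \((a; \, k) = \varGamma (a+k)/\varGamma (a)\) realized by the rising factorial (the sample seed is ours and is balanced):

```python
# det A(a; k) per Formula (12); rf(x, n) = Gamma(x+n)/Gamma(x) = (x; n).
from mpmath import mp, rf

mp.dps = 25

def detA(a, k):
    a0, a1, a2, b1, b2 = a
    k0, k1, k2, l1, l2 = k
    s_a = b1 + b2 - a0 - a1 - a2          # parametric excess s(a)
    s_k = l1 + l2 - k0 - k1 - k2          # s(k); = 0 for balanced seeds
    num = (-1)**(k0 + k1 + k2) * rf(s_a - 1, s_k)
    for ai, ki in [(a0, k0), (a1, k1), (a2, k2)]:
        num *= rf(ai, ki)
    den = 1
    for ai, ki in [(a0, k0), (a1, k1), (a2, k2)]:
        for bj, lj in [(b1, l1), (b2, l2)]:
            den *= rf(bj - ai, lj - ki)
    return num / den

print(detA((0.5, 0.6, 0.7, 1.2, 1.4), (1, 1, 1, 2, 1)))
```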

Lemma 2.1

For any non-zero vector \(\varvec{k}\in \mathbb {Z}_{\ge 0}^5\) with \(s(\varvec{k}) = 0\) there exists a non-zero polynomial \(\rho ({\varvec{a}}; \varvec{k}) \in \mathbb {Q}[{\varvec{a}}]\) such that the rational function \(r({\varvec{a}}; \varvec{k})\) can be written as

$$\begin{aligned} r({\varvec{a}}; \varvec{k}) = - \frac{ \{ s({\varvec{a}}) -1 \} \rho ({\varvec{a}}; \varvec{k}) }{\langle {\varvec{a}}; \, \varvec{k}\rangle _+}, \qquad \deg \rho ({\varvec{a}}; \varvec{k}) \le |\!| \varvec{k}|\!|_+ - 2. \end{aligned}$$
(13)

Proof

A non-zero polynomial \(p({\varvec{a}}) \in \mathbb {Q}[{\varvec{a}}]\) is said to be a denominator of a rational function \(r({\varvec{a}}) \in \mathbb {Q}({\varvec{a}})\) if the product \(p({\varvec{a}}) \, r({\varvec{a}})\) becomes a polynomial. A denominator of the least degree, which is unique up to constant multiples, is referred to as the reduced denominator. Any denominator is divisible by the reduced denominator in \(\mathbb {Q}[{\varvec{a}}]\). A denominator of a matrix with entries in \(\mathbb {Q}({\varvec{a}})\) is, by definition, a common denominator of those entries.

For \(i = 0, 1, 2\), \(\mu = 1, 2\), let \(\varvec{e}_{\mu }^i := (\delta _{0i}, \delta _{1i}, \delta _{2i}; \delta _{1\mu }, \delta _{2\mu })\), where \(\delta _{*\star }\) is Kronecker’s delta. A vector of this form is said to be basic. A product of contiguous matrices in [10, Table 2] yields

$$\begin{aligned} A({\varvec{a}}; \varvec{e}_{\mu }^i) = \frac{1}{ (b_{\mu }-a_j)(b_{\mu }-a_k) } \begin{pmatrix} a_i(b_{\mu }-a_j-a_k) &{} s({\varvec{a}})-1 \\ a_i a_j a_k &{} (a_i+1) b_{\mu } + a_j a_k - b_1 b_2 \end{pmatrix}, \end{aligned}$$

where \(\{i, j, k\} = \{0, 1, 2\}\). Any \(\varvec{k}= (k_0, k_1, k_2; l_1, l_2) \in \mathbb {Z}_{\ge 0}^5\) with \(s(\varvec{k}) = 0\) admits a decomposition \(\varvec{k}= \varvec{v}_l + \cdots + \varvec{v}_1\) with each \(\varvec{v}_i\) basic, so \(A({\varvec{a}}; \varvec{k})\) can be computed by the chain rule

$$\begin{aligned} A({\varvec{a}}; \varvec{k}) = A({\varvec{a}}+\varvec{v}_{l-1}+\cdots +\varvec{v}_1; \varvec{v}_l) \cdots A({\varvec{a}}+\varvec{v}_1; \varvec{v}_2) A({\varvec{a}}; \varvec{v}_1). \end{aligned}$$
(14)

Thus \(A({\varvec{a}}; \varvec{k})\) has a denominator each irreducible factor of which is of the form \(b_{\mu } - a_i + \hbox {an integer}\). A factor of this form is said to be of type \(b_{\mu }- a_i\) and the product of all factors of this type is referred to as the \(b_{\mu }-a_i\) component of the denominator.

Claim

For each \(i = 0,1,2\) and \(\mu = 1,2\) the matrix \(A({\varvec{a}}; \varvec{k})\) admits a denominator whose \(b_{\mu }- a_i\) component is exactly the factorial function \((b_{\mu }-a_i; (l_{\mu }-k_i)_+)\).

To show the claim we may assume \(i = 0\) and \(\mu = 1\) without loss of generality.

(1) If \(m_0 := k_0-l_1 \ge 0\), then take the decomposition \(\varvec{k}= l_1 \varvec{e}_1^0 + m_0 \varvec{e}_2^0 + k_1 \varvec{e}_2^1 + k_2 \varvec{e}_2^2\).

(2) If \(m_1 := l_1-k_0 > 0\), then take the decomposition \(\varvec{k}= k_{12} \varvec{e}_2^1 + k_{22} \varvec{e}_2^2 + k_0 \varvec{e}_1^0 + k_{11} \varvec{e}_1^1 + k_{21} \varvec{e}_1^2\), where \(k_{ij}\) are non-negative integers such that \(k_1 = k_{11}+k_{12}\), \(k_2 = k_{21}+k_{22}\), \(m_1 = k_{11}+k_{21}\) and \(l_2 = k_{12} + k_{22}\); such \(k_{ij}\) exist thanks to \(\varvec{k}\in \mathbb {Z}_{\ge 0}^5\) and \(s(\varvec{k}) = 0\).

We use the fact that \(A({\varvec{a}}; m \varvec{e}_{\mu }^i)\) has a denominator \((b_{\mu }-a_j; m)(b_{\mu }-a_k; m)\), where \(\{i, j, k\} = \{0, 1, 2\}\), which follows by induction on \(m \in \mathbb {Z}_{\ge 0}\). In case (1), the decomposition of \(\varvec{k}\) and the chain rule (14) imply that \(A({\varvec{a}}; \varvec{k})\) has a denominator without \(b_1-a_0\) component. In case (2), the decomposition of \(\varvec{k}\) leads to the product \(A({\varvec{a}}; \varvec{k}) = A_2({\varvec{a}}; \varvec{k}) A_1({\varvec{a}}; \varvec{k})\) with

$$\begin{aligned}&A_2({\varvec{a}}; \varvec{k}) := A({\varvec{a}}+k_{11} \varvec{e}_1^1 + k_{21} \varvec{e}_1^2; \, k_{12} \varvec{e}_2^1 + k_{22} \varvec{e}_2^2 + k_1 \varvec{e}_1^0), \\&A_1({\varvec{a}}; \varvec{k}) := A({\varvec{a}}; k_{11} \varvec{e}_1^1 + k_{21} \varvec{e}_1^2). \end{aligned}$$

Observe that \(A_1({\varvec{a}}; \varvec{k})\) has a denominator whose \(b_1-a_0\) component is \((b_1-a_0; k_{11}+k_{21}) = (b_1-a_0; m_1)\), while \(A_2({\varvec{a}}; \varvec{k})\) has a denominator without \(b_1-a_0\) component. So \(A({\varvec{a}}; \varvec{k})\) has a denominator whose \(b_1-a_0\) component is \((b_1-a_0; m_1)\). The claim is thus verified.

For each entry of \(A({\varvec{a}}; \varvec{k})\), the Claim implies that for \(i = 0,1,2\) and \(\mu = 1, 2\) the \(b_{\mu }-a_i\) component of its reduced denominator must divide the factorial \((b_{\mu }-a_i; \, (l_{\mu }-k_i)_+)\), so the reduced denominator itself must divide the product \(\langle {\varvec{a}}; \varvec{k}\rangle _+ = \prod _{i=0}^2 \prod _{\mu =1}^2 (b_{\mu }-a_i; \, (l_{\mu }-k_i)_+)\). Thus one can take \(\langle {\varvec{a}}; \varvec{k}\rangle _+\) as a denominator of \(A({\varvec{a}}; \varvec{k})\). The index of a rational function is the degree of its numerator minus that of its denominator. An induction on the length l of the product (14) shows that the index of the (i, j)-entry of \(A({\varvec{a}}; \varvec{k})\) is at most \(i-j\). Another induction shows that the (1, 2)-entry is divisible by \(s({\varvec{a}})-1\). All these facts lead to the expression (13) for \(r({\varvec{a}}; \varvec{k})\). \(\square \)
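The chain rule (14) also gives a practical way to compute \(A({\varvec{a}}; \varvec{k})\): decompose \(\varvec{k}\) into basic vectors and multiply the \(2\times 2\) matrices displayed in the proof. A sketch (Python/mpmath; the helper names and the sample decomposition are ours):

```python
# A(a; k) via the chain rule (14), multiplying basic matrices A(a; e_mu^i).
from mpmath import mp, matrix, eye

mp.dps = 25

def basic_A(a, i, mu):
    """The 2x2 matrix A(a; e_mu^i) displayed in the proof of Lemma 2.1."""
    a0, a1, a2, b1, b2 = a
    ups = [a0, a1, a2]
    j, k = [t for t in (0, 1, 2) if t != i]
    ai, aj, ak = ups[i], ups[j], ups[k]
    bmu = b1 if mu == 1 else b2
    s = b1 + b2 - a0 - a1 - a2
    M = matrix([[ai * (bmu - aj - ak), s - 1],
                [ai * aj * ak, (ai + 1) * bmu + aj * ak - b1 * b2]])
    return M / ((bmu - aj) * (bmu - ak))

def chain_A(a, steps):
    """A(a; v_1 + ... + v_l) for basic steps [(i_1, mu_1), ..., (i_l, mu_l)]."""
    a = list(a)
    A = eye(2)
    for i, mu in steps:
        A = basic_A(a, i, mu) * A     # left-multiply as in (14) ...
        a[i] += 1                     # ... then shift a by e_mu^i
        a[2 + mu] += 1
    return A

print(chain_A([0.5, 0.6, 0.7, 1.2, 1.4], [(0, 1), (1, 2)]))  # k = (1,1,0; 1,1)
```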

2.2 Symmetry and dichotomy

Let \(G = S_3 \times S_2\) be the group acting on \({\varvec{a}}= (a_0,a_1,a_2; b_1, b_2)\) by permuting \((a_0, a_1, a_2)\) and \((b_1, b_2)\) separately. It is obvious that \({}_3f_2({\varvec{a}})\) is invariant under this action, so that any element \(\tau \in G\) transforms the contiguous relation (10) into a second one

$$\begin{aligned} {}_3f_2({\varvec{a}}) = {}^{\tau }\!u({\varvec{a}}) \cdot {}_3f_2({\varvec{a}}+ \tau (\varvec{k})) + {}^{\tau }\!v({\varvec{a}}) \cdot {}_3f_2({\varvec{a}}+ \tau (\varvec{l})), \end{aligned}$$
(15)

where \({}^{\tau }\!\varphi ({\varvec{a}}) := \varphi (\tau ^{-1}({\varvec{a}}))\) is the induced action of \(\tau \) on a function \(\varphi ({\varvec{a}})\).

Take an element \(\sigma \in G\) such that \(\sigma ^3\) is the identity and set

$$\begin{aligned} \varvec{l}:= \varvec{k}+ \sigma (\varvec{k}), \qquad \varvec{p}:= \varvec{k}+ \sigma (\varvec{l}) = \varvec{k}+ \sigma (\varvec{k}) + \sigma ^2(\varvec{k}). \end{aligned}$$
(16)

Formula (15) with \(\tau = \sigma \) followed by a shift \({\varvec{a}}\mapsto {\varvec{a}}+ \varvec{k}\) yields

$$\begin{aligned} {}_3f_2({\varvec{a}}+\varvec{k}) = {}^{\sigma }\! u({\varvec{a}}+\varvec{k}) \cdot {}_3f_2({\varvec{a}}+ \varvec{l}) + {}^{\sigma }\! v({\varvec{a}}+ \varvec{k}) \cdot {}_3f_2({\varvec{a}}+ \varvec{p}), \end{aligned}$$
(17)

and similarly Formula (15) with \(\tau = \sigma ^2\) followed by another shift \({\varvec{a}}\mapsto {\varvec{a}}+ \varvec{l}\) gives

$$\begin{aligned} {}_3f_2({\varvec{a}}+\varvec{l}) = {}^{\sigma ^2}\! u({\varvec{a}}+\varvec{l}) \cdot {}_3f_2({\varvec{a}}+ \varvec{p}) + {}^{\sigma ^2}\! v({\varvec{a}}+ \varvec{l}) \cdot {}_3f_2({\varvec{a}}+ \varvec{p}+ \varvec{k}). \end{aligned}$$
(18)

If \(\varvec{k}\) is non-zero, non-negative (\(\varvec{k}\in \mathbb {Z}_{\ge 0}^5\)), and balanced (\(s(\varvec{k}) = 0\)), then so are \(\varvec{l}-\varvec{k}= \sigma (\varvec{k})\) and \(\varvec{l}\) by definition (16); hence Lemma 2.1 applies not only to \(\varvec{k}\) but also to \(\sigma (\varvec{k})\) and \(\varvec{l}\). Putting Formulas (12) and (13) for these vectors into Formula (11), we have

$$\begin{aligned} u({\varvec{a}})&= \dfrac{(-1)^{k_0+k_1+k_2} \cdot \rho ({\varvec{a}}; \varvec{l}) \cdot \langle {\varvec{a}}; \varvec{k}\rangle _+ \cdot \langle {\varvec{a}}+\varvec{k}; \sigma (\varvec{k}) \rangle _+ }{ \rho ({\varvec{a}}+\varvec{k}; \sigma (\varvec{k})) \cdot \langle {\varvec{a}}; \varvec{l}\rangle _+ \cdot \langle {\varvec{a}}+\varvec{k}; \varvec{k}\rangle _- \prod _{i=0}^2 (a_i; \, k_i) }, \end{aligned}$$
(19a)
$$\begin{aligned} v({\varvec{a}})&= - \dfrac{(-1)^{k_0+k_1+k_2} \cdot \rho ({\varvec{a}}; \varvec{k}) \cdot \langle {\varvec{a}}+\varvec{k}; \sigma (\varvec{k}) \rangle _+ }{ \rho ({\varvec{a}}+\varvec{k}; \sigma (\varvec{k})) \cdot \langle {\varvec{a}}+\varvec{k}; \varvec{k}\rangle _- \prod _{i=0}^2 (a_i; \, k_i) }. \end{aligned}$$
(19b)

Definition 2.2

For any non-zero vector \(\varvec{k}\in \mathbb {Z}_{\ge 0}^5\) with \(s(\varvec{k}) = 0\), we consider two cases.

(1) The case is said to be of straight type when \(\sigma \) is the identity, \(\varvec{l}= 2 \varvec{k}\) and \(\varvec{p}= 3 \varvec{k}\).

(2) The case is said to be of twisted type when \(\sigma \) is a cyclic permutation of the upper parameters \((a_0, a_1, a_2)\) that acts trivially on the lower parameters \((b_1, b_2)\),

$$\begin{aligned} \varvec{k}= \begin{pmatrix} k_0, &{} k_1, &{} k_2 \\ &{} l_1, &{} l_2 \end{pmatrix}, \qquad \varvec{p}= \begin{pmatrix} p, &{} p, &{} p \\ &{} 3 l_1, &{} 3 l_2 \end{pmatrix}, \end{aligned}$$
(20)

with \(p := k_0+k_1+k_2 = l_1 + l_2\), and if \(\sigma (a_0,a_1,a_2; b_1, b_2) = (a_{\lambda }, a_{\mu }, a_{\nu }; b_1, b_2)\), then

$$\begin{aligned} \varvec{l}= \begin{pmatrix} k_0 + k_{\lambda }, &{} k_1 + k_{\mu }, &{} k_2 + k_{\nu } \\ &{} 2 l_1, &{} 2 l_2 \end{pmatrix}, \end{aligned}$$
(21)

where the index triple \((\lambda , \mu , \nu )\) is either (2, 0, 1) or (1, 2, 0).

This dichotomy is due solely to the restriction of our attention to symmetries \(\sigma \) such that \(\sigma ^3 = 1\). Taking other symmetries from \(S_3 \times S_2\) would lead to other patterns of twists. It is an interesting problem to treat some of the other cases or to exhaust all possible cases.
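In both cases the passage from the seed vector \(\varvec{k}\) to \(\varvec{l}\) and \(\varvec{p}\) is pure bookkeeping via (16); a sketch (the helper names and the sample seed are ours):

```python
# l = k + sigma(k) and p = k + sigma(k) + sigma^2(k) per (16), for the
# straight case (sigma = id) and the twisted case (20)-(21).
def sigma_twist(v, perm):
    """Permute the upper entries of v = (k0,k1,k2,l1,l2) by
    perm = (lam, mu, nu), which is (2,0,1) or (1,2,0)."""
    k0, k1, k2, l1, l2 = v
    ks = (k0, k1, k2)
    lam, mu, nu = perm
    return (ks[lam], ks[mu], ks[nu], l1, l2)

def derived_vectors(k, perm=None):
    sig = (lambda v: v) if perm is None else (lambda v: sigma_twist(v, perm))
    sk = sig(k)                                      # sigma(k)
    l = tuple(x + y for x, y in zip(k, sk))          # l = k + sigma(k)
    p = tuple(x + y for x, y in zip(l, sig(sk)))     # p = l + sigma^2(k)
    return l, p

k = (2, 1, 0, 2, 1)                   # non-negative and balanced: s(k) = 0
print(derived_vectors(k))             # straight: l = 2k, p = 3k
print(derived_vectors(k, (2, 0, 1)))  # twisted: p = (3,3,3; 6,3), cf. (20)
```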

2.3 Recurrence relations

In the situation of Definition 2.2, the shifts \({\varvec{a}}\mapsto {\varvec{a}}+ n \varvec{p}\), \(n \in \mathbb {Z}_{\ge 0}\), in the contiguous relation (10) and its companions (17) and (18) induce a system of recurrence relations

$$\begin{aligned} f_0(n)&= q_0(n) \cdot f_1(n) + r_1(n) \cdot f_2(n), \end{aligned}$$
(22a)
$$\begin{aligned} f_1(n)&= q_1(n) \cdot f_2(n) + r_2(n) \cdot f_0(n+1), \end{aligned}$$
(22b)
$$\begin{aligned} f_2(n)&= q_2(n) \cdot f_0(n+1) + r_0(n+1) \cdot f_1(n+1), \end{aligned}$$
(22c)

for \(n \in \mathbb {Z}_{\ge 0}\), where the sequences \(f_i(n)\), \(q_i(n)\) and \(r_i(n)\) are defined by

$$\begin{aligned}&f_0(n) := {}_3f_2({\varvec{a}}+ n \varvec{p}), \quad q_0(n) := u({\varvec{a}}+ n \varvec{p}), \quad r_1(n) := v({\varvec{a}}+ n \varvec{p}), \\&f_1(n) := {}_3f_2({\varvec{a}}+ n \varvec{p}+ \varvec{k}), \quad q_1(n) := {}^{\sigma }\!u({\varvec{a}}+ n \varvec{p}+ \varvec{k}), \\&\quad r_2(n) := {}^{\sigma }\!v({\varvec{a}}+ n \varvec{p}+ \varvec{k}), \\&f_2(n) := {}_3f_2({\varvec{a}}+ n \varvec{p}+ \varvec{l}), \quad q_2(n) := {}^{\sigma ^2}\!u({\varvec{a}}+ n \varvec{p}+ \varvec{l}),\\&\quad r_0(n) := {}^{\sigma ^2}\!v({\varvec{a}}+ (n-1) \varvec{p}+ \varvec{l}). \end{aligned}$$

In view of the modulo 3 structure in (22), it is convenient to set

$$\begin{aligned} f(n)&:= f_i((n-i)/3), \end{aligned}$$
(23a)
$$\begin{aligned} q(n)&:= q_i((n-i)/3), \qquad \hbox {for} \quad n \equiv i \mod 3, \quad i = 0,1,2. \end{aligned}$$
(23b)
$$\begin{aligned} r(n)&:= r_i((n-i)/3). \end{aligned}$$
(23c)

Then the system (22) is unified into a single three-term recurrence relation

$$\begin{aligned} f(n) = q(n) \cdot f(n+1) + r(n+1) \cdot f(n+2), \qquad n \in \mathbb {Z}_{\ge 0}. \end{aligned}$$
(24)

If \(\varvec{k}\) is non-negative, \(\varvec{k}\in \mathbb {Z}_{\ge 0}^5\), then so are \(\varvec{l}\) and \(\varvec{p}\) by Formula (16); hence all f(n), \(n \in \mathbb {Z}_{\ge 0}\), are well defined under the single assumption (7). If moreover \(\varvec{k}\) is balanced, \(s(\varvec{k}) = 0\), then so are \(\varvec{l}\) and \(\varvec{p}\), again by Formula (16); hence all f(n), \(n \in \mathbb {Z}_{\ge 0}\), have the same parametric excess, and all these series are convergent under the single assumption (8). In what follows we refer to \(\varvec{k}\) as the seed vector and to \(\varvec{p}\) as the shift vector. We remark that \(\varvec{k}\) is primary in the sense that \(\varvec{l}\) and \(\varvec{p}\) are derived from \(\varvec{k}\) by the rule (16), but \(\varvec{p}\) is likewise important because it is \(\varvec{p}\) rather than \(\varvec{k}\) that is directly responsible for the asymptotic behavior of the sequence f(n).
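The splicing (23) itself is also pure bookkeeping; a sketch with placeholder branch functions (in the article the branches come from (22)):

```python
# The modulo-3 interleaving (23): h(n) = h_i((n - i)/3) for n == i (mod 3).
def interleave(h0, h1, h2):
    branches = (h0, h1, h2)
    def h(n):
        i = n % 3
        return branches[i]((n - i) // 3)
    return h

# placeholder branches, for illustration only:
q = interleave(lambda m: ('q0', m), lambda m: ('q1', m), lambda m: ('q2', m))
print([q(n) for n in range(6)])
# -> [('q0', 0), ('q1', 0), ('q2', 0), ('q0', 1), ('q1', 1), ('q2', 1)]
```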

2.4 Simultaneousness

In place of the series \({}_3f_2({\varvec{a}})\), we consider another series

$$\begin{aligned} {}_3g_2({\varvec{a}}) = {}_3g_2\begin{pmatrix} a_0, &{} a_1, &{} a_2 \\ &{} b_1, &{} b_2 \end{pmatrix} := {}_3f_2 \! \begin{pmatrix} a_0, &{} a_0-b_1+1, &{} a_0-b_2+1 \\ &{} a_0-a_1+1, &{} a_0-a_2+1 \end{pmatrix}. \end{aligned}$$
(25)

Let \(\varvec{k}\), \(\varvec{l}\), and \(\varvec{p}\) be vectors as in (16) such that \(s(\varvec{k}) = 0\) and hence \(s(\varvec{l}) = s(\varvec{p}) = 0\). By assertion (3) of [10, Theorem 1.1] the contiguous relation (10) for \({}_3f_2({\varvec{a}})\) is simultaneously satisfied by \({}_3h_2({\varvec{a}}) := \exp ( \pi \sqrt{-1} \, s({\varvec{a}})) \, {}_3g_2({\varvec{a}})\), but the factor \(\exp ( \pi \sqrt{-1} \, s({\varvec{a}}))\) is immaterial here since \(s(\varvec{k}) = s(\varvec{l}) = 0\); thus (10) is satisfied by \({}_3g_2({\varvec{a}})\) itself. Let \(g_i(n)\) and g(n) be defined from \({}_3g_2({\varvec{a}})\) in the same manner as \(f_i(n)\) and f(n) are defined from \({}_3f_2({\varvec{a}})\) in Sect. 2.3, that is, let

$$\begin{aligned}&g_0(n) := {}_3g_2({\varvec{a}}+ n \varvec{p}), \quad g_1(n) := {}_3g_2({\varvec{a}}+ n \varvec{p}+ \varvec{k}), \quad g_2(n) := {}_3g_2({\varvec{a}}+ n \varvec{p}+ \varvec{l}), \end{aligned}$$
(26a)
$$\begin{aligned}&g(n) := g_i((n-i)/3) \qquad \hbox {for} \quad n \equiv i \mod 3, \quad i = 0,1,2. \end{aligned}$$
(26b)

Then the sequences f(n) in (23a) and g(n) in (26b) solve the same recurrence relation (24). With this observation we are now ready to consider continued fractions.

3 Continued fractions

First, we present a general principle to establish an exact error estimate for the approximants to a continued fraction. Next, we announce the final goal of this article, Theorems 3.2 and 3.3, which will be achieved by the principle after a rather long journey of asymptotic analysis.

3.1 A general error estimate

Let \(\{ q(n) \}_{n=0}^{\infty }\) and \(\{ r(n) \}_{n=1}^{\infty }\) be sequences of complex numbers such that r(n) is non-zero for every \(n \in \mathbb {N}:= \mathbb {Z}_{\ge 1}\). We consider a sequence of finite continued fractions

$$\begin{aligned} q(0) + \cfrac{r(1)}{q(1) + \cfrac{r(2)}{q(2) + \cfrac{\ddots }{\ddots \, + \cfrac{r(m)}{q(m)}}}}, \qquad m \in \mathbb {Z}_{\ge 0}. \end{aligned}$$
(27)

The convergence of (27) can be described in terms of the three-term recurrence relation

$$\begin{aligned} x(n) = q(n) \cdot x(n+1) + r(n+1) \cdot x(n+2), \qquad n \in \mathbb {Z}_{\ge 0}. \end{aligned}$$
(28)

A non-trivial solution X(n) to Eq. (28) is said to be recessive if \(X(n)/Y(n) \rightarrow 0\) as \(n \rightarrow +\infty \) for any solution Y(n) not proportional to X(n). A recessive solution, if it exists, is unique up to non-zero constant multiples. Any non-recessive solution is said to be dominant.

Theorem 3.1

(Pincherle [23]) The sequence (27) is convergent if and only if the recurrence equation (28) has a recessive solution X(n), in which case (27) converges to the ratio X(0)/X(1).

We refer to Gil et al. [16], Jones and Thron [18], and Gautschi [15] for more accessible sources on Pincherle’s theorem. Let us make this theorem more quantitative. For any non-trivial solution x(n) to Eq. (28) and any positive integer \(m \in \mathbb {N}\) one has

$$\begin{aligned} \frac{x(0)}{x(1)} = q(0) + \cfrac{r(1)}{q(1) + \cfrac{r(2)}{q(2) + \cfrac{\ddots }{\ddots \, + \cfrac{r(m)}{q(m) + r(m+1) \, x(m+2)/x(m+1)}}}}. \end{aligned}$$

Thus if x(n; m) is a non-trivial solution to (28) that vanishes at \(n = m+2\), then

$$\begin{aligned} \frac{x(0; m)}{x(1; m)} = q(0) + \cfrac{r(1)}{q(1) + \cfrac{r(2)}{q(2) + \cfrac{\ddots }{\ddots \, + \cfrac{r(m)}{q(m)}}}}, \end{aligned}$$

which is exactly the m-th member of the sequence (27). One can express the solution x(n; m) in the form

$$\begin{aligned} x(n; m) = X(n) - R(m) \cdot Y(n), \qquad R(m) := \frac{X(m+2)}{Y(m+2)}, \qquad m, n \in \mathbb {Z}_{\ge 0}, \end{aligned}$$

where X(n) and Y(n) are recessive and dominant solutions to (28), respectively, so that \(R(m) \rightarrow 0\) as \(m \rightarrow +\infty \). Hence if X(0) is non-zero then so is x(0; m) for every \(m \gg 0\) and

$$\begin{aligned} \frac{X(1)}{X(0)} - \frac{x(1; m)}{x(0; m)}= & {} \frac{X(1)}{X(0)} - \frac{X(1) - R(m) \cdot Y(1)}{X(0) - R(m) \cdot Y(0)}\\= & {} \frac{\omega (0) \cdot R(m) }{X(0)^2 \, \{1 - R(m) \cdot Y(0)/X(0) \} }, \end{aligned}$$

where \(\omega (n) := X(n) \cdot Y(n+1) - X(n+1) \cdot Y(n)\) is the Casoratian of X(n) and Y(n), thus

$$\begin{aligned} \frac{X(1)}{X(0)} - \frac{x(1; m)}{x(0; m)} = \frac{\omega (0)}{X(0)^2} \cdot R(m) \cdot \left\{ 1 + O( R(m) ) \right\} \qquad \hbox {as} \quad m \rightarrow +\infty . \end{aligned}$$
(29)
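The terminating solution x(n; m) also yields the standard backward-recurrence algorithm for the approximants: impose \(x(m+2; m) = 0\), \(x(m+1; m) = 1\), and run (28) downwards. A sketch (the function name is ours):

```python
# m-th member of (27), computed as x(0; m)/x(1; m) by backward recurrence.
def approximant(q, r, m):
    x2, x1 = 0.0, 1.0                # x(m+2; m) = 0, x(m+1; m) = 1
    for n in range(m, -1, -1):       # x(n) = q(n) x(n+1) + r(n+1) x(n+2)
        x2, x1 = x1, q(n) * x1 + r(n + 1) * x2
    return x1 / x2                   # x(0; m) / x(1; m)

# sanity check: q = r = 1 gives 1 + 1/(1 + 1/(...)), the golden ratio
print(approximant(lambda n: 1, lambda n: 1, 40))   # 1.618033988...
```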

In order to apply this general estimate to continued fractions for \({}_3f_2(1)\), we want to set up a situation in which the sequences f(n) in (23a) and g(n) in (26b) are recessive and dominant solutions, respectively, to the recurrence relation (24). We present in Sect. 4 a sufficient condition for f(n) to be recessive, while in Sect. 6 we impose a further constraint that ensures the dominance of g(n). In fact, upon assuming those conditions, we deduce asymptotic representations for f(n) and g(n) showing that they are indeed recessive and dominant, respectively. The asymptotic analysis there is used not only to prove this qualitative assertion but also to obtain the precise asymptotic behavior of the ratio \(R(n) = f(n+2)/g(n+2)\). We also have to evaluate the initial term \(\omega (0)\) of the Casoratian of f(n) and g(n); this final task is carried out in Sect. 7.

3.2 Main results on continued fractions

Let \(\{ q(n)\}_{n=0}^{\infty }\) and \(\{r(n)\}_{n=1}^{\infty }\) be sequences (23b) and (23c) derived from \(u({\varvec{a}})\) and \(v({\varvec{a}})\) as in Formula (19). Consider the continued fraction \(\mathbf {K}_{j=0}^{\infty } \, r(j)/q(j)\), where \(r(0) := 1\) by convention. It is said to be well defined if q(j) and r(j) take finite values with r(j) non-zero for every \(j \ge 0\).

Let \(\mathcal {S}(\mathbb {R})\) be the set of all real vectors \(\varvec{p}= (p_0, p_1, p_2; q_1, q_2) \in \mathbb {R}^5\) such that

$$\begin{aligned} s(\varvec{p}) = 0; \qquad p_1, \, p_2 \le p_0< q_1 \le q_2 < p_1 + p_2. \end{aligned}$$
(30)

Note that (30) in particular implies \(p_1, p_2 > 0\) and that \(\mathcal {S}(\mathbb {R})\) is a 4-dimensional polyhedral convex cone defined by a linear equation and a set of linear inequalities. It is the space to which the shift vector \(\varvec{p}\) in (16) should belong; or rather, as an integer vector, \(\varvec{p}\) should lie in

$$\begin{aligned} \mathcal {S}(\mathbb {Z}) := \mathcal {S}(\mathbb {R}) \cap \mathbb {Z}^5. \end{aligned}$$
(31)

The following functions of \(\varvec{p}\in \mathcal {S}(\mathbb {R})\) play important roles at several places in this article:

$$\begin{aligned} D(\varvec{p})&:= \dfrac{(-1)^{q_1+q_2} p_0^{p_0} p_1^{p_1} p_2^{p_2}}{\prod _{i=0}^2 \prod _{j=1}^2 (q_j - p_i)^{q_j - p_i}}, \end{aligned}$$
(32)
$$\begin{aligned} \varDelta (\varvec{p})&:= e_1^2 e_2^2 + 18 \, e_1 e_2 e_3 - 2 \, e_2^3 - 8 \, e_1^3 e_3 - 27 \, e_3^2, \end{aligned}$$
(33)

where \(e_1 := p_0+p_1+p_2 = q_1+q_2\), \(e_2 := p_0 p_1 + p_1 p_2 + p_2 p_0 + q_1 q_2\), and \(e_3 := p_0 p_1 p_2\). We remark that \(\varDelta (\varvec{p})\) is the discriminant (up to a positive constant multiple) of the cubic equation

$$\begin{aligned} (x-p_0)(x-p_1)(x-p_2) + x (x-q_1) (x-q_2) = 0, \end{aligned}$$

which plays an important role in Sect. 6.2. Moreover, for \(\varvec{k}= (k_0, k_1, k_2; l_1, l_2) \in \mathbb {Z}^5\) we put

$$\begin{aligned} \gamma ({\varvec{a}}; \varvec{k}) := \dfrac{\varGamma (a_0) \varGamma (a_1) \varGamma (a_2) \varGamma ^2(s({\varvec{a}})) }{ \prod _{i=0}^2 \prod _{j=1}^2 \varGamma (b_j-a_i+(l_j-k_i)_+)}. \end{aligned}$$
(34)
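All three quantities are elementary to evaluate; a sketch (Python/mpmath; the sample vector \(\varvec{p}= (2,2,2;3,3)\) is ours and lies in \(\mathcal {S}(\mathbb {Z})\)):

```python
# D(p), Delta(p), and gamma(a; k) per (32), (33), (34).
from mpmath import mp, gamma

mp.dps = 25

def D(p):
    p0, p1, p2, q1, q2 = p
    num = (-1)**(q1 + q2) * p0**p0 * p1**p1 * p2**p2
    den = 1
    for pi_ in (p0, p1, p2):
        for qj in (q1, q2):
            den *= (qj - pi_)**(qj - pi_)
    return num / den

def Delta(p):
    p0, p1, p2, q1, q2 = p
    e1 = p0 + p1 + p2                       # = q1 + q2 when s(p) = 0
    e2 = p0*p1 + p1*p2 + p2*p0 + q1*q2
    e3 = p0*p1*p2
    return e1**2*e2**2 + 18*e1*e2*e3 - 2*e2**3 - 8*e1**3*e3 - 27*e3**2

def gamma_factor(a, k):
    a0, a1, a2, b1, b2 = a
    k0, k1, k2, l1, l2 = k
    s = b1 + b2 - a0 - a1 - a2
    num = gamma(a0) * gamma(a1) * gamma(a2) * gamma(s)**2
    den = 1
    for ai, ki in [(a0, k0), (a1, k1), (a2, k2)]:
        for bj, lj in [(b1, l1), (b2, l2)]:
            den *= gamma(bj - ai + max(lj - ki, 0))
    return num / den

p = (2, 2, 2, 3, 3)          # s(p) = 0 and (30) hold
print(D(p), Delta(p))        # D = 64 > 1, Delta = -54 <= 0: (35)(a) holds
print(gamma_factor((0.5, 0.6, 0.7, 1.2, 1.4), p))
```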

We are now able to state the main results of this article; they are stated in terms of the seed vector \(\varvec{k}\), but a large part of their proofs will be given in terms of the shift vector \(\varvec{p}\). For continued fractions of straight type in Definition 2.2, we have the following theorem.

Theorem 3.2

(Straight Case) If \(\varvec{k}= (k_0, k_1, k_2; l_1, l_2) \in \mathcal {S}(\mathbb {Z})\) satisfies either

$$\begin{aligned} (\mathrm {a}) \quad \varDelta (\varvec{k}) \le 0 \qquad \hbox {or} \qquad (\mathrm {b}) \quad 2 l_1^2 - 2(k_1 + k_2) l_1 + k_1 k_2 \ge 0, \end{aligned}$$
(35)

then \(|D(\varvec{k})| > 1\) and there is an error estimate for the continued fraction expansion

(36)

as \(n \rightarrow + \infty \), provided that \(\mathrm {Re}\, s({\varvec{a}})\) is positive, \({}_3f_2({\varvec{a}})\) is non-zero, and the continued fraction \(\mathbf {K}_{j=0}^{\infty } \, r(j)/q(j)\) is well defined, where \(D(\varvec{k})\) is defined in (32) with \(\varvec{p}\) replaced by \(\varvec{k}\), while

$$\begin{aligned} c_{\mathrm {s}}({\varvec{a}}; \varvec{k}) := \rho ({\varvec{a}}; \varvec{k}) \cdot e_{\mathrm {s}}({\varvec{a}}; \varvec{k}) \cdot \gamma ({\varvec{a}}; \varvec{k}), \end{aligned}$$

with \(\rho ({\varvec{a}}; \varvec{k}) \in \mathbb {Q}[{\varvec{a}}]\) being the polynomial in (13), explicitly computable from \(\varvec{k}\),

$$\begin{aligned} e_{\mathrm {s}}({\varvec{a}}; \varvec{k}) := (2\pi )^{ \frac{3}{2} } \, \dfrac{ \prod _{i=0}^2 \prod _{j=1}^2 (l_j-k_i)^{2(l_j-k_i)+b_j-a_i- \frac{1}{2} } }{ s_2(\varvec{k})^{2 s({\varvec{\scriptstyle a}})-1} \prod _{i=0}^2 k_i^{2 k_i + a_i - \frac{1}{2} } }, \end{aligned}$$
(37)

with \(s_2(\varvec{k}) := k_0 k_1 + k_1 k_2 + k_2 k_0 - l_1 l_2\) and \(\gamma ({\varvec{a}}; \varvec{k})\) defined by Formula (34).

A numerical inspection shows that about 43 % of the vectors in \(\mathcal {S}(\mathbb {Z})\) satisfy condition (35) (see Remark 6.2). In the straight case with \(\varvec{k}\in \mathcal {S}(\mathbb {Z})\), Formulas (19) take the simpler form

$$\begin{aligned} u({\varvec{a}}) = \dfrac{(-1)^{l_1+l_2} \, \rho ({\varvec{a}}; 2 \varvec{k})}{\rho ({\varvec{a}}+\varvec{k}; \varvec{k}) \, \prod _{i=0}^2 (a_i; \, k_i)}, \qquad v({\varvec{a}}) = -\dfrac{(-1)^{l_1+l_2} \, \rho ({\varvec{a}}; \varvec{k}) \cdot \langle {\varvec{a}}+\varvec{k}; \varvec{k}\rangle _+}{\rho ({\varvec{a}}+\varvec{k}; \varvec{k}) \, \prod _{i=0}^2 (a_i; \, k_i)}. \end{aligned}$$
(38)

We turn our attention to continued fractions of twisted type in Definition 2.2.

Theorem 3.3

(Twisted Case) If \(\varvec{k}= (k_0, k_1, k_2; l_1, l_2) \in \mathbb {Z}^5_{\ge 0}\) satisfies the condition

$$\begin{aligned} k_0+k_1+k_2 = l_1 + l_2, \qquad l_1 \le l_2 \le \tau \, l_1, \quad \tau := (1+ \sqrt{3})/2 = 1.36602540\cdots , \end{aligned}$$
(39)

then there is an error estimate for the continued fraction expansion

(40)

as \(n \rightarrow +\infty \), provided that \(\mathrm {Re}\, s({\varvec{a}})\) is positive, \({}_3f_2({\varvec{a}})\) is non-zero, and the continued fraction \(\mathbf {K}_{j=0}^{\infty } \, r(j)/q(j)\) is well defined, where \(E(l_1, l_2)\) and \(c_{\mathrm {t}}({\varvec{a}}; \varvec{k})\) are given by

$$\begin{aligned} E(l_1, l_2)&:= \dfrac{(-l_1 - l_2)^{l_1 + l_2}}{(2 l_1 - l_2)^{2 l_1 - l_2} (2 l_2 - l_1)^{2 l_2 - l_1}}, \qquad |E(l_1, l_2)| > 1, \\ c_{\mathrm {t}}({\varvec{a}}; \varvec{k})&:= \rho ({\varvec{a}}; \varvec{k}) \cdot e_{\mathrm {t}}({\varvec{a}}; \varvec{k}) \cdot \gamma ({\varvec{a}}; \varvec{k}), \nonumber \end{aligned}$$
(41)

with \(\rho ({\varvec{a}}; \varvec{k}) \in \mathbb {Q}[{\varvec{a}}]\) being the polynomial in (13), explicitly computable from \(\varvec{k}\),

$$\begin{aligned} e_{\mathrm {t}}({\varvec{a}}; \varvec{k}) := (2 \pi )^{ \frac{3}{2} } \, \dfrac{ (2l_1-l_2)^{2(2 l_1-l_2) + 2 b_1-b_2+s({\varvec{\scriptstyle a}})- \frac{3}{2} } \cdot (2l_2-l_1)^{2(2 l_2-l_1) + 2 b_2-b_1 + s({\varvec{\scriptstyle a}})- \frac{3}{2} } }{ 3^{s({\varvec{\scriptstyle a}}) -\frac{1}{2} } \cdot (l_1+l_2)^{2(l_1+l_2) +a_0+a_1+a_2- \frac{3}{2} } \cdot (l_1^2 -l_1 l_2 + l_2^2)^{2 s({\varvec{\scriptstyle a}})-1}}, \end{aligned}$$
(42)

and \(\gamma ({\varvec{a}}; \varvec{k})\) being defined by Formula (34).

The proofs of Theorems 3.2 and 3.3 will be completed at the end of Sect. 7.

4 Continuous Laplace method

We shall find a class of directions \(\varvec{p}= (p_0, p_1, p_2; q_1, q_2) \in \mathbb {R}^5\) in which the sequence

$$\begin{aligned} f(n) = {}_3f_2({\varvec{a}}+n \varvec{p}) = {}_3f_2\begin{pmatrix} a_0 + p_0 n, &{} a_1 + p_1 n, &{} a_2 + p_2 n \\ &{} b_1 + q_1 n, &{} b_2 + q_2 n \end{pmatrix}, \qquad n \in \mathbb {Z}_{\ge 0}, \end{aligned}$$
(43)

behaves like \(n^{\alpha }\) as \(n \rightarrow +\infty \) for some \(\alpha \in \mathbb {R}\), where we assume \(s(\varvec{p}) = 0\) so that the parametric excesses for f(n) are independent of n, always equal to \(s({\varvec{a}})\). We remark that the current f(n) corresponds to the sequence \(f_0(n)\) in Sect. 2.3, not to f(n) in Formula (23a).

In terms of the series \({}_3f_2({\varvec{a}})\), Thomae’s transformation [1, Corollary 3.3.6] reads

$$\begin{aligned} {}_3f_2\begin{pmatrix} a_0, &{} a_1, &{} a_2 \\ &{} b_1, &{} b_2 \end{pmatrix} = \dfrac{\varGamma (a_1) \varGamma (a_2)}{\varGamma (b_1-a_0) \varGamma (b_2-a_0)} \, {}_3f_2\begin{pmatrix} s({\varvec{a}}), &{} b_1-a_0, &{} b_2-a_0 \\ &{} s({\varvec{a}}) + a_1, &{} s({\varvec{a}}) + a_2 \end{pmatrix}. \end{aligned}$$
(44)

To investigate the asymptotic behavior of f(n), take Thomae’s transformation of (43) to obtain

$$\begin{aligned} f(n)&= \psi _1(n) \cdot f_1(n), \end{aligned}$$
(45a)
$$\begin{aligned} \psi _1(n)&:= \dfrac{\varGamma ( a_1 + p_1 n ) \varGamma ( a_2 + p_2 n )}{ \varGamma ( b_1 - a_0 + (q_1 - p_0) n ) \varGamma ( b_2 - a_0 + (q_2 - p_0) n )}, \end{aligned}$$
(45b)
$$\begin{aligned} f_1(n)&= {}_3f_2\begin{pmatrix} s({\varvec{a}}), &{} b_1 - a_0 + (q_1 - p_0) n, &{} b_2 - a_0 + (q_2 - p_0) n \\ &{} s({\varvec{a}}) + a_1 + p_1 n, &{} s({\varvec{a}}) + a_2 + p_2 n \end{pmatrix}, \end{aligned}$$
(45c)

and then apply the ordinary Laplace method to the Euler integral representation for (45c). Since this analysis is not limited to \({}_3f_2(1)\), we shall deal with the more general \({}_{p+1}f_p(1)\) series.

4.1 Euler integral representations

The renormalized generalized hypergeometric series \({}_{p+1}f_p(z)\) is defined by

$$\begin{aligned} {}_{p+1}f_p\left( \begin{matrix} a_0, &{} a_1, &{} \dots , &{} a_p \\ &{} b_1, &{} \dots , &{} b_p \end{matrix} ; z \right) := \sum _{k=0}^{\infty } \dfrac{\varGamma (a_0+k) \varGamma (a_1+k) \cdots \varGamma (a_p+k)}{\varGamma (1+k) \varGamma (b_1+k) \cdots \varGamma (b_p+k)} \, z^k, \end{aligned}$$
(46)

where \({\varvec{a}}= (a_0, \dots , a_p; b_1, \dots , b_p) \in \mathbb {C}^{p+1} \times \mathbb {C}^p\) are parameters such that none of \(a_0, \dots , a_p\) is a negative integer or zero. Then (46) is absolutely convergent on the open unit disk \(|z| < 1\).

It is well known that if the parameters \({\varvec{a}}\) satisfy the condition

$$\begin{aligned} \mathrm {Re}\, b_i> \mathrm {Re}\, a_i > 0 \qquad (i = 1, \dots , p), \end{aligned}$$
(47)

then the improper integral of Euler type

$$\begin{aligned} E_p({\varvec{a}}; z) := \int _{I^p} \phi _p( \varvec{t}; {\varvec{a}}; z) \, d \varvec{t}, \qquad \phi _p( \varvec{t}; {\varvec{a}}; z) :=\dfrac{ \prod _{i=1}^p t_i^{a_i-1} (1-t_i)^{b_i-a_i-1}}{ (1- z \, t_1 \cdots t_p)^{a_0} } \end{aligned}$$

is absolutely convergent, and the series (46) admits an integral representation

$$\begin{aligned} {}_{p+1}f_p( {\varvec{a}}; z) = \dfrac{ \varGamma ( a_0 ) \cdot E_p( {\varvec{a}}; z )}{\prod _{i=1}^p \varGamma ( b_i-a_i )} \qquad \hbox {on the open unit disk} \quad |z| < 1, \end{aligned}$$
(48)

where \(I = (0, \, 1)\) is the open unit interval, \(\varvec{t}= (t_1, \dots , t_p) \in I^p\) and \(d \varvec{t}= d t_1 \cdots d t_p\).

We are more interested in \({}_{p+1}f_p(1)\), that is, in the series (46) at unit argument \(z = 1\)

$$\begin{aligned} {}_{p+1}f_p( {\varvec{a}}) = {}_{p+1}f_p( {\varvec{a}}; 1 ) := \sum _{k=0}^{\infty } \dfrac{\varGamma (a_0+k) \varGamma (a_1+k) \cdots \varGamma (a_p+k)}{\varGamma (1+k) \varGamma (b_1+k) \cdots \varGamma (b_p+k)}. \end{aligned}$$
(49)

It is well known that series (49) is absolutely convergent if and only if

$$\begin{aligned} \mathrm {Re}\, s({\varvec{a}}) > 0, \qquad s( {\varvec{a}}) := b_1 + \cdots + b_p - a_0 - a_1 - \cdots - a_p, \end{aligned}$$
(50)

in which case we have \({}_{p+1}f_p( {\varvec{a}}; z ) \rightarrow {}_{p+1}f_p( {\varvec{a}})\) as \(z \rightarrow 1\) within the open unit disk \(|z| < 1\).

Lemma 4.1

If conditions (47) and (50) are satisfied, then the integral

$$\begin{aligned} E_p( {\varvec{a}}) := \int _{I^p} \phi _p( \varvec{t}; {\varvec{a}}) \, d \varvec{t}, \qquad \phi _p( \varvec{t}; {\varvec{a}}) := \dfrac{ \prod _{i=1}^p t_i^{a_i-1} (1-t_i)^{b_i-a_i-1}}{(1- t_1 \cdots t_p)^{a_0}} \end{aligned}$$
(51)

is absolutely convergent and the series (49) admits an integral representation

$$\begin{aligned} {}_{p+1}f_p( {\varvec{a}}) = \dfrac{\varGamma ( a_0 ) \cdot E_p( {\varvec{a}}) }{\prod _{i=1}^p \varGamma ( b_i-a_i )}. \end{aligned}$$
(52)

Proof

If r denotes the distance of \(\varvec{t}\) from \(\varvec{1} := (1, \dots ,1)\) then one has

$$\begin{aligned} \phi _p(\varvec{t}; {\varvec{a}}) = O(r^{s({\varvec{\scriptstyle a}})-p}) \qquad \hbox {as} \quad I^p \ni \varvec{t}\rightarrow \varvec{1}. \end{aligned}$$
(53)

The absolute convergence of integral (51) off a neighborhood U of \(\varvec{1}\) is due to condition (47), while that on U follows from condition (50) and estimate (53). In view of

$$\begin{aligned} \lim _{I \ni z \rightarrow 1} \phi _p( \varvec{t}; {\varvec{a}}; z) = \phi _p( \varvec{t}; {\varvec{a}}), \qquad |\phi _p( \varvec{t}; {\varvec{a}}; z)| \le {\left\{ \begin{array}{ll} \phi _p( \varvec{t}; \mathrm {Re}\, {\varvec{a}}; 0) &{} (\mathrm {Re}\, a_0 \le 0, \, z \in I), \\ \phi _p( \varvec{t}; \mathrm {Re}\, {\varvec{a}}) &{} (\mathrm {Re}\, a_0 > 0, \, z \in I), \end{array}\right. } \end{aligned}$$

Formula (52) is derived from Formula (48) by Lebesgue’s convergence theorem. \(\square \)

The series (49) is symmetric in \(a_0, a_1, \dots , a_p\), but the integral representation (52) is symmetric only in \(a_1, \dots , a_p\). This fact is efficiently used in the next subsection.
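Lemma 4.1 is easy to test numerically for \(p = 2\): compare the double integral (51) against the series evaluated through (9) and (52). A sketch (Python/mpmath; the parameters are ours, chosen to satisfy (47) and (50), so the quadrature only meets mild endpoint singularities):

```python
# Check of (52) for p = 2: Euler integral vs. renormalized series.
from mpmath import mp, mpf, gamma, hyp3f2, quad

mp.dps = 15

a0, a1, a2 = mpf('0.5'), mpf('1.2'), mpf('1.3')
b1, b2 = mpf('2.5'), mpf('2.7')              # (47): b_i > a_i > 0; (50): s = 2.2

def phi2(t1, t2):                            # integrand phi_2(t; a) of (51)
    return (t1**(a1 - 1) * (1 - t1)**(b1 - a1 - 1) *
            t2**(a2 - 1) * (1 - t2)**(b2 - a2 - 1) / (1 - t1*t2)**a0)

E2 = quad(phi2, [0, 1], [0, 1])
lhs = (gamma(a0) * gamma(a1) * gamma(a2) / (gamma(b1) * gamma(b2))
       * hyp3f2(a0, a1, a2, b1, b2, 1))      # 3f2(a) via (9)
rhs = gamma(a0) * E2 / (gamma(b1 - a1) * gamma(b2 - a2))   # via (52)
print(lhs, rhs)                              # the two values should agree
```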

4.2 Asymptotic analysis of Euler integrals

Observing that the 0-th numerator parameter of the sequence \(f_1(n)\) in (45c) is independent of n, we consider a sequence of the form

$$\begin{aligned} f_1(n) := {}_{p+1}f_p\begin{pmatrix} a_0, &{} a_1 + k_1 n, &{} \dots , &{} a_p + k_p n \\ &{} b_1 + l_1 n, &{} \dots , &{} b_p + l_p n \end{pmatrix}, \qquad n \in \mathbb {Z}_{\ge 0}. \end{aligned}$$

The associated Euler integrals have an almost product structure which allows a particularly simple treatment in applying Laplace’s approximation method.

Proposition 4.2

If \(\varvec{k}= (0, k_1, \dots , k_p; l_1, \dots , l_p) \in \mathbb {R}^{2 p+1}\) is a real vector such that

$$\begin{aligned} l_i> k_i > 0 = k_0, \qquad i = 1, \dots , p, \end{aligned}$$
(54)

then \(E_p({\varvec{a}}+ n \, \varvec{k})\) admits an asymptotic representation as \(n \rightarrow +\infty \),

$$\begin{aligned} E_p\begin{pmatrix} a_0, &{} a_1 + k_1 n, &{} \dots , &{} a_p + k_p n \\ &{} b_1 + l_1 n, &{} \dots , &{} b_p + l_p n \end{pmatrix} = C \cdot \Phi _{\scriptstyle \mathrm {max}}^n \cdot n^{- \frac{p}{2} } \cdot \left\{ 1+ O(1/n) \right\} , \end{aligned}$$
(55)

uniformly for \({\varvec{a}}=(a_0, \dots , a_p; b_1, \dots , b_p)\) in any compact subset of \((\mathbb {C}\setminus \mathbb {Z}_{\le 0}) \times \mathbb {C}^p \times \mathbb {C}^p\), where

$$\begin{aligned} \Phi _{\scriptstyle \mathrm {max}}&:= \prod _{i=1}^p \frac{k_i^{k_i} (l_i - k_i)^{l_i -k_i}}{l_i^{l_i}}, \end{aligned}$$
(56a)
$$\begin{aligned} C&:= (2 \pi )^{ \frac{p}{2} } \left( 1- \frac{k_1 \cdots k_p}{l_1 \cdots l_p} \right) ^{-a_0} \prod _{i=1}^p \frac{k_i^{a_i- \frac{1}{2} } (l_i - k_i)^{b_i-a_i- \frac{1}{2} }}{l_i^{b_i - \frac{1}{2} }}. \end{aligned}$$
(56b)

Proof

The proof is an application of the standard Laplace method to the integral (52), so only an outline of it is presented. Replacing \({\varvec{a}}\) with \({\varvec{a}}+ n \, \varvec{k}\) in definition (51), we have

$$\begin{aligned} E_p({\varvec{a}}+ n \, \varvec{k}) = \int _{I^p} \Phi (\varvec{t})^n \cdot u(\varvec{t}) \, d \varvec{t}= \int _{I^p} e^{- n \, \phi (\varvec{{\scriptstyle t}})} \cdot u(\varvec{t}) \, d \varvec{t}, \end{aligned}$$

where \(\Phi (\varvec{t})\), \(\phi (\varvec{t})\), and \(u(\varvec{t})\) are defined by

$$\begin{aligned} \Phi (\varvec{t}) := \prod _{i=1}^p t_i^{k_{i}}(1-t_i)^{l_i-k_i}, \qquad \phi (\varvec{t}) := - \log \Phi (\varvec{t}), \qquad u(\varvec{t}) := \phi _p(\varvec{t}; {\varvec{a}}). \end{aligned}$$

Observe that \(\phi (\varvec{t})\) attains a unique minimum at \(\varvec{t}_0 := (k_1/l_1, \dots , k_p/l_p)\) in the cube \(I^p\), since

$$\begin{aligned}&\frac{\partial \phi }{\partial t_i} = - \frac{k_i}{t_i} + \frac{l_i-k_i}{1-t_i} = \frac{l_i t_i - k_i}{t_i(1-t_i)}, \qquad \frac{\partial ^2 \phi }{\partial t_i^2} = \frac{k_i}{t_i^2} + \frac{l_i - k_i}{(1-t_i)^2} > 0,\\&\quad \frac{\partial ^2 \phi }{\partial t_i \partial t_j} = 0 \quad (i \ne j). \end{aligned}$$

The standard formula for Laplace’s approximation then leads to

$$\begin{aligned} \begin{aligned} \int _{I^p} e^{- n \, \phi (\varvec{{\scriptstyle t}})} \cdot u(\varvec{t}) \, d \varvec{t}&= \dfrac{u(\varvec{t}_0)}{\sqrt{\mathrm {Hess}(\phi ; \varvec{t}_0)}} \left( \frac{2 \pi }{n} \right) ^{ \frac{p}{2} } \exp (-n \, \phi (\varvec{t}_0)) \, \left\{ 1 + O (1/n) \right\} \\&= C \cdot \Phi _{\scriptstyle \mathrm {max}}^n \cdot n^{- \frac{p}{2} } \, \left\{ 1+ O(1/n) \right\} \qquad \hbox {as} \quad n \rightarrow \infty , \end{aligned} \end{aligned}$$

where \(\mathrm {Hess}(\phi ; \varvec{t}_0)\) is the Hessian of \(\phi \) at \(\varvec{t}_0\) while \(\Phi _{\scriptstyle \mathrm {max}}\) and C are given by Formulas (56). \(\square \)
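The data entering (55) are explicit; the following sketch assembles \(\Phi _{\scriptstyle \mathrm {max}}\) and C from Formulas (56) (the helper name is ours; the sample \(k_i\), \(l_i\) satisfy (54)):

```python
# Phi_max and C of Formulas (56), for given a = (a0, a1..ap, b1..bp).
from mpmath import mp, mpf, pi

mp.dps = 25

def laplace_data(a, ks, ls):
    p = len(ks)
    a0, ups, lows = a[0], a[1:1 + p], a[1 + p:]
    Phi_max = mpf(1)
    ratio = mpf(1)
    for ki, li in zip(ks, ls):
        Phi_max *= mpf(ki)**ki * mpf(li - ki)**(li - ki) / mpf(li)**li
        ratio *= mpf(ki) / li
    C = (2*pi)**(mpf(p)/2) * (1 - ratio)**(-a0)
    for ki, li, ai, bi in zip(ks, ls, ups, lows):
        C *= (mpf(ki)**(ai - mpf('0.5')) * mpf(li - ki)**(bi - ai - mpf('0.5'))
              / mpf(li)**(bi - mpf('0.5')))
    return Phi_max, C

print(laplace_data((0.5, 1.2, 1.3, 2.5, 2.7), (1, 1), (2, 2)))
```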

4.3 Recessive sequences

We return to the special case of \({}_3f_2(1)\) series and prove the following.

Theorem 4.3

If \(\varvec{p}= (p_0,p_1,p_2;q_1,q_2) \in \mathbb {R}^5\) is balanced, \(s(\varvec{p}) = 0\), and

$$\begin{aligned} p_1> q_1 - p_0> 0, \qquad p_2> q_2 - p_0 > 0, \end{aligned}$$
(57)

then the sequence \(f(n) = {}_3f_2({\varvec{a}}+ n \varvec{p})\) in (43) admits an asymptotic representation

$$\begin{aligned} {}_3f_2({\varvec{a}}+ n \varvec{p}) = \varGamma (s({\varvec{a}})) \cdot s_2(\varvec{p})^{-s({\varvec{\scriptstyle a}})} \cdot n^{-2 s({\varvec{\scriptstyle a}})} \cdot \{ 1+O(1/n) \}, \quad \hbox {as} \quad n \rightarrow + \infty , \end{aligned}$$
(58)

uniformly on any compact subset of \(\mathrm {Re}\, s({\varvec{a}}) > 0\), where \(s_2(\varvec{p}) := p_0 p_1 + p_1p_2 + p_2 p_0 -q_1 q_2\).

Proof

By Formulas (45) and (52), the sequence (43) can be written as \(f(n) = \psi _2(n) \, e_2(n)\) with

$$\begin{aligned} \psi _2(n)&:= \dfrac{\varGamma ( s({\varvec{a}}) ) \, \varGamma ( a_1 + p_1 n ) \, \varGamma ( a_2 + p_2 n )}{ \prod _{j=1}^2 \prod _{i=0, j} \varGamma ( b_j-a_i+(q_j-p_i) n ) }, \end{aligned}$$
(59a)
$$\begin{aligned} e_2(n)&:= E_2\begin{pmatrix} s({\varvec{a}}), &{} b_1 - a_0 + (q_1 - p_0) n, &{} b_2 - a_0 + (q_2 - p_0) n \\ &{} s({\varvec{a}}) + a_1 + p_1 n, &{} s({\varvec{a}}) + a_2 + p_2 n \end{pmatrix}. \end{aligned}$$
(59b)

Conditions \(s(\varvec{p}) = 0\) and (57) imply that \(p_1\), \(p_2 > 0\) and \(q_j - p_i > 0\) for every \(j = 1\), 2 and \(i = 0\), j, so Stirling’s formula applied to (59a) yields an asymptotic representation

$$\begin{aligned} \psi _2(n) = B \cdot A^n \cdot n^{1-2 s({\varvec{\scriptstyle a}})} \, \{ 1 + O(1/n) \}, \end{aligned}$$
(60)

as \(n \rightarrow + \infty \), where A and B are given by

$$\begin{aligned} A := \frac{p_1^{p_1} \, p_2^{p_2}}{ \prod _{j=1}^2 \prod _{i=0,j} (q_j-p_i)^{q_j-p_i} }, \qquad B := \frac{\varGamma (s({\varvec{a}})) \cdot p_1^{a_1- \frac{1}{2} } \, p_2^{a_2- \frac{1}{2} }}{ 2 \pi \, \prod _{j=1}^2 \prod _{i=0, j} (q_j-p_i)^{b_j-a_i- \frac{1}{2} } }. \end{aligned}$$

When \(p = 2\), \(k_1 = q_1-p_0\), \(k_2 = q_2-p_0\), \(l_1 = p_1\), \(l_2 = p_2\), condition (54) becomes (57), so Proposition 4.2 applies to the sequence (59b). In this situation, we have \(\Phi _{\scriptstyle \mathrm {max}}= A^{-1}\) in (56a) and \(C = B^{-1} \cdot \varGamma (s({\varvec{a}})) \cdot \{p_1p_2-(q_1-p_0)(q_2-p_0)\}^{-s({\varvec{\scriptstyle a}})}\) in (56b), where we have \(p_1 p_2 -(q_1-p_0)(q_2-p_0) = s_2(\varvec{p})\) from \(s(\varvec{p}) = 0\). Thus Formula (55) reads

$$\begin{aligned} e_2(n) = B^{-1} \cdot \varGamma (s({\varvec{a}})) \cdot s_2(\varvec{p})^{-s({\varvec{\scriptstyle a}})} \cdot A^{-n} \cdot n^{-1} \, \{1 + O(1/n) \} \quad \hbox {as} \quad n \rightarrow + \infty . \end{aligned}$$
(61)

Combining Formulas (60) and (61), we have the asymptotic representation (58). \(\square \)
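Theorem 4.3 can be probed numerically: along the balanced direction \(\varvec{p}= (2,2,2;3,3)\), which satisfies (57), the ratio of f(n) to the right-hand side of (58) should approach 1. A sketch (Python/mpmath; the parameter values are ours, and the large-parameter \({}_3F_2(1)\) evaluations may be slow):

```python
# Ratio f(n) / (Gamma(s) s2(p)^{-s} n^{-2s}) along p = (2,2,2; 3,3), cf. (58).
from mpmath import mp, mpf, gamma, hyp3f2

mp.dps = 20

a = [mpf('0.5'), mpf('0.6'), mpf('0.7'), mpf('1.2'), mpf('1.4')]
p = [2, 2, 2, 3, 3]
s = a[3] + a[4] - a[0] - a[1] - a[2]      # s(a) = 0.8; n-independent as s(p) = 0
s2 = 2*2 + 2*2 + 2*2 - 3*3                # s2(p) = 3

def f(n):                                 # f(n) = 3f2(a + n p), via (9)
    an = [ai + pi_ * n for ai, pi_ in zip(a, p)]
    pref = (gamma(an[0]) * gamma(an[1]) * gamma(an[2])
            / (gamma(an[3]) * gamma(an[4])))
    return pref * hyp3f2(an[0], an[1], an[2], an[3], an[4], 1)

for n in [5, 10, 20]:
    print(n, f(n) / (gamma(s) * s2**(-s) * n**(-2*s)))   # should tend to 1
```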

Thomae’s transformation (44) rewrites \({}_3f_2({\varvec{a}})\) so that the parametric excess \(s({\varvec{a}})\) appears as an upper parameter, and the invariance \(s({\varvec{a}}) = s({\varvec{a}}+ n \varvec{p})\), \(n \in \mathbb {Z}_{\ge 0}\), for balanced \(\varvec{p}\) facilitates the analysis leading to Theorem 4.3. Note that (44) is only one element of a group of 120 transformations for \({}_3F_2(1)\) (see [19, Theorem 3] for an impressive account). We wonder whether other transformations of the group could be applied to cover some non-balanced cases.

Remark 4.4

We take this opportunity to review some existing results on the large-parameter asymptotics of \({}_2F_1\) and \({}_3F_2\). For the former, we refer to a classical book of Luke [20, Chap. 7] and more recent articles of Temme [25], Paris [22], Farid Khwaja and Olde Daalhuis [11], Aoki and Tanda [2], and Iwasaki [17], where much work has used the traditional (continuous) version of Laplace’s method, while [2] employs exact WKB analysis. For the latter, there are very few to cite; some results are mentioned in [20, Sect. 7.4], but most work has focused on the asymptotics of terminating series such as the behavior as \(n \rightarrow \infty \) of the ‘extended Jacobi’ polynomials \({}_3F_2(-n, n+ \lambda , a_3; b_1, b_2; z)\), to which one can apply very different techniques such as ones based on generating series; see e.g., Fields [12]. Temme [25] comments on the difficulty of obtaining large-parameter asymptotics of \({}_3F_2\) functions, even in the terminating cases. As an attempt to overcome this difficulty, we shall introduce a discrete version of Laplace’s method.

5 Discrete Laplace method

When a solution to a recurrence equation is given in terms of hypergeometric series, we want to know its asymptotic behavior and thereby to check whether it is actually a dominant solution. To this end, regarding the series as a “discrete” integral, we develop a discrete Laplace method as an analogue to the usual (continuous) Laplace method for ordinary integrals. While Theorems 3.2 and 3.3 on continued fractions are the final goal of this article, the main result of this section, Theorem 5.2, and the method leading to it are the methodological core of the article.

5.1 Formulation

Let \(\varvec{\sigma }= (\sigma _i) \in \mathbb {R}^I\), \(\varvec{\lambda }= (\lambda _i) \in \mathbb {R}^I\), \(\varvec{\tau }= (\tau _j) \in \mathbb {R}^J\), \(\varvec{\mu }= (\mu _j) \in \mathbb {R}^J\) be real numbers indexed by finite sets I and J. Suppose that the pairs \((\varvec{\sigma }, \varvec{\tau })\) and \((\varvec{\lambda }, \varvec{\mu })\) are balanced to the effect that

$$\begin{aligned} \sum _{i \in I} \sigma _i = \sum _{j \in J} \tau _j, \qquad \sum _{i \in I} \lambda _i = \sum _{j \in J} \mu _j. \end{aligned}$$
(62)

Let \(\varvec{\alpha }(n) = (\alpha _i(n)) \in \mathbb {C}^I\) and \(\varvec{\beta }(n) = (\beta _j(n)) \in \mathbb {C}^J\) be sequences in \(n \in \mathbb {N}\) of complex numbers indexed by \(i \in I\) and \(j \in J\). Suppose that they are bounded, that is, for some constant \(R > 0\),

$$\begin{aligned} |\alpha _i(n) | \le R \quad (i \in I); \qquad |\beta _j(n) | \le R \quad (j \in J), \qquad {}^{\forall }n \in \mathbb {N}. \end{aligned}$$
(63)

In practical applications, \(\varvec{\alpha }(n)\) and \(\varvec{\beta }(n)\) will typically be independent of n; however, allowing such a moderate dependence upon n as in (63) is quite helpful in developing the theory.

Given \(0 \le r_0 < r_1 \le + \infty \), we consider the sum of gamma products

$$\begin{aligned} g(n) := \sum _{k = \lceil r_0 n \rceil }^{\lceil r_1 n \rceil -1} G(k; n), \quad G(k; n) := \dfrac{\prod _{i \in I} \varGamma ( \sigma _i \, k + \lambda _i \, n + \alpha _i(n))}{\prod _{j \in J} \varGamma ( \tau _j \, k + \mu _j \, n + \beta _j(n))}, \quad n \in \mathbb {N}, \end{aligned}$$
(64)

where \(\lceil x \rceil := \min \{ m \in \mathbb {Z}\,:\, x \le m \}\) denotes the ceiling function. We remark that the reflection of the discrete variable \(k \mapsto \lceil r_0 n \rceil + \lceil r_1 n \rceil -1 -k\) in (64) induces an involution

$$\begin{aligned} \sigma _i'&= - \sigma _i, \qquad&\lambda _i'&= \lambda _i + \sigma _i (r_0 + r_1), \qquad&\alpha _i'(n)&= \alpha _i(n) - \sigma _i \, r(n), \end{aligned}$$
(65a)
$$\begin{aligned} \tau _j'&= - \tau _j, \qquad&\mu _j'&= \mu _j + \tau _j (r_0 + r_1), \qquad&\beta _j'(n)&= \beta _j(n) - \tau _j \, r(n), \end{aligned}$$
(65b)

where \(r(n) := (r_0+r_1) n +1 - \lceil r_0 n \rceil - \lceil r_1 n \rceil \) and the resulting data are indicated with a prime, while the reflection leaves \(r_0\) and \(r_1\) unchanged. Since \(-1 < r(n) \le 1\), if \(\alpha _i(n)\) and \(\beta _j(n)\) are bounded then so are \(\alpha _i'(n)\) and \(\beta _j'(n)\). This reflectional symmetry is helpful on some occasions. Moreover, for any integer \(s \le r_0\) the shift \(k \mapsto k + s n\) in (64) results in the translations

$$\begin{aligned} r_0 \!\mapsto \!r_0 - s, \quad r_1 \!\mapsto \!r_1 - s; \quad \sigma _i \mapsto \sigma _i, \quad \lambda _i \mapsto \lambda _i + \sigma _i s; \quad \tau _j \mapsto \tau _j, \quad \mu _j \mapsto \mu _j + \tau _j s. \end{aligned}$$
(66)

Taking \(s = \lfloor r_0 \rfloor \) we may assume \(0 \le r_0 < 1\), where \(\lfloor x \rfloor := \max \{ m \in \mathbb {Z}\,:\, m \le x \}\) is the floor function. This normalization is also sometimes convenient.

It is insightful to rewrite the gamma product G(kn) as

$$\begin{aligned} G(k; n) = H\left( k/n ; n \right) , \qquad H(x; n) := \dfrac{\prod _{i \in I} \varGamma ( l_i(x) \, n + \alpha _i(n))}{\prod _{j \in J} \varGamma ( m_j(x) \, n + \beta _j(n))}, \end{aligned}$$
(67)

where \(l_i(x)\) and \(m_j(x)\) are affine functions defined by

$$\begin{aligned} l_i(x) := \sigma _i x + \lambda _i \quad (i \in I), \qquad m_j(x) := \tau _j x + \mu _j \qquad (j \in J). \end{aligned}$$

We remark that condition (62) is equivalent to the balancedness of affine functions

$$\begin{aligned} \sum _{i \in I} l_i(x) = \sum _{j \in J} m_j(x), \qquad {}^{\forall } x \in {\mathbb {R}}. \end{aligned}$$
(68)

The sum g(n) is said to be admissible if

$$\begin{aligned} \sigma _i&\ne 0; \qquad&l_i( r_0 )&\ge 0, \qquad&l_i( r_1 )&\ge 0 \qquad&(i&\in I), \end{aligned}$$
(69a)
$$\begin{aligned} \tau _j&\ne 0; \qquad&m_j( r_0 )&\ge 0, \qquad&m_j( r_1 )&\ge 0 \qquad&(j&\in J), \end{aligned}$$
(69b)

where if \(r_1 = +\infty \) then by \(l_i( r_1 ) \ge 0\) and \(m_j( r_1 ) \ge 0\), we mean \(\sigma _i > 0\) and \(\tau _j > 0\). Condition (69) says that \(l_i(x)\) and \(m_j(x)\) are non-constant affine functions taking non-negative values at both ends of the interval \([r_0, \, r_1]\), so they must be positive in its interior, that is,

$$\begin{aligned} l_i(x)> 0 \quad (i \in I); \quad m_j(x) > 0 \quad (j \in J), \qquad r_0< {}^{\forall } x < r_1. \end{aligned}$$
(70)

To work near the endpoints of the interval, we introduce four index subsets

$$\begin{aligned} I_0&:= \{ i \in I \,:\, l_i( r_0 ) = 0 \}, \qquad&I_1&:= \{ i \in I \,:\, l_i( r_1 ) = 0 \}, \end{aligned}$$
(71a)
$$\begin{aligned} J_0&:= \{ j \in J \,:\, m_j( r_0 ) = 0 \}, \qquad&J_1&:= \{ j \in J \,:\, m_j( r_1 ) = 0 \}. \end{aligned}$$
(71b)

Then there exists a positive constant \(c > 0\) such that

$$\begin{aligned} l_i(x) \ge c \quad (i \in I \setminus (I_0 \cup I_1)); \quad m_j(x) \ge c \quad (j \in J \setminus (J_0 \cup J_1)), \quad r_0 \le {}^{\forall } x \le r_1. \end{aligned}$$
(72)

This “uniformly away from zero” property will be important in applying a version of Stirling’s formula which is given later in (90), especially when \(I_0 \cup I_1 \cup J_0 \cup J_1 = \emptyset \) (regular case).

Lemma 5.1

We have \(\sigma _i > 0\) for \(i \in I_0\) while \(\sigma _i < 0\) for \(i \in I_1\), in particular \(I_0 \cap I_1 = \emptyset \). Similarly we have \(\tau _j > 0\) for \(j \in J_0\) while \(\tau _j < 0\) for \(j \in J_1\), in particular \(J_0 \cap J_1 = \emptyset \). If

$$\begin{aligned} \alpha ^{(\nu )}_i(n) := \alpha _i(n) + \sigma _i (\lceil r_{\nu } n \rceil - r_{\nu } n ) \not \in \mathbb {Z}_{\le 0} + |\sigma _i| \, \mathbb {Z}_{\le -\nu }, \quad i \in I_{\nu }, \,\, \nu = 0, 1, \,\, n \in \mathbb {N}, \end{aligned}$$
(73)

then the sum g(n) is well defined, that is, every summand \(G(k; n) = H(k/n; n)\) in (64) takes a finite value for any \(n \ge (R+1)/c\) with R and c given in (63) and (72).

Proof

By condition (69a), if \(i \in I_0\) then \(0 \le l_i( r_1 ) = l_i( r_1 ) - l_i( r_0 ) = (r_1-r_0) \sigma _i\) with \(r_1-r_0 > 0\) and \(\sigma _i \ne 0\), which forces \(\sigma _i > 0\), while if \(i \in I_1\) then \(0 \le l_i( r_0 ) = l_i( r_0 ) - l_i( r_1 ) = (r_0-r_1) \sigma _i\) with \(r_0-r_1 < 0\) and \(\sigma _i \ne 0\), which forces \(\sigma _i < 0\). A similar argument using (69b) leads to the assertions for \(J_0\) and \(J_1\). The sum g(n) fails to make sense only when the argument of an upper gamma factor of a summand G(kn) takes a negative integer value or zero, that is,

$$\begin{aligned} \sigma _i k + \lambda _i n + \alpha _i(n) = l_i(k/n) \, n + \alpha _i(n) \in \mathbb {Z}_{\le 0}, \qquad {}^{\exists } i \in I, \quad \lceil r_0 n \rceil \le {}^{\exists } k \le \lceil r_1 n \rceil -1. \end{aligned}$$

This cannot occur for \(i \in I \setminus (I_0 \cup I_1)\) and \(n \ge (R+1)/c\), since (63) and (72) imply that \(l_i(k/n) \, n + \mathrm {Re}\, \alpha _i(n) \ge c n -R \ge 1\) for any \(k \in \mathbb {Z}\) such that \(r_0 \le k/n \le r_1\). Observe that

$$\begin{aligned} \sigma _i k + \lambda _i n + \alpha _i(n) = \sigma _i \, l + l_i(r_0) n + \alpha ^{(0)}_i(n) = \sigma _i \, l + \alpha ^{(0)}_i(n), \qquad i \in I_0, \end{aligned}$$

where \(l := k - \lceil r_0 n \rceil \) ranges over \(0, 1, \dots , \lceil r_1 n \rceil - \lceil r_0 n \rceil -1\). This cannot be a negative integer or zero, if condition (73) is satisfied for \(\nu = 0\). A similar argument can be made for \(\nu = 1\), since condition (73) for \(\nu =1\) is obtained from that for \(\nu =0\) by applying reflectional symmetry (65). Thus if (73) is satisfied then g(n) is well defined for \(n \ge (R+1)/c\). \(\square \)

To carry out the analysis it is convenient to quantify condition (73) by writing

$$\begin{aligned} \delta _{\nu }(n) := \min \Big \{ \, 1, \, \prod _{i \in I_{\nu } } \mathrm {dist} (\alpha _i^{(\nu )} (n), \, \mathbb {Z}_{\le 0} + |\sigma _i| \mathbb {Z}_{\le - \nu }) \, \Big \} > 0, \quad \nu = 0, 1, \,\, n \in \mathbb {N}, \end{aligned}$$
(74)

where \(\mathrm {dist}(z, Z)\) stands for the distance between a point z and a set Z in \(\mathbb {C}\); the cutoff at 1 simply ensures \(\delta _{\nu }(n) \le 1\), since the quantity matters only when \(0 < \delta _{\nu }(n) \ll 1\). Condition (73), quantified by (74), is referred to as genericness for the data \(\varvec{\alpha }(n)\).
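For readers who wish to experiment numerically, the following minimal sketch (our own; the truncation depth and sample values are arbitrary choices) evaluates the distance entering (74) by inspecting finitely many points of the set \(\mathbb {Z}_{\le 0} + |\sigma _i| \, \mathbb {Z}_{\le -\nu }\), which suffices for bounded arguments since far-away lattice points cannot realize the minimum.

```python
import math

def dist_to_set(z, sigma_abs, nu, depth=200):
    """Approximate dist(z, Z_{<=0} + sigma_abs * Z_{<=-nu}) by truncation."""
    pts = (m + sigma_abs * k
           for m in range(0, -depth, -1)
           for k in range(-nu, -nu - depth, -1))
    return min(abs(z - p) for p in pts)

def delta_nu(alphas, sigmas, nu):
    """The quantity delta_nu(n) of (74) for one tuple of shifted parameters."""
    prod = 1.0
    for alpha, sigma in zip(alphas, sigmas):
        prod *= dist_to_set(alpha, abs(sigma), nu)
    return min(1.0, prod)

# two sample parameters that stay safely away from the excluded set
print(delta_nu([0.3 + 0.1j, -1.45], [1, 2], nu=0))
```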

5.2 Main results on discrete Laplace method

To state the main result of this section we introduce the following quantities:

$$\begin{aligned} \Phi (x)&:= \prod _{i \in I} l_i(x)^{l_i(x)} \prod _{j \in J} m_j(x)^{-m_j(x)}, \end{aligned}$$
(75a)
$$\begin{aligned} u(x; n)&:= (2 \pi )^{\frac{|I|-|J|}{2}} \prod _{i \in I} l_i(x)^{\alpha _i(n) - \frac{1}{2} } \prod _{j \in J} m_j(x)^{ \frac{1}{2} - \beta _j(n)}, \end{aligned}$$
(75b)

where |I| and |J| are the cardinalities of I and J. We refer to \(\Phi (x)\) as the multiplicative phase function for the sum g(n) in (64).

Thanks to positivity (70) the function \(\Phi (x)\) is smooth and positive on \((r_0, \, r_1)\). If we employ the convention \(0^0 = 1\), which is natural in view of the limit \(x^x \rightarrow 1\) as \(x \rightarrow +0\), then \(\Phi (x)\) is continuous and positive at \(x = r_0\) as well as at \(x = r_1\) when \(r_1 < +\infty \), even if some of the \(l_i(x)\)’s or \(m_j(x)\)’s vanish at one or both endpoints. When \(r_1 = + \infty \), a computation using the balancedness condition (62) shows that

$$\begin{aligned} \Phi (x) = \left( \varvec{\sigma }^{\varvec{{\scriptstyle \lambda }}} / \varvec{\tau }^{\varvec{{\scriptstyle \mu }}} \right) \cdot \left( \varvec{\sigma }^{\varvec{{\scriptstyle \sigma }}} / \varvec{\tau }^{\varvec{{\scriptstyle \tau }}} \right) ^x \cdot \left\{ 1 + O(1/x) \right\} \quad \hbox {as} \quad x \rightarrow + \infty , \end{aligned}$$
(76)

where \(\varvec{\sigma }^{\varvec{{\scriptstyle \sigma }}} := \prod _{i \in I} \sigma _i^{\sigma _i}\), \(\varvec{\sigma }^{\varvec{{\scriptstyle \lambda }}} := \prod _{i \in I} \sigma _i^{\lambda _i}\), and so on; note that all of \(\sigma _i\) and \(\tau _j\) are positive due to the admissibility condition (69) for the \(r_1 = + \infty \) case. Thus it is natural to define

$$\begin{aligned} \Phi (+ \infty ) := {\left\{ \begin{array}{ll} 0 \quad &{} (\hbox {if} \,\,\, \varvec{\sigma }^{\varvec{{\scriptstyle \sigma }}} < \varvec{\tau }^{\varvec{{\scriptstyle \tau }}} ), \\ \varvec{\sigma }^{\varvec{{\scriptstyle \lambda }}} / \varvec{\tau }^{\varvec{{\scriptstyle \mu }}} \quad &{} (\hbox {if} \,\,\, \varvec{\sigma }^{\varvec{{\scriptstyle \sigma }}} = \varvec{\tau }^{\varvec{{\scriptstyle \tau }}} ), \\ +\infty \quad &{} (\hbox {if} \,\,\, \varvec{\sigma }^{\varvec{{\scriptstyle \sigma }}} > \varvec{\tau }^{\varvec{{\scriptstyle \tau }}} ). \end{array}\right. } \end{aligned}$$
(77)

With this understanding we assume the continuity at infinity:

$$\begin{aligned} \varvec{\sigma }^{\varvec{{\scriptstyle \sigma }}} \le \varvec{\tau }^{\varvec{{\scriptstyle \tau }}} \qquad (\hbox {when} \,\,\, r_1 = + \infty ). \end{aligned}$$
(78)

Then \(\Phi (x)\) is continuous on \([r_0, \, r_1]\) even when \(r_1 = + \infty \) and it makes sense to define

$$\begin{aligned} \Phi _{\scriptstyle \mathrm {max}}:= \max _{r_0 \le x \le r_1} \Phi (x), \end{aligned}$$

as a positive finite number. Therefore the function

$$\begin{aligned} \phi (x) := - \log \Phi (x) \end{aligned}$$
(79)

is a real-valued, continuous function on \([r_0, \, r_1)\), smooth in \((r_0, \, r_1)\); if \(r_1 < + \infty \) then it is also continuous at \(x = r_1\); otherwise, \(\phi (x)\) is either continuous at \(x = + \infty \) or tends to \(+ \infty \) as \(x \rightarrow + \infty \). We refer to \(\phi (x)\) as the additive phase function for the sum g(n) in (64).

When \(r_1 = + \infty \) we have to think of the (absolute) convergence of the infinite series (64). If the strict inequality \(\varvec{\sigma }^{\varvec{{\scriptstyle \sigma }}} < \varvec{\tau }^{\varvec{{\scriptstyle \tau }}}\) holds in (78) then it certainly converges. Otherwise, in order to guarantee its convergence, suppose that there is a constant \(\sigma > 0\) such that for any \(n \in \mathbb {N}\),

$$\begin{aligned} \mathrm {Re}\, \gamma (n) \le -1-\sigma \qquad (\hbox {if } \varvec{\sigma }^{\varvec{{\scriptstyle \sigma }}} = \varvec{\tau }^{\varvec{{\scriptstyle \tau }}}), \end{aligned}$$
(80)

where

$$\begin{aligned} \gamma (n) := \sum _{i \in I} \alpha _i(n) - \sum _{j \in J} \beta _j(n) + \frac{|J|-|I|}{2}. \end{aligned}$$
(81)

Thanks to positivity (70), the function u(x; n) is also smooth and nowhere vanishing on \((r_0, \, r_1)\), but it may be singular at one or both ends of the interval when some of the \(l_i(x)\)’s or \(m_j(x)\)’s vanish there. To deal with this situation we say that g(n) is left-regular if \(I_0 \cup J_0 = \emptyset \); right-regular if \(I_1 \cup J_1 = \emptyset \); and regular if \(I_0 \cup J_0 \cup I_1 \cup J_1 = \emptyset \). If g(n) is left-regular resp. right-regular with \(r_1 < + \infty \), then u(x; n) is continuous at \(x = r_0\) resp. \(x = r_1\). When \(r_1 < + \infty \) the reflectional symmetry (65) interchanges left- and right-regularity. We remark that if \(r_1 = + \infty \) then right-regularity automatically follows from admissibility.

The maximum of \(\Phi (x)\), or equivalently the minimum of \(\phi (x)\), plays a leading role in our analysis, so it is important to examine the first and second derivatives of \(\phi (x)\). Differentiating (79) with the balancedness condition (62) taken into account yields

$$\begin{aligned} \phi '(x)&= \log \prod _{j \in J} m_j(x)^{\tau _j} \prod _{i \in I} l_i(x)^{-\sigma _i}, \end{aligned}$$
(82a)
$$\begin{aligned} \phi ''(x)&= \sum _{j \in J} \dfrac{\tau _j^2}{m_j(x)} - \sum _{i \in I} \dfrac{\sigma _i^2}{l_i(x)}. \end{aligned}$$
(82b)
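As a quick sanity check of (82a), the sketch below (our own) compares the closed form with a central difference of \(\phi \) for a toy balanced data set \(l_1(x) = 1+x\), \(m_1(x) = m_2(x) = x\), \(m_3(x) = 1-x\) on \([r_0, \, r_1] = [0, \, 1]\), for which \(\sum _i \sigma _i = \sum _j \tau _j = 1\) and \(\sum _i \lambda _i = \sum _j \mu _j = 1\); this data is ours, chosen only for illustration, and is reused in the sketches below.

```python
import math

# toy balanced data: l1(x) = 1 + x; m1(x) = m2(x) = x, m3(x) = 1 - x
def phi(x):            # additive phase (79): phi = -log Phi with Phi as in (75a)
    return -((1 + x) * math.log(1 + x) - 2 * x * math.log(x)
             - (1 - x) * math.log(1 - x))

def phi_prime(x):      # closed form (82a): log of m1*m2/(m3*l1)
    return math.log(x * x / ((1 - x) * (1 + x)))

x, h = 0.4, 1e-6
print((phi(x + h) - phi(x - h)) / (2 * h))   # numerical derivative
print(phi_prime(x))                          # agrees to ~1e-9
```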

Denote by \(\mathrm {M{\scriptstyle ax}}\) the set of all maximum points of \(\Phi (x)\) on \([r_0, \, r_1]\). Suppose that \(\Phi (x)\) attains its maximum \(\Phi _{\scriptstyle \mathrm {max}}\) only within \((r_0, \, r_1)\), that is, \(r_0\), \(r_1 \not \in \mathrm {M{\scriptstyle ax}}\). Moreover suppose that every maximum point is non-degenerate to the effect that

$$\begin{aligned} \mathrm {M{\scriptstyle ax}}\Subset (r_0, \, r_1), \qquad \phi ''( x_0 ) > 0 \quad \hbox {at any} \quad x_0 \in \mathrm {M{\scriptstyle ax}}, \end{aligned}$$
(83)

which is referred to as properness of the maximum. By Formula (82a) any \(x \in \mathrm {M{\scriptstyle ax}}\) is a root of

$$\begin{aligned} \chi ( x ) := \prod _{j \in J} m_j( x )^{\tau _j} - \prod _{i \in I} l_i( x )^{\sigma _i} = 0, \qquad x \in (r_0, \, r_1), \end{aligned}$$
(84)

which is called the characteristic equation for g(n), while \(\chi ( x )\) is referred to as the characteristic function for g(n). It is easy to see that Eq. (84) has only a finite number of roots, unless \(\chi (x) \equiv 0\), so \(\mathrm {M{\scriptstyle ax}}\) must be a finite set. Note that \(\phi '(x)\) and \(\chi (x)\) have the same sign.

Equation (84) can be used to determine the set \(\mathrm {M{\scriptstyle ax}}\) explicitly. In applications to hypergeometric series, one usually puts \(\sigma _i\), \(\tau _j = \pm 1\) and \(\lambda _i\), \(\mu _j \in \mathbb {Z}\), so that (84) is equivalent to an algebraic equation with integer coefficients and hence any \(x \in \mathrm {M{\scriptstyle ax}}\) must be an algebraic number. In this case with \(r_1 = +\infty \), since \(\varvec{\sigma }^{\varvec{{\scriptstyle \sigma }}} = \varvec{\tau }^{\varvec{{\scriptstyle \tau }}} = \varvec{\sigma }^{\varvec{{\scriptstyle \lambda }}} = \varvec{\tau }^{\varvec{{\scriptstyle \mu }}} = 1\), the continuity at infinity (78) is trivially satisfied with \(\Phi (+ \infty ) = 1\) in (77), and condition \(\mathrm {M{\scriptstyle ax}}\Subset (r_0, \, +\infty )\) in (83) implies \(\Phi _{\scriptstyle \mathrm {max}}> 1\).
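For the toy data above (whose sum g(n) is the central Delannoy number \(\sum _k \tbinom{n+k}{k} \tbinom{n}{k}\)), the characteristic equation (84) can be solved numerically as follows; this sketch is ours, and the bracketing interval for `brentq` is an arbitrary choice.

```python
import math
from scipy.optimize import brentq

# characteristic function (84) for l1 = 1 + x, m1 = m2 = x, m3 = 1 - x:
# chi(x) = m1 * m2 * m3^(-1) - l1 = x^2/(1 - x) - (1 + x)
chi = lambda x: x * x / (1 - x) - (1 + x)
x0 = brentq(chi, 0.1, 0.9)                 # the unique root in (0, 1)

Phi = lambda x: (1 + x)**(1 + x) * x**(-2 * x) * (1 - x)**(-(1 - x))
print(x0, 1 / math.sqrt(2))                # x0 = 1/sqrt(2) = 0.7071...
print(Phi(x0), 3 + 2 * math.sqrt(2))       # Phi_max = 3 + 2*sqrt(2) = 5.8284...
```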

Theorem 5.2

If balancedness (62), boundedness (63), admissibility (69), genericness (74) and properness (83) are all satisfied, with continuity at infinity (78) and convergence (80) being added when \(r_1 = + \infty \), then the sum g(n) in (64) admits an asymptotic representation

$$\begin{aligned} g(n) = n^{\gamma (n) + \frac{1}{2} } \cdot \Phi _{\scriptstyle \mathrm {max}}^{\, n} \cdot \left\{ \, C(n) + \Omega (n) \right\} , \end{aligned}$$
(85)

where \(\gamma (n)\) is defined in Formula (81) while \(\Phi _{\scriptstyle \mathrm {max}}\) and C(n) are defined by

$$\begin{aligned} \Phi _{\scriptstyle \mathrm {max}}:= \Phi (x_0) \quad \hbox {for any} \quad x_0 \in \mathrm {M{\scriptstyle ax}}; \qquad C(n) := \sqrt{2 \pi } \sum _{x_0 \in \mathrm {M{\scriptstyle ax}}} \dfrac{u( x_0; n)}{ \sqrt{\phi ''( x_0 )}} \end{aligned}$$
(86)

in terms of the notations in (75), whereas the error term \(\Omega (n)\) is estimated as

$$\begin{aligned} |\Omega (n)| \le K \{ n^{-\frac{1}{2}} + \lambda ^{-n} (\delta _0(n)^{-1} + \delta _1(n)^{-1}) \}, \qquad {}^{\forall } n \ge N, \end{aligned}$$
(87)

for some constants \(K > 0\), \(\lambda > 1\), and \(N \in \mathbb {N}\), where \(\delta _0(n)\) and \(\delta _1(n)\) are defined in (74). This estimate is valid uniformly for all \(\varvec{\alpha }(n)\) and \(\varvec{\beta }(n)\) satisfying conditions (63) and (74) along with (80) when \(r_1 = +\infty \), in which case \(I_1 = \emptyset \) and so \(\delta _1(n) = 1\).

Things are simpler when \(\mathrm {M{\scriptstyle ax}}\) consists of a single point \(x_0 \in (r_0, \, r_1)\), in which case the main idea for proving Theorem 5.2 is to divide the sum (64) into five components:

$$\begin{aligned} g(n) = g_0(n) + h_0(n) + h(n) + h_1(n) + g_1(n), \end{aligned}$$

with each component being a partial sum of (64) defined by

$$\begin{aligned} g_0(n)&:= \hbox {sum of }G(k; n) \hbox { over} \,\,&\lceil r_0 n \rceil&\le k \le \lceil (r_0 + \varepsilon ) n \rceil -1, \quad&\hbox {(left end)} \nonumber \\ h_0(n)&:= \hbox {sum of }G(k; n) \hbox { over} \,\,&\lceil (r_0 + \varepsilon ) n \rceil&\le k \le \lceil (x_0 -\varepsilon ) n \rceil -1, \quad&\hbox {(left side)} \nonumber \\ h(n)&:= \hbox {sum of }G(k; n)\hbox { over} \,\,&\lceil (x_0-\varepsilon ) n \rceil&\le k \le \lceil (x_0 +\varepsilon ) n \rceil -1, \quad&\hbox {(top)} \\ h_1(n)&:= \hbox {sum of }G(k; n) \hbox { over} \,\,&\lceil (x_0 +\varepsilon ) n \rceil&\le k \le \lceil (r_1-\varepsilon ) n \rceil -1, \quad&\hbox {(right side)} \nonumber \\ g_1(n)&:= \hbox {sum of }G(k; n) \hbox { over} \,\,&\lceil (r_1 -\varepsilon ) n \rceil&\le k \le \lceil r_1 n \rceil -1, \quad&\hbox {(right end)} \nonumber \end{aligned}$$
(88)

where if \(r_1 = + \infty \) then the right-end component should be omitted. In order for the division (88) to make sense, the number \(\varepsilon \) must satisfy

$$\begin{aligned} 0< \varepsilon < \varepsilon _0 := \min \{ (x_0 -r_0)/2, \, (r_1-x_0)/2 \}. \end{aligned}$$
(89)

How to take \(\varepsilon \in (0, \, \varepsilon _0)\) will be specified in the course of establishing Theorem 5.2.

We want to think of h(n) as the principal part of g(n), and of the other four components as remainders. Thus estimating the top component h(n) is the central issue of this section, but the treatment of both ends \(g_0(n)\) and \(g_1(n)\) is also far from trivial. For the sake of simplicity we shall deal with the case \(|\mathrm {M{\scriptstyle ax}}| = 1\) only, but even when \(|\mathrm {M{\scriptstyle ax}}| \ge 2\) things are essentially the same and it will be clear how to modify the arguments. The reflectional symmetry (65) reduces the discussion at the right end or right side to the discussion at the left counterpart. The top and side sums are regular, so we shall begin by estimating regular sums in Sect. 5.3.

In the present article, we are working in the balanced cases, that is, under condition (62); it is an interesting problem to extend our method so as to cover non-balanced cases.

In the sequel, we shall often utilize the following version of Stirling’s formula: For any positive number \(c > 0\) and any compact subset \(A \Subset \mathbb {C}\), we have

$$\begin{aligned} \varGamma ( x n + a ) = (2 \pi )^{1/2} \, x^{a- \frac{1}{2} } \, n^{a- \frac{1}{2} } \, x^{x n} \, (n/e)^{x n} \, \left\{ \, 1 + O( 1/n ) \, \right\} \quad \hbox {as} \quad n \rightarrow +\infty , \end{aligned}$$
(90)

where Landau’s symbol O(1/n) is uniform with respect to \((x, a) \in \mathbb {R}_{\ge c} \times A\).
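For instance, the following sketch (ours; the sample values are arbitrary) checks (90) against the log-gamma function:

```python
import math

def log_rhs(x, a, n):
    # logarithm of the right-hand side of (90), without the {1 + O(1/n)} factor
    return (0.5 * math.log(2 * math.pi)
            + (a - 0.5) * (math.log(x) + math.log(n))
            + x * n * math.log(x) + x * n * (math.log(n) - 1))

x, a = 0.75, 1.3
for n in (10, 100, 1000):
    print(n, math.lgamma(x * n + a) - log_rhs(x, a, n))   # decays like O(1/n)
```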

5.3 Regular sums and side components

In this subsection, we assume that g(n) in (64) satisfies balancedness (62), boundedness (63), and admissibility (69), along with continuity at infinity (78) and convergence (80) if \(r_1 = + \infty \), while properness (83) is not assumed and genericness (74) is irrelevant to regular sums.

Lemma 5.3

If the sum g(n) in (64) is regular then there exist an integer \(N_0 \in \mathbb {N}\) and a constant \(C_0 > 0\) such that H(x; n) in Formula (67) can be written

$$\begin{aligned} H(x; n)&= u(x; n) \cdot n^{\gamma (n)} \cdot \Phi (x)^n \cdot \{1+ e(x; n) \}, \end{aligned}$$
(91a)
$$\begin{aligned} |e( x; n)|&\le C_0/n, \qquad {}^{\forall } n \ge N_0, \,\, r_0 \le {}^{\forall } x \le r_1. \end{aligned}$$
(91b)

Proof

Since g(n) is regular, that is, \(I_0 \cup I_1 \cup J_0 \cup J_1 = \emptyset \), we have the uniform positivity (72) for all \(i \in I\), \(j \in J\), and \(x \in [r_0, \, r_1]\). This together with boundedness (63) allows us to apply Stirling’s formula (90) to all gamma factors \(\varGamma (l_i(x) n + \alpha _i(n))\) and \(\varGamma (m_j(x) n + \beta _j(n))\) of H(x; n) in (67). Taking definitions (75) and (81) into account, we use Formula (90) to obtain

$$\begin{aligned} H(x; n) = u(x; n) \cdot n^{\gamma (n)} \cdot \Phi (x)^n \cdot \left\{ (n/e)^{\sum _{i \in I} l_i(x) - \sum _{j \in J} m_j(x)} \right\} ^n \cdot \{1 + O(1/n) \}, \end{aligned}$$

where the O(1/n) term is uniform with respect to \(x \in [r_0, \, r_1]\) as well as to \(\varvec{\alpha }(n)\) and \(\varvec{\beta }(n)\) satisfying condition (63). Then balancedness (68) yields the desired Formula (91). \(\square \)

Proposition 5.4

If the sum g(n) in Formula (64) is regular then it admits an estimate

$$\begin{aligned} |g(n)| \le C_1 \cdot n^{\mathrm {Re}\, \gamma (n) + 1} \cdot \Phi _{\scriptstyle \mathrm {max}}^{\, n}, \qquad {}^{\forall } n \ge N_0, \end{aligned}$$

for a constant \(C_1 > 0\) and an integer \(N_0 \in \mathbb {N}\) which is the same as in Lemma 5.3.

Proof

From representation (91), we have

$$\begin{aligned} |H(x; n)| \le (1+C_0) \cdot |u(x; n)|\cdot n^{\mathrm {Re}\, \gamma (n)} \cdot \Phi _{\scriptstyle \mathrm {max}}^{\, n}, \qquad r_0 \le {}^{\forall } x \le r_1, \,\, {}^{\forall } n \ge N_0. \end{aligned}$$
(92)

First we consider the case \(r_1 < + \infty \). Since g(n) is regular and \(\varvec{\alpha }(n)\) and \(\varvec{\beta }(n)\) are bounded by assumption (63), the definition (75b) implies that u(xn) is bounded for \((x, n) \in [r_0, \, r_1] \times \mathbb {Z}_{\ge N_0}\). Replacing the constant \(C_0\) by a larger one if necessary, we have \(|H(x; n)| \le C_0 \cdot n^{\mathrm {Re}\, \gamma (n)} \cdot \Phi _{\scriptstyle \mathrm {max}}^{\, n}\) for any \(x \in [r_0, \, r_1]\) and \(n \ge N_0\). Thus by definitions (64) and (67), we have for any \(n \ge N_0\),

$$\begin{aligned} |g(n)|&\le \sum _{k=\lceil r_0 n \rceil }^{\lceil r_1 n \rceil -1} |H(k/n; n)| \le C_0 \cdot n^{\mathrm {Re}\, \gamma (n)} \cdot \Phi _{\scriptstyle \mathrm {max}}^{\, n} \sum _{k=\lceil r_0 n \rceil }^{\lceil r_1 n \rceil -1} 1 \\&= C_0 \cdot n^{\mathrm {Re}\, \gamma (n)} \cdot \Phi _{\scriptstyle \mathrm {max}}^{\, n} \cdot (\lceil r_1 n \rceil - \lceil r_0 n \rceil ) \le C_1 \cdot n^{\mathrm {Re}\, \gamma (n) + 1} \cdot \Phi _{\scriptstyle \mathrm {max}}^{\, n}, \end{aligned}$$

with the constant \(C_1 := C_0 (1+r_1-r_0)\).

We proceed to the case \(r_1 = + \infty \) and \(\varvec{\sigma }^{\varvec{{\scriptstyle \sigma }}} = \varvec{\tau }^{\varvec{{\scriptstyle \tau }}}\), in which condition (80) comes into play. Since g(n) is regular and \(\varvec{\alpha }(n)\) and \(\varvec{\beta }(n)\) are bounded by (63), the definition (75b) implies that

$$\begin{aligned} u(x; n) = (2 \pi )^{ \frac{|I|-|J|}{2} } \, \prod _{i \in I} \sigma _i^{\alpha _i(n) - \frac{1}{2}} \prod _{j \in J} \tau _j^{\frac{1}{2}-\beta _j(n)} \cdot x^{\gamma (n)} \cdot \{ 1 + O(1/x) \} \qquad \hbox {as} \quad x \rightarrow + \infty , \end{aligned}$$

uniformly for \(n \in \mathbb {N}\). By condition (80) there exists a constant \(C_2 > 0\) such that

$$\begin{aligned} |u(x; n)| \le C_2 \, (2+x)^{-1-\sigma }, \qquad {}^{\forall } x \ge r_0, \,\, {}^{\forall } n \ge N_0. \end{aligned}$$

In view of definitions (64) and (67), this estimate together with Formula (92) yields

$$\begin{aligned} |g(n)|&\le \sum _{k=\lceil r_0 n \rceil }^{\infty } |H(k/n; n)| \\&\le C_2 \, (1+C_0) \cdot n^{\mathrm {Re}\, \gamma (n) +1} \cdot \Phi _{\scriptstyle \mathrm {max}}^{\, n} \sum _{k=\lceil r_0 n \rceil }^{\infty } \left( 2+\frac{k}{n} \right) ^{-1-\sigma } \frac{1}{n} \\&\le C_2 \, (1+C_0) \cdot n^{\mathrm {Re}\, \gamma (n) +1} \cdot \Phi _{\scriptstyle \mathrm {max}}^{\, n} \int _{r_0}^{\infty } \left( 1+ x \right) ^{-1-\sigma } \, d x \\&= C_1 \cdot n^{\mathrm {Re}\, \gamma (n) + 1} \cdot \Phi _{\scriptstyle \mathrm {max}}^{\, n}, \end{aligned}$$

for any integer \(n \ge N_0\), where \(C_1 := C_2 \, (1+C_0)\, (1+r_0)^{-\sigma }/\sigma \).

The proof ends with the case where \(r_1 = + \infty \) and \(\varvec{\sigma }^{\varvec{{\scriptstyle \sigma }}} < \varvec{\tau }^{\varvec{{\scriptstyle \tau }}}\). By Stirling’s formula (90) and asymptotic representation (76), there exists a constant \(C_3 > 0\) such that

$$\begin{aligned} |H(x; n)| \le C_3 \cdot (x n)^{\mathrm {Re}\, \gamma (n)} \cdot \Phi (x)^n, \qquad \Phi (x) \le C_3 \cdot \rho ^x, \qquad {}^{\forall } x \ge r_0, \,\, {}^{\forall } n \ge N_0, \end{aligned}$$

with \(0< \rho := \varvec{\sigma }^{\varvec{{\scriptstyle \sigma }}}/\varvec{\tau }^{\varvec{{\scriptstyle \tau }}} < 1\). Take a number \(r_2 > r_0\) so large that \(d := C_3 \cdot \rho ^{r_2/2} < \Phi _{\scriptstyle \mathrm {max}}\) and let \(g(n) = g_1(n) + g_2(n)\) be the decomposition according to the division \([r_0, \,+\infty ) = [r_0, \, r_2) \cup [r_2, \, +\infty )\). Then an estimate for the \(r_1 < + \infty \) case applies to \(g_1(n)\), while one has \(|H(x; n)| \le C_3 \cdot d^n \cdot (x n)^c \cdot \rho ^{x n/2}\) for \(x \ge r_2\), where \(c := \sup _{n \ge N_0} \mathrm {Re}\, \gamma (n)\), and hence

$$\begin{aligned} |g_2(n)|&\le \sum _{k=\lceil r_2 n \rceil }^{\infty } |H(k/n; n)|\le C_3 \cdot d^n \sum _{k = \lceil r_2 n \rceil }^{\infty } k^c \cdot \rho ^{k/2}\\&\le C_3 \cdot d^n \sum _{k = 1}^{\infty } k^c \cdot \rho ^{k/2} = C_4 \cdot d^n \end{aligned}$$

for any \(n \ge N_0\). It is clear from \(0< d < \Phi _{\scriptstyle \mathrm {max}}\) that the proposition follows. \(\square \)

Proposition 5.4 can be used to estimate the side components \(h_0(n)\) and \(h_1(n)\) in (88).

Lemma 5.5

For any \(0< \varepsilon < \varepsilon _0\) there exist \(N_1^{\varepsilon } \in \mathbb {N}\) and \(C_1^{\varepsilon } > 0\) such that

$$\begin{aligned} | h_0(n) | \le C_1^{\varepsilon } \cdot n^{\mathrm {Re}\, \gamma (n) + 1} \cdot (\Phi _0^{\varepsilon })^n, \quad | h_1(n) | \le C_1^{\varepsilon } \cdot n^{\mathrm {Re}\, \gamma (n) + 1} \cdot (\Phi _1^{\varepsilon })^n, \quad {}^{\forall } n \ge N_1^{\varepsilon }, \end{aligned}$$

where

\(\Phi _0^{\varepsilon } := \displaystyle \max _{r_0 + \varepsilon \le x \le x_0-\varepsilon } \Phi (x)\)  and  \(\Phi _1^{\varepsilon } := \displaystyle \max _{x_0 + \varepsilon \le x \le r_1-\varepsilon } \Phi (x)\).

Proof

We have only to apply Proposition 5.4 with \(r_0\) and \(r_1\) replaced by \(r_0 + \varepsilon \) and \(x_0-\varepsilon \) to deduce the estimate for \(h_0(n)\). In a similar manner, we apply the proposition this time with \(r_0\) and \(r_1\) replaced by \(x_0 + \varepsilon \) and \(r_1-\varepsilon \) to get the estimate for \(h_1(n)\). \(\square \)

5.4 Top component

We consider the top component h(n) in (88). Recall the setting in Sect. 5.2 that \(\mathrm {M{\scriptstyle ax}}= \{ x_0 \} \Subset (r_0, \, r_1)\), \(\Phi _{\scriptstyle \mathrm {max}}= \Phi ( x_0 ) = e^{- \phi ( x_0)}\), \(\phi '( x_0 ) = 0\), and \(\phi ''( x_0 ) > 0\). Since the sum h(n) is regular, Lemma 5.3 implies that H(x; n) can be written as in (91a) with estimate (91b) now being

$$\begin{aligned} |e( x; n)| \le C_0(\varepsilon )/n, \qquad {}^{\forall } n \ge N_0(\varepsilon ), \,\, x_0-\varepsilon \le {}^{\forall } x \le x_0+\varepsilon . \end{aligned}$$
(93)

The local study of H(x; n) near \(x = x_0 \) is best performed in terms of new variables

$$\begin{aligned} y := x- x_0 \quad \hbox {(shift)}; \qquad z := \sqrt{n} \, y \quad \hbox {(scale change)}. \end{aligned}$$

Taylor expansions around \(x = x_0 \) show that \(\phi ( x )\) and u(xn) can be written

$$\begin{aligned} \phi (x)&= \phi ( x_0 ) + a \, y^2 + \eta ( y ), \quad&|\eta ( y )|&\le b \, |y|^3, \qquad&|{}^{\forall } y \, |&\le \varepsilon _1, \end{aligned}$$
(94a)
$$\begin{aligned} u(x; n)&= u( x_0; n) + v( y; n ), \quad&|v( y; n )|&\le c \, |y|, \qquad&|{}^{\forall } y \, |&\le \varepsilon _1, \end{aligned}$$
(94b)

with \(a := \textstyle \frac{1}{2} \, \phi ''( x_0 ) > 0\) and some positive constants b, c, \(\varepsilon _1 > 0\). It is clear that a and b are independent of n. We can also take c and \(\varepsilon _1\) uniform in n because \(\varvec{\alpha }(n)\) and \(\varvec{\beta }(n)\) are bounded by assumption (63). If we put

$$\begin{aligned} H_{\mathrm {a}}(x; n)&:= u(x_0; n) \cdot n^{\gamma (n)} \cdot \Phi ( x )^n = u( x_0; n ) \cdot n^{\gamma (n)} \cdot \Phi _{\scriptstyle \mathrm {max}}^{\, n} \cdot e^{-n \{ a \, y^2 + \eta ( y ) \}}, \end{aligned}$$
(95a)
$$\begin{aligned} H_{\mathrm {b}}(x; n)&:= v( y; n ) \cdot n^{\gamma (n)} \cdot \Phi ( x )^n = n^{\gamma (n)} \cdot \Phi _{\scriptstyle \mathrm {max}}^{\, n} \cdot v( y; n ) \cdot e^{-n \{ a \, y^2 + \eta ( y ) \}}, \end{aligned}$$
(95b)
$$\begin{aligned} H_{\mathrm {c}}(x; n)&:= u(x; n) \cdot n^{\gamma (n)} \cdot \Phi (x)^n \cdot e(x; n), \end{aligned}$$
(95c)

then Formula (91a) yields \(H(x; n) = H_{\mathrm {a}}(x; n) + H_{\mathrm {b}}(x; n) + H_{\mathrm {c}}(x; n)\), which in turn gives

$$\begin{aligned} h(n) = h_{\mathrm {a}}(n) + h_{\mathrm {b}}(n) + h_{\mathrm {c}}(n), \qquad h_{\nu }(n) := \sum _{k = l}^{m-1} H_{\nu }(k/n; n), \quad \nu = \mathrm {a}, \mathrm {b}, \mathrm {c}, \end{aligned}$$

where \(l := \lceil (x_0-\varepsilon ) n \rceil \) and \(m := \lceil (x_0 + \varepsilon ) n \rceil \).

To estimate \(h_{\mathrm {a}}(n)\) we use some a priori estimates, which will be collected in Sect. 5.6.

Lemma 5.6

For any \(0< \varepsilon < \varepsilon _2 := \min \{ \varepsilon _0, \, \frac{\varepsilon _1}{2}, \, \frac{a}{4 b} \}\) and \(n \ge N_1(\varepsilon ) := \max \{ 2/\varepsilon , \, N_0(\varepsilon ) \}\),

$$\begin{aligned} h_{\mathrm {a}}(n) = \sqrt{\pi /a} \, \cdot u(x_0; n) \cdot n^{\gamma (n) +1/2} \cdot \Phi _{\scriptstyle \mathrm {max}}^{\, n} \cdot \left\{ 1 + e_{\mathrm {a}}(n) \cdot n^{-1/2} \right\} , \end{aligned}$$
(96a)
$$\begin{aligned} |e_{\mathrm {a}}(n)| \le M_5(a, b; \varepsilon ) := 2 M_3(a, b) + (5/a) \cdot (2 \varepsilon )^{-3/2}, \end{aligned}$$
(96b)

where \(M_3(a, b)\) is defined in Lemma 5.17 and currently \(a := \frac{1}{2} \phi ''(x_0) > 0\).

Proof

Put \(\psi (z; a) := e^{- a \, z^2 + \delta (z)}\) with \(\delta (z) := - n \cdot \eta \left( n^{-1/2} z \right) \). Then (95a) and (94a) read

$$\begin{aligned} H_{\mathrm {a}}(x; n)&= u(x_0; n) \cdot n^{\gamma (n) + 1/2} \cdot \Phi _{\scriptstyle \mathrm {max}}^{\, n} \cdot \psi (z; a) \cdot \frac{1}{\sqrt{n}}, \end{aligned}$$
(97a)
$$\begin{aligned} |\delta (z)|&\le \frac{b}{\sqrt{n}} \, |z|^3 \qquad (|{}^{\forall } z \, | \le \varepsilon _1 \sqrt{n} \,). \end{aligned}$$
(97b)

Consider the sequence \(\varDelta : \xi _k := (k - x_0 n)/\sqrt{n}\) \((k = l, \dots , m)\). From the definitions of l and m,

$$\begin{aligned} - \varepsilon \sqrt{n} \le \xi _l< -\varepsilon \sqrt{n} + 1/\sqrt{n}, \qquad \varepsilon \sqrt{n} \le \xi _m < \varepsilon \sqrt{n} + 1/\sqrt{n}, \end{aligned}$$
(98)

which together with \(0< \varepsilon < \varepsilon _2\) and \(n \ge N_1(\varepsilon )\) implies inclusion \([\xi _l, \, \xi _m] \subset [- \varepsilon _1 \sqrt{n}, \, \varepsilon _1 \sqrt{n}]\), so the estimate (97b) is available for all \(z \in [\xi _l, \, \xi _m]\). From Formula (97a), we have

$$\begin{aligned} h_{\mathrm {a}}(n) = u(x_0; n) \cdot n^{\gamma (n) + \frac{1}{2} } \cdot \Phi _{\scriptstyle \mathrm {max}}^{\, n} \cdot R(\psi ; \varDelta ) \quad \hbox {with} \quad R(\psi ; \varDelta ) := \sum _{k=l}^{m-1} \psi (\xi _k; a) \frac{1}{\sqrt{n}}, \end{aligned}$$
(99)

where \(R(\psi ; \varDelta )\) is the left Riemann sum of \(\psi (z; a)\) for equipartition \(\varDelta \) of the interval \([\xi _l, \, \xi _m]\).

Let \(\varphi (z; a) := e^{-a z^2}\). Since \(|\xi _k - z| \le 1/\sqrt{n}\) for any \(z \in [\xi _k, \, \xi _{k+1}]\), Lemma 5.17 yields

$$\begin{aligned} \begin{aligned}&\left| R(\psi ; \varDelta ) - \int _{\xi _l}^{\xi _m} \varphi (z; a) \, d z \right| = \left| \sum _{k=l}^{m-1} \int _{\xi _k}^{\xi _{k+1}} \{ \psi (\xi _k; a) - \varphi (z; a) \} \, d z \right| \\&\quad \le \sum _{k=l}^{m-1} \int _{\xi _k}^{\xi _{k+1}} \left| \psi (\xi _k; a) - \varphi (z; a) \right| \, d z \le \frac{M_3(a, b)}{\sqrt{n}} \sum _{k=l}^{m-1} \int _{\xi _k}^{\xi _{k+1}}\varphi (z; a/4) \, d z \\&\quad = \frac{M_3(a, b)}{\sqrt{n}} \int _{\xi _l}^{\xi _m} \varphi (z; a/4) \, d z \le \dfrac{M_3(a, b)}{\sqrt{n}} \displaystyle \int _{-\infty }^{\infty } \varphi (z; a/4) \, d z = 2 M_3(a, b) \sqrt{\dfrac{\pi }{a n}}, \end{aligned} \end{aligned}$$

where estimate (119) is used in the second inequality. By the partition of Gaussian integral

$$\begin{aligned} \sqrt{\pi /a} = \int _{-\infty }^{\infty } \varphi (z; a) \, d z = \int _{-\infty }^{\xi _l} + \int _{\xi _l}^{\xi _m} + \int _{\xi _m}^{\infty } \varphi (z; a) \, d z, \end{aligned}$$

and bounds \(\xi _l \le -\varepsilon \sqrt{n}/2\) and \(\xi _m \ge \varepsilon \sqrt{n}\), which follow from (98) and \(n \ge 2/\varepsilon \), we have

$$\begin{aligned} \left| R(\psi ; \varDelta ) - \sqrt{\pi /a} \, \right|&\le \int _{-\infty }^{-\varepsilon \sqrt{n}/2} \varphi (z; a) \, d z + 2 M_3(a, b) \sqrt{\dfrac{\pi }{a n}} + \int _{\varepsilon \sqrt{n}}^{\infty } \varphi (z; a) \, d z \nonumber \\&\le 2 M_3(a, b) \sqrt{\dfrac{\pi }{a n}} + \frac{5 \sqrt{\pi }}{2 a^{3/2} \varepsilon ^2 \cdot n} \le \sqrt{\frac{\pi }{a n}} M_5(a, b; \varepsilon ), \end{aligned}$$
(100)

with \(M_5(a, b; \varepsilon ) := 2 M_3(a, b) + (5/a) \cdot (2 \varepsilon )^{-3/2}\), where the estimate

$$\begin{aligned} \int _z^{\infty } \varphi (t; a) \, d t \le \frac{1}{2} \sqrt{\frac{\pi }{a}} \, \varphi (z; a) \le \frac{\sqrt{\pi }}{2 a^{3/2} z^2}, \qquad {}^{\forall } z \ge 0, \end{aligned}$$

and \(\sqrt{n} \ge \sqrt{2/\varepsilon }\) are used in the second and third inequalities, respectively. Upon writing \(R(\psi ; \varDelta ) = \sqrt{\pi /a} \, \{ 1 + e_{\mathrm {a}}(n) \cdot n^{-1/2} \}\), Formula (96) follows from (99) and (100). \(\square \)
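The mechanism of this proof, a left Riemann sum with mesh \(1/\sqrt{n}\) converging to the Gaussian integral \(\sqrt{\pi /a}\), can be observed directly. A sketch of ours with \(\delta \equiv 0\) (so \(\psi = \varphi \)) and arbitrary parameters:

```python
import math

def riemann_sum(a, eps, n):
    # left Riemann sum of exp(-a z^2) on the grid xi_k = k/sqrt(n), |xi_k| <= eps*sqrt(n)
    l, m = -math.ceil(eps * n), math.ceil(eps * n)
    return sum(math.exp(-a * (k / math.sqrt(n))**2) for k in range(l, m)) / math.sqrt(n)

a, eps = 1.7, 0.3
for n in (10, 100, 1000):
    err = abs(riemann_sum(a, eps, n) - math.sqrt(math.pi / a))
    print(n, err)   # stays within the O(n^{-1/2}) bound of (100); here it is even smaller
```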

Lemma 5.7

For any \(0< \varepsilon < \varepsilon _2\) and \(n \ge N_1(\varepsilon )\), we have

$$\begin{aligned} |h_{\mathrm {b}}(n) | \le c \, M_6(a) \cdot n^{\mathrm {Re}\, \gamma (n)} \cdot \Phi _{\scriptstyle \mathrm {max}}^{\, n}, \end{aligned}$$
(101)

where \(M_6(a) := 2 M_4(a/2) \sqrt{\pi /a} + 2/a\) with \(M_4(a)\) defined in Lemma 5.18 and \(a := \frac{1}{2} \phi ''(x_0) > 0\). For any \(0< \varepsilon < \varepsilon _0\) there exists a constant \(C_2(\varepsilon ) > 0\) such that

$$\begin{aligned} | h_{\mathrm {c}}(n) | \le C_2(\varepsilon ) \cdot n^{\mathrm {Re}\, \gamma (n)} \cdot \Phi _{\scriptstyle \mathrm {max}}^{\, n}, \qquad {}^{\forall } n \ge N_0(\varepsilon ). \end{aligned}$$
(102)

Proof

If \(| y | \le 2 \varepsilon _2\) \((\le \varepsilon _1)\) then estimate (94a) yields

$$\begin{aligned} a \, y^2 + \eta ( y )&\ge a \, y^2 - b \, |y|^3 = a \, y^2 \left( 1 - \textstyle \frac{b}{a} \, |y| \right) \\&\ge a \, y^2 \left( 1 - \textstyle \frac{2 b}{a}\, \varepsilon _2 \right) \ge \textstyle \frac{a}{2} \, y^2, \end{aligned}$$

which together with estimate (94b) and definition (95b) gives

$$\begin{aligned} |H_{\mathrm {b}}(x; n)|&\le c \cdot n^{\mathrm {Re}\, \gamma (n)} \cdot \Phi _{\scriptstyle \mathrm {max}}^{\, n} \cdot e^{ - \frac{a}{2} n y^2} \, | y |, \qquad&|{}^{\forall } y \, |&\le 2 \varepsilon _2, \nonumber \\&= c \cdot n^{\mathrm {Re}\, \gamma (n)} \cdot \Phi _{\scriptstyle \mathrm {max}}^{\, n} \cdot \varphi _1(z; a/2) \cdot \textstyle \frac{1}{\sqrt{n}}, \qquad&|{}^{\forall } z \,|&\le 2 \varepsilon _2 \sqrt{n}, \end{aligned}$$
(103)

where \(\varphi _1(z; a) := |z| \, e^{- a z^2}\). If \(0< \varepsilon < \varepsilon _2\) and \(n \ge N_1(\varepsilon )\) then \([\xi _l, \, \xi _m] \subset [- 2 \varepsilon _2 \sqrt{n}, \, 2 \varepsilon _2 \sqrt{n}]\) follows from (98), so estimate (103) is available for all \(z \in [\xi _l, \, \xi _m]\), yielding

$$\begin{aligned} | h_{\mathrm {b}}(n) | \le \sum _{k=l}^{m-1} | H_{\mathrm {b}} (k/n; n)| \le c \cdot n^{\mathrm {Re}\, \gamma (n)} \cdot \Phi _{\scriptstyle \mathrm {max}}^{\, n} \cdot R(\varphi _1; \varDelta ), \end{aligned}$$

where the Riemann sum \(R(\varphi _1; \varDelta ) := \sum _{k=l}^{m-1} \varphi _1(\xi _k; a/2) \cdot \frac{1}{\sqrt{n}}\) is estimated as

$$\begin{aligned} R(\varphi _1; \varDelta )&\le \sum _{k=l}^{m-1} \int _{\xi _k}^{\xi _{k+1}} |\varphi _1(\xi _k; a/2) - \varphi _1(z; a/2) | \, d z + \int _{\xi _l}^{\xi _m} \varphi _1(z; a/2) \, d z \\&\le M_4(a/2) \sum _{k=l}^{m-1} \int _{\xi _k}^{\xi _{k+1}} | \xi _k - z | \, \varphi (z; a/4) \, d z + \int _{-\infty }^{\infty } \varphi _1(z; a/2) \, d z \\&\le \frac{M_4(a/2)}{\sqrt{n}} \int _{-\infty }^{\infty } \varphi (z; a/4) \, d z + \int _{-\infty }^{\infty } \varphi _1(z; a/2) \, d z \\&= 2 M_4(a/2) \sqrt{\frac{\pi }{a n}} + \frac{2}{a} \le M_6(a), \end{aligned}$$

where the second inequality is obtained by Lemma 5.18. Now (101) follows readily.

Since \(\varvec{\alpha }(n)\) and \(\varvec{\beta }(n)\) are bounded by (63), there exists a constant \(C_1(\varepsilon ) > 0\) such that \(|u(x; n)| \le C_1(\varepsilon )\) for any \(n \in \mathbb {N}\) and \(x \in [x_0-\varepsilon , \, x_0+\varepsilon ]\), which together with (93) yields

$$\begin{aligned} |H_{\mathrm {c}}(x; n)| \!\le \!C_1(\varepsilon ) \cdot n^{\mathrm {Re}\, \gamma (n)} \cdot \Phi _{\scriptstyle \mathrm {max}}^{\, n} \cdot C_0(\varepsilon )/n, \quad {}^{\forall } n \ge N_0(\varepsilon ), \,\, x_0\!-\!\varepsilon \le {}^{\forall } x \le x_0+\varepsilon . \end{aligned}$$

Since \(m-l = \lceil (x_0 + \varepsilon ) n\rceil - \lceil (x_0 - \varepsilon ) n\rceil \le (2 \varepsilon + 1) n\), we have for any \(n \ge N_0(\varepsilon )\),

$$\begin{aligned} | h_{\mathrm {c}}(n) |\le & {} \sum _{k=l}^{m-1} |H_{\mathrm {c}}(k/n; n)|\\\le & {} C_0(\varepsilon ) \cdot C_1(\varepsilon ) \cdot \frac{m-l}{n} \cdot n^{\mathrm {Re}\, \gamma (n)} \cdot \Phi _{\scriptstyle \mathrm {max}}^{\, n} = C_2(\varepsilon ) \cdot n^{\mathrm {Re}\, \gamma (n)} \cdot \Phi _{\scriptstyle \mathrm {max}}^{\, n}, \end{aligned}$$

where \(C_2(\varepsilon ) := (2 \varepsilon + 1) \cdot C_0(\varepsilon ) \cdot C_1(\varepsilon )\). This establishes estimate (102). \(\square \)

Proposition 5.8

For any \(0< \varepsilon < \varepsilon _2\), there is a constant \(M(\varepsilon ) > 0\) such that

$$\begin{aligned} h(n) = \sqrt{2 \pi } \, \frac{ u(x_0; n) }{\sqrt{ \phi ''(x_0)} } \cdot n^{\gamma (n) + \frac{1}{2} } \cdot \Phi _{\scriptstyle \mathrm {max}}^{\, n} \cdot \left\{ 1 + \frac{e(n)}{ \sqrt{n} } \right\} , \qquad |e(n)| \le M(\varepsilon ), \end{aligned}$$
(104)

for any \(n \ge N_1(\varepsilon )\), where \(\varepsilon _2\) and \(N_1(\varepsilon )\) are given in Lemma 5.6.

Proof

This readily follows from \(h(n) = h_{\mathrm {a}}(n) + h_{\mathrm {b}}(n) + h_{\mathrm {c}}(n)\) and Lemmas 5.6 and 5.7. \(\square \)

5.5 Irregular sums and end components

We shall estimate the left-end component \(g_0(n)\) in (88). When \(r_1 < +\infty \) the estimate for the right-end component \(g_1(n)\) follows from the left-end counterpart by reflectional symmetry (65). If we make the translation \(k \mapsto l := k - \lceil r_0 n \rceil \) for convenience, we can write

$$\begin{aligned} \sigma _i k + \lambda _i n + \alpha _i(n)&= \sigma _i \, l + \bar{\lambda }_i n + {\bar{\alpha }}_i(n), \qquad&\bar{\lambda }_i&:= l_i(r_0) \qquad&(i&\in I), \\ \tau _j k + \mu _j n + \beta _j(n)&= \tau _j \, l + \bar{\mu }_j n + {\bar{\beta }}_j(n), \qquad&\bar{\mu }_j&:= m_j(r_0) \qquad&(j&\in J), \end{aligned}$$

where \({\bar{\alpha }}_i(n) :=\alpha _i(n) + \sigma _i (\lceil r_0 n \rceil - r_0 n) \) and \({\bar{\beta }}_j(n) := \beta _j(n) + \tau _j (\lceil r_0 n \rceil - r_0 n)\). Note that \({\bar{\alpha }}_i(n)\) here is the same as \(\alpha _i^{(0)}(n)\) in Formula (73). Put \(I_0^+ := I \setminus I_0\) and \(J_0^+ := J \setminus J_0\), where the index sets \(I_0\) and \(J_0\) are defined in (71). Then G(kn) factors as

$$\begin{aligned}&G(k; n) = G_0( l; n) \cdot G_0^+( l ; n), \qquad l := k - \lceil r_0 n \rceil , \end{aligned}$$
(105a)
$$\begin{aligned}&G_0(l; n) := \dfrac{\prod _{i \in I_0} \varGamma ( \sigma _i \, l + {\bar{\alpha }}_i(n))}{\prod _{j \in J_0} \varGamma ( \tau _j \, l + {\bar{\beta }}_j(n))}, \quad G_0^+(l; n) := \dfrac{\prod _{i \in I_0^+} \varGamma ( \sigma _i \, l + \bar{\lambda }_i \, n \!+\! {\bar{\alpha }}_i(n))}{\prod _{j \in J_0^+} \varGamma ( \tau _j \, l \!+\! \bar{\mu }_j \, n \!+\! {\bar{\beta }}_j(n))}. \end{aligned}$$
(105b)

From Lemma 5.1 one has \(\sigma _i > 0\) for \(i \in I_0\) and \(\tau _j > 0\) for \(j \in J_0\), whereas condition (68) at \(x = r_0\) implies that \(( \bar{\lambda }_i )_{i \in I_0^+}\) and \(( \bar{\mu }_j )_{j \in J_0^+}\) are balanced to the effect that

$$\begin{aligned} \sum _{i \in I_0^+} \bar{\lambda }_i = \sum _{j \in J_0^+} \bar{\mu }_j. \end{aligned}$$
(106)

However, since \((\sigma _i)_{i \in I_0}\) and \((\tau _j)_{j \in J_0}\), resp. \((\sigma _i)_{i \in I_0^+}\) and \((\tau _j)_{j \in J_0^+}\), may not be balanced, we put

$$\begin{aligned} \rho _0 := \sum _{i \in I_0} \sigma _i - \sum _{j \in J_0} \tau _j, \qquad \rho _0^+ := \sum _{i \in I_0^+} \sigma _i - \sum _{j \in J_0^+} \tau _j, \qquad \rho _0 = - \rho _0^+, \end{aligned}$$
(107)

where the relation \(\rho _0 = - \rho _0^+\) follows from the first condition of (62).

We begin by giving an asymptotic behavior of \(G_0(l; n)\) as \(l \rightarrow \infty \) in terms of

$$\begin{aligned} \Phi _0&:= e^{-\rho _0} \prod _{i \in I_0} \sigma _i^{\sigma _i} \prod _{j \in J_0} \tau _j^{-\tau _j}, \\ u_0(n)&:= (2 \pi )^{\frac{|I_0|-|J_0|}{2}} \prod _{i \in I_0} \sigma _i^{{\bar{\alpha }}_i(n) - \frac{1}{2} } \prod _{j \in J_0} \tau _j^{ \frac{1}{2} - {\bar{\beta }}_j(n)}, \\ \gamma _0(n)&:= \sum _{i \in I_0} {\bar{\alpha }}_i(n) - \sum _{j \in J_0} {\bar{\beta }}_j(n) + \frac{|J_0|-|I_0|}{2}. \end{aligned}$$

Note that \(\Phi _0\) is positive and \(u_0(n)\) is non-zero due to the positivity of \(\sigma _i\) and \(\tau _j\) for \(i \in I_0\) and \(j \in J_0\). We use the following general fact about the gamma function.

Lemma 5.9

For any \(z \in \mathbb {C}\setminus \mathbb {Z}_{\le 0}\) and any integer m such that \(m \ge 1+ |\mathrm {Re}\, z|\),

$$\begin{aligned} |\varGamma (z)| \le \dfrac{2 |\varGamma (z+m)|}{\mathrm {dist}(z, \, \mathbb {Z}_{\le 0})}. \end{aligned}$$

Proof

If \(\mathrm {Re}\, z > 0\) we have \(\mathrm {dist}(z, \, \mathbb {Z}_{\le 0}) = |z|\) and the result follows readily. If \(\mathrm {Re}\, z \le 0\) then \(\mathrm {Re}(z+m) \ge 1\) and so the sequence \(|z|, |z+1|, \cdots , |z+m-1|\) contains \(\mathrm {dist}(z, \mathbb {Z}_{\le 0})\) as its minimum with the next smallest \(\ge 1/2\) and all the rest \(\ge 1\), thus \(|(z; \, m)| = |z||z+1|\cdots |z+m-1| \ge \mathrm {dist}(z, \mathbb {Z}_{\le 0})/2\), hence \(|\varGamma (z)| = |\varGamma (z+m)/(z; \, m)| \le 2|\varGamma (z+m)|/\mathrm {dist}(z, \mathbb {Z}_{\le 0})\). \(\square \)
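A numerical check of Lemma 5.9 (a sketch of ours, assuming the mpmath library is available; the sample z is arbitrary):

```python
import mpmath as mp

def dist_to_nonpos_ints(z):
    # distance from z to Z_{<=0}: only integers near Re(z), clipped at 0, can be nearest
    k = min(0, int(mp.nint(mp.re(z))))
    return min(abs(z - k), abs(z - min(0, k - 1)), abs(z - min(0, k + 1)))

z = mp.mpc(-3.4, 0.2)                     # any z outside Z_{<=0}
m = int(mp.ceil(1 + abs(mp.re(z))))       # an integer m >= 1 + |Re z|
lhs = abs(mp.gamma(z))
rhs = 2 * abs(mp.gamma(z + m)) / dist_to_nonpos_ints(z)
print(lhs, rhs, lhs <= rhs)               # the inequality of Lemma 5.9 holds
```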

Lemma 5.10

There exists a constant \(K_0 > 0\) such that

$$\begin{aligned} |G_0(l; n)| \le K_0 \cdot \delta _0(n)^{-1} \cdot (1+ l)^{|\mathrm {Re}\, \gamma _0(n)|} \cdot l^{\rho _0 \, l} \cdot \Phi _0^{l}, \qquad {}^{\forall } l \in \mathbb {Z}_{\ge 0}, \,\, {}^{\forall } n \in \mathbb {N},\nonumber \\ \end{aligned}$$
(108)

where for \(l = 0\) the convention \(l^{\rho _0 \, l} = 1\) is employed.

Proof

Note that \(G_0(l; n)\) in (105b) takes a finite value for every \(l \ge \kappa := \max _{i \in I_0} (R+1)/\sigma _i\) and \(n \in \mathbb {N}\), since (63) implies that \(\sigma _i l + \mathrm {Re}\, {\bar{\alpha }}_i(n) \ge \sigma _i l + \mathrm {Re}\, \alpha _i(n) \ge \sigma _i l - R \ge 1\) for \(i \in I_0\). By Stirling’s formula (90), we have \(G_0(l; n) = u_0(n) \cdot l^{\gamma _0(n) + \rho _0 \, l} \cdot \Phi _0^{l} \cdot \{1+ O(1/l) \}\) as \(l \rightarrow +\infty \) uniformly with respect to \(n \in \mathbb {N}\). Thus there exists a constant \(M_0 > 0\) such that

$$\begin{aligned} |G_0(l; n)| \le M_0 \cdot (1+ l)^{| \mathrm {Re}\, \gamma _0(n) |} \cdot l^{\rho _0 \, l} \cdot \Phi _0^{l}, \qquad {}^{\forall } l \ge \kappa , \,\, {}^{\forall } n \in \mathbb {N}. \end{aligned}$$

Take the smallest integer \(m \ge \max _{i \in I_0} \{ 1 + \sigma _i (\kappa +1) + R \}\) and put

$$\begin{aligned} {\bar{G}}_0(l; n) := \dfrac{\prod _{i \in I_0} \varGamma (\sigma _i l + {\bar{\alpha }}_i(n) + m)}{ \prod _{j \in J_0} \varGamma (\tau _j l + {\bar{\beta }}_j(n)) }. \end{aligned}$$

Since \(1+ |\sigma _i l + \mathrm {Re}\, {\bar{\alpha }}_i(n)| \le 1 + \sigma _i l + |\mathrm {Re}\, {\bar{\alpha }}_i(n)| \le 1 + \sigma _i l + |\mathrm {Re}\, \alpha _i(n)| + \sigma _i \le m\) for any \(0 \le l < \kappa \), \(n \in \mathbb {N}\) and \(i \in I_0\), Lemma 5.9 implies that for any \(0 \le l < \kappa \) and \(n \in \mathbb {N}\),

$$\begin{aligned} |G_0(l; n)|&\le \dfrac{ 2^{|I_0|} \cdot |{\bar{G}}_0(l; n)|}{ \prod _{i \in I_0} \mathrm {dist}(\sigma _i l + {\bar{\alpha }}_i(n), \, \mathbb {Z}_{\le 0})} \le \dfrac{ 2^{|I_0|} \cdot |{\bar{G}}_0(l; n)|}{ \prod _{i \in I_0} \mathrm {dist}({\bar{\alpha }}_i(n), \, \mathbb {Z}_{\le 0} + |\sigma _i| \mathbb {Z}_{\le 0})} \\&\le \dfrac{ 2^{|I_0|} \cdot |{\bar{G}}_0(l; n)| }{ \delta _0(n) }. \end{aligned}$$

In view of condition (74), there exists a constant \(M_0' > 0\) such that

$$\begin{aligned} 2^{|I_0|} \cdot | {\bar{G}}_0(l; n) | \le M_0' \cdot (1+ l)^{|\mathrm {Re}\, \gamma _0(n)|} \cdot l^{\rho _0 l} \cdot \Phi _0^l, \qquad 0 \le {}^{\forall } l < \kappa , \,\, {}^{\forall } n \in \mathbb {N}. \end{aligned}$$

Then by \(1 \le \delta _0(n)^{-1}\) the estimate (108) holds with the constant \(K_0 := \max \{ M_0, \, M_0' \}\).

\(\square \)

We proceed to investigate \(G_0^+(l; n)\) by writing

$$\begin{aligned} G_0^+(l; n) = H_0^+ \left( l/n ; n \right) , \qquad H_0^+(x; n) := \dfrac{\prod _{i \in I_0^+} \varGamma ( {\bar{l}}_i(x) \, n + {\bar{\alpha }}_i(n))}{\prod _{j \in J_0^+} \varGamma ( {\bar{m}}_j(x) \, n + {\bar{\beta }}_j(n))}, \end{aligned}$$
(109)

where \({\bar{l}}_i(x) := \sigma _i x + \bar{\lambda }_i\) and \({\bar{m}}_j(x) := \tau _j x + \bar{\mu }_j\), and then by putting

$$\begin{aligned} \Phi _0^+(x)&:= e^{- \rho _0^+ x} \prod _{i \in I_0^+} {\bar{l}}_i(x)^{{\bar{l}}_i(x)} \prod _{j \in J_0^+} {\bar{m}}_j(x)^{-{\bar{m}}_j(x)}, \\ u_0^+(x; n)&:= (2 \pi )^{\frac{|I_0^+|-|J_0^+|}{2}} \prod _{i \in I_0^+} {\bar{l}}_i(x)^{{\bar{\alpha }}_i(n) - \frac{1}{2} } \prod _{j \in J_0^+} {\bar{m}}_j(x)^{ \frac{1}{2} - {\bar{\beta }}_j(n)}, \\ \gamma _0^+(n)&:= \sum _{i \in I_0^+} {\bar{\alpha }}_i(n) - \sum _{j \in J_0^+} {\bar{\beta }}_j(n) + \frac{|J_0^+|-|I_0^+|}{2}. \end{aligned}$$

Note that \(\Phi _0^+(x)\) and \(u_0^+(x; n)\) are well-defined continuous functions on \([0, \, \varepsilon ]\), with \(\Phi _0^+(x)\) positive and \(u_0^+(x; n)\) non-vanishing and uniformly bounded in \(n \in \mathbb {N}\).

Lemma 5.11

For any \(0< \varepsilon < \varepsilon _0\), there exist an integer \(N_0(\varepsilon ) \in \mathbb {N}\) and a constant \(K_0^+(\varepsilon ) > 0\) such that for any \(n \ge N_0(\varepsilon )\) and \(0 \le x \le \varepsilon \),

$$\begin{aligned} |H_0^+(x; n)| \le K_0^+(\varepsilon ) \cdot n^{ \mathrm {Re}\, \gamma _0^+(n) + \rho _0^+ x \, n} \cdot \Psi _0^+(\varepsilon )^n \qquad \hbox {with} \quad \Psi _0^+(\varepsilon ) := \max _{0 \le x \le \varepsilon } \Phi _0^+(x). \end{aligned}$$
(110)

Proof

From the definitions of \(I_0^+\), \(J_0^+\), \({\bar{l}}_i(x)\), \({\bar{m}}_j(x)\), there is a constant \(c(\varepsilon ) > 0\) such that

$$\begin{aligned} {\bar{l}}_i(x)> c(\varepsilon ) \quad (i \in I_0^+), \qquad {\bar{m}}_j(x) > c(\varepsilon ) \quad (j \in J_0^+), \qquad 0 \le {}^{\forall } x \le \varepsilon . \end{aligned}$$

By condition (63), \(H_0^+(x; n)\) takes a finite value for any \(x \in [0, \, \varepsilon ]\) and \(n \ge N_0(\varepsilon ) := (R+1)/c(\varepsilon )\) and Stirling’s formula (90) implies that \(H_0^+(x; n)\) admits an asymptotic formula

$$\begin{aligned} H_0^+(x; n) = u_0^+(x; n) \cdot n^{\gamma _0^+(n) + \rho _0^+ x \, n} \cdot \Phi _0^+(x)^n \cdot \{ 1+ O(1/n) \} \qquad \hbox {as} \quad n \rightarrow + \infty , \end{aligned}$$

uniform in \(x \in [0, \, \varepsilon ]\), where one also uses the equality \(\sum _{i \in I_0^+} {\bar{l}}_i(x) - \sum _{j \in J_0^+} {\bar{m}}_j(x) = \rho _0^+ \, x\), which is due to balancedness condition (106) and definition (107). From this estimate and the boundedness of \(u_0^+(x; n)\) coming from (63), the assertion (110) follows readily. \(\square \)

Now we are able to give an estimate for the left-end component \(g_0(n)\) in terms of

$$\begin{aligned} \varepsilon _3&:= {\left\{ \begin{array}{ll} + \infty &{} (\hbox {if } \ \rho _0 \ge 0), \\ e^{-1} \Phi _0^{-1/\rho _0} &{} (\hbox {if } \ \rho _0 < 0), \end{array}\right. } \end{aligned}$$
(111a)
$$\begin{aligned} \Psi _0(\varepsilon )&:= {\left\{ \begin{array}{ll} 1 &{} (\hbox {if }\rho _0> 0, \hbox { or } \rho _0 = 0 \hbox { with } \Phi _0 \le 1), \\ (\varepsilon ^{\rho _0} \Phi _0)^{\varepsilon } \,\,\,\, &{} (\hbox {if }\rho _0 < 0, \hbox { or } \rho _0 = 0 \hbox { with } \Phi _0 > 1), \end{array}\right. } \end{aligned}$$
(111b)
$$\begin{aligned} \varDelta _0(\varepsilon )&:= \Psi _0(\varepsilon ) \cdot \Psi _0^+(\varepsilon ). \end{aligned}$$
(111c)

Lemma 5.12

For any \(0< \varepsilon < \varepsilon _4 := \min \{ \varepsilon _0, \, \varepsilon _2, \, \varepsilon _3 \}\) with \(\varepsilon _0\), \(\varepsilon _2\), and \(\varepsilon _3\) defined in (89), Lemma 5.6 and (111a), respectively, there exist \(N_0(\varepsilon ) \in \mathbb {N}\) and \(K_0(\varepsilon ) > 0\) such that

$$\begin{aligned} |g_0(n)| \le K_0(\varepsilon ) \cdot \delta _0(n)^{-1} \cdot n^{ |\mathrm {Re}\, \gamma _0(n)| + \mathrm {Re}\, \gamma _0^+(n) +1 } \cdot \varDelta _0(\varepsilon )^n, \qquad {}^{\forall } n \ge N_0(\varepsilon ). \end{aligned}$$

Proof

It follows from Formulas (109), (110), and (107) that

$$\begin{aligned} |G_0^+(l; n)| = |H_0^+( l/n; n ) | \le K_0^+(\varepsilon ) \cdot n^{\mathrm {Re}\, \gamma _0^+(n)} \cdot n^{- \rho _0 l} \cdot \Psi _0^+(\varepsilon )^n. \end{aligned}$$

Multiplying this estimate by inequality (108), we have from Formula (105a),

$$\begin{aligned} |G(k; n)| \le K(\varepsilon ) \cdot \delta _0(n)^{-1} \cdot n^{\mathrm {Re}\, \gamma _0^+(n)} \cdot \varphi (l; n) \cdot \Psi _0^+(\varepsilon )^n \cdot (1+l)^{|\mathrm {Re}\, \gamma _0(n)|}, \end{aligned}$$
(112)

for any \(n \ge N_0(\varepsilon )\) and \(0 \le l := k - \lceil r_0 n \rceil < \varepsilon n\), where \(K(\varepsilon ) := K_0 \cdot K_0^+(\varepsilon )\) and

$$\begin{aligned} \varphi ( t; n) := (t/n)^{\rho _0 t} \cdot \Phi _0^t \quad (t > 0) \quad \hbox {with} \quad \varphi (0; n) = \lim _{t \rightarrow +0} \varphi (t; n) = 1. \end{aligned}$$
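Indeed, writing \(\log \varphi (t; n) = \rho _0 \, t \log (t/n) + t \log \Phi _0\), one finds

$$\begin{aligned} \frac{d}{d t} \log \varphi (t; n) = \rho _0 \log (t/n) + \rho _0 + \log \Phi _0, \end{aligned}$$

which, for \(\rho _0 < 0\), vanishes exactly at \(t = e^{-1} \Phi _0^{-1/\rho _0} \, n = \varepsilon _3 \, n\); this is the origin of the threshold (111a).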

A bit of differential calculus shows the following:

(i) If either \(\rho _0 > 0\) or \(\rho _0 = 0\) with \(\Phi _0 \le 1\), then \(\varphi (t; n)\) is non-increasing in \(t \ge 0\) and hence \(\varphi (t; n) \le \varphi (0; n) = 1 = \Psi _0(\varepsilon )^n\) for any \(t \ge 0\).

(ii) If either \(\rho _0 < 0\) or \(\rho _0 = 0\) with \(\Phi _0 > 1\), then \(\frac{d}{d t} \varphi (t; n) \ge 0\) in \(0 \le t \le \varepsilon _3 \, n\) with equality only when \(t = \varepsilon _3 \, n\), so that \(\varphi (t; n) \le \varphi (\varepsilon n; n) = (\varepsilon ^{\rho _0} \Phi _0)^{\varepsilon n} = \Psi _0(\varepsilon )^n\) for any \(0 \le t \le \varepsilon n\) \((< \varepsilon _3 \, n)\), where \(\varepsilon _3\) and \(\Psi _0(\varepsilon )\) are defined in (111a) and (111b), respectively.

In either case \(0 < \varphi (t; n) \le \Psi _0(\varepsilon )^n\) for any \(0 \le t < \varepsilon n\) and thus (112) and (111c) lead to

$$\begin{aligned} |G(k; n)| \le K(\varepsilon ) \cdot \delta _0(n)^{-1}\cdot n^{\mathrm {Re}\, \gamma _0^+(n)} \cdot \varDelta _0(\varepsilon )^n \cdot (1+l)^{|\mathrm {Re}\, \gamma _0(n)|}, \end{aligned}$$
(113)

for any \(n \ge N_0(\varepsilon )\) and \(0 \le l := k - \lceil r_0 n \rceil < \varepsilon n\). Since

$$\begin{aligned} \sum _{0 \le l < \varepsilon n} (1+ l)^{|\mathrm {Re}\, \gamma _0(n)|}&\le \int _0^{\varepsilon n+1} (1+t)^{|\mathrm {Re}\, \gamma _0(n)|} \, d t \le \frac{(\varepsilon n + 2)^{|\mathrm {Re}\, \gamma _0(n)|+1}}{|\mathrm {Re}\, \gamma _0(n)|+1} \\&\le \{(2 + \varepsilon ) \cdot n\}^{|\mathrm {Re}\, \gamma _0(n)|+1}, \end{aligned}$$

summing up (113) over the integers \(0 \le l \le \lceil (r_0+\varepsilon ) n \rceil - \lceil r_0 n \rceil -1\) \((< \varepsilon n)\) yields

$$\begin{aligned} |g_0(n)| \le K(\varepsilon ) \cdot \delta _0(n)^{-1} \cdot n^{\mathrm {Re}\, \gamma _0^+(n)} \cdot \varDelta _0(\varepsilon )^n \cdot \{ (2+\varepsilon ) \cdot n \}^{|\mathrm {Re}\, \gamma _0(n)|+1}. \end{aligned}$$

Since \(\gamma _0(n)\) is bounded by condition (63), we can take a constant \(K_0(\varepsilon ) \ge K(\varepsilon ) \cdot (2 + \varepsilon )^{|\mathrm {Re}\, \gamma _0(n)|+1}\) to establish the lemma. \(\square \)

Proposition 5.13

For any \(d > \Phi (r_0)\), there exists a positive constant \(\varepsilon _5 \le \varepsilon _4\) such that

$$\begin{aligned} |g_0(n)| \le M_0(d, \varepsilon ) \cdot \delta _0(n)^{-1} \cdot d^n, \qquad {}^{\forall } n \ge N_0(\varepsilon ), \,\, 0 < {}^{\forall } \varepsilon \le \varepsilon _5, \end{aligned}$$
(114)

for some \(M_0(d, \varepsilon ) > 0\) and \(N_0(\varepsilon ) \in \mathbb {N}\) independent of d, where \(\Phi (x)\) is defined in (75a) and \(\varepsilon _4\) is given in Lemma 5.12. When \(r_1 < + \infty \), a similar statement can be made for the right-end component \(g_1(n)\) in (88); for any \(d > \Phi (r_1)\) there exists a sufficiently small \(\varepsilon _6 > 0\) such that

$$\begin{aligned} |g_1(n)| \le M_1(d, \varepsilon ) \cdot \delta _1(n)^{-1} \cdot d^n, \qquad {}^{\forall } n \ge N_1(\varepsilon ), \,\, 0 < {}^{\forall } \varepsilon \le \varepsilon _6. \end{aligned}$$

Proof

We show the assertion for the left-end component \(g_0(n)\) only, as the right-end counterpart follows by reflectional symmetry (65). Observe that \(\Psi _0(\varepsilon ) \rightarrow 1\), \(\Psi _0^+(\varepsilon ) \rightarrow \Phi (r_0)\), and so \(\varDelta _0(\varepsilon ) \rightarrow \Phi (r_0)\) as \(\varepsilon \rightarrow +0\). Thus given \(d > \Phi (r_0)\) there is a constant \(0< \varepsilon _5 < \varepsilon _4\) such that \(d > \varDelta _0(\varepsilon )\) for any \(0 < \varepsilon \le \varepsilon _5\). Then Lemma 5.12 enables us to take a constant \(M_0(d, \varepsilon )\) as in (114). \(\square \)

Proof of Theorem 5.2

As is mentioned at the end of Sect. 5.2, only the singleton case \(\mathrm {M{\scriptstyle ax}}= \{ x_0 \}\) is treated for the sake of simplicity. We can take a number d so that \(\max \{\Phi (r_0), \, \Phi (r_1) \}< d < \Phi _{\scriptstyle \mathrm {max}}\), since \(\Phi (x)\) attains its maximum only at the interior point \(x_0 \in (r_0, \, r_1)\). For this d take the numbers \(\varepsilon _5\) and \(\varepsilon _6\) as in Proposition 5.13 and put \(\varepsilon := \min \{ \varepsilon _5, \, \varepsilon _6 \}\). For this \(\varepsilon \) consider the numbers \(\Phi _0^{\varepsilon }\) and \(\Phi _1^{\varepsilon }\) in Lemma 5.5, both of which are strictly smaller than \(\Phi _{\scriptstyle \mathrm {max}}\). Take a number \(d_0\) so that \(\max \{ d, \, \Phi _0^{\varepsilon }, \, \Phi _1^{\varepsilon } \}< d_0 < \Phi _{\scriptstyle \mathrm {max}}\) and put \(\lambda := \Phi _{\scriptstyle \mathrm {max}}/d_0 > 1\). Then the estimates in Propositions 5.8 and 5.13 and Lemma 5.5 are combined through the decomposition (88) to yield

$$\begin{aligned} g(n) = n^{\gamma (n) + \frac{1}{2} } \cdot \Phi _{\scriptstyle \mathrm {max}}^{\, n} \cdot \left\{ C(n) + \Omega (n) \right\} , \end{aligned}$$

where C(n) is defined in (86) and \(\Omega (n)\) admits the estimate (87). \(\square \)

Even without assuming properness (83) we have the following convenient proposition.

Proposition 5.14

Suppose that the sum g(n) in (64) satisfies balancedness (62), boundedness (63), admissibility (69), and genericness (74) along with continuity at infinity (78) and convergence (80) when \(r_1 = + \infty \). For any \(d > \Phi _{\scriptstyle \mathrm {max}}\), there exist \(K > 0\) and \(N \in \mathbb {N}\) such that

$$\begin{aligned} |g(n)| \le K \cdot d^n \cdot \{ \delta _0(n)^{-1} + \delta _1(n)^{-1} \}, \qquad {}^{\forall } n \ge N. \end{aligned}$$

Proof

Divide g(n) into three components: sums over \([r_0, \, r_0+ \varepsilon ]\), \([r_0+\varepsilon , \, r_1-\varepsilon ]\), and \([r_1-\varepsilon , \, r_1]\). Take \(\varepsilon > 0\) sufficiently small depending on how close d is to \(\Phi _{\scriptstyle \mathrm {max}}\). Apply Proposition 5.13 to the left and right components and then use Lemma 5.5 for the middle one. \(\square \)

5.6 A priori estimates

We present the a priori estimates used in Sect. 5.4. In what follows, we often use the inequality

$$\begin{aligned} |e^x -1 | \le |x| \, e^{|x|} \qquad (x \in \mathbb {R}). \end{aligned}$$
(115)
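This inequality follows from \(e^x - 1 = \int _0^x e^t \, dt\); a brute-force numerical check (our own) is immediate:

```python
import math, random

for _ in range(100_000):
    x = random.uniform(-10.0, 10.0)
    assert abs(math.exp(x) - 1) <= abs(x) * math.exp(abs(x)) + 1e-12
```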

Given a positive constant a, we consider the function \(\varphi (x; a) := e^{-a x^2}\).

Lemma 5.15

If x, \(y \in \mathbb {R}\), and \(| y-x | \le 1\), then

$$\begin{aligned} | \varphi ( y; a) - \varphi ( x; a ) | \le M_1( a ) \, |y-x| \, \varphi ( x; a/2 ), \end{aligned}$$
(116)

where \(M_1(a) := a \, \displaystyle \sup _{x \in \mathbb {R}} (2 |x| +1) e^{-\frac{a}{2} (x^2 - 4|x|-2)} < \infty \).

Proof

Put \(h := y-x\). It then follows from inequality (115) that

$$\begin{aligned} \begin{aligned} \left| e^{a x^2 - a (x+h)^2} -1 \right|&= \left| e^{-a h (2 x + h)} -1 \right| \le a \, |h||2 x + h| \, e^{a |h||2 x + h|} \\&\le a \, |h| \left( 2 |x| + |h| \right) \, e^{a |h|(2|x| + |h|)} \le a \, |h| \left( 2 |x| + 1 \right) \, e^{a (2|x| + 1) }, \end{aligned} \end{aligned}$$

whenever \(|h| \le 1\). Dividing both sides by \(e^{a x^2}\) we have

$$\begin{aligned} \begin{aligned} \left| e^{- a (x+h)^2} - e^{- a x^2} \right|&\le a \, |h| \left( 2 |x| + 1 \right) \, e^{-a (x^2 -2|x| - 1) } \\&= a \, \left( 2 |x| + 1 \right) \, e^{-\frac{a}{2} (x^2 -4|x| - 2) } \cdot e^{- \frac{a}{2} x^2} \, |h| \le M_1(a) \, e^{- \frac{a}{2} x^2} \, |h|, \end{aligned} \end{aligned}$$

which proves the lemma. \(\square \)

Let \(b >0\), \(m \ge 1\), \(0 < \varepsilon \le \frac{a}{4 b}\), and suppose that a function \(\delta (x)\) admits an estimate

$$\begin{aligned} | \delta ( x ) | \le \frac{b}{m} \, |x|^3 \qquad (|{}^{\forall }x \, | \le \varepsilon m). \end{aligned}$$
(117)

Lemma 5.16

Under condition (117), the function \(\psi (x; a) := e^{- a x^2 + \delta (x)}\) satisfies

$$\begin{aligned} | \psi (x; a) - \varphi (x; a) | \le \frac{b \, M_2(a)}{m} \, \varphi (x; a/2 ) \qquad (|{}^{\forall } x \,| \le \varepsilon m), \end{aligned}$$
(118)

where \(M_2(a) := \displaystyle \sup _{x \in \mathbb {R}} |x|^3 \, e^{- \frac{a}{4} \, x^2} < \infty \).

Proof

For \(|x| \le \varepsilon m\), we have

$$\begin{aligned} \left| e^{- a x^2 + \delta (x) } - e^{-a x^2} \right|&= e^{- a x^2} \left| e^{\delta (x)} -1 \right| \le | \delta (x) | \, e^{- a x^2 + | \delta (x) |} \qquad \qquad \hbox {by }(115), \\&\le \frac{b}{m} |x|^3 \, e^{-a x^2 + \frac{b}{m} |x|^3 } = \frac{b}{m} |x|^3 \, e^{-a x^2 \left( 1-\frac{b}{a m} |x| \right) } \qquad \qquad \hbox {by }(117), \\&\le \frac{b}{m} |x|^3 \, e^{-a x^2 \left( 1-\frac{b \varepsilon }{a} \right) } \qquad \qquad \hbox {by }|x| \le \varepsilon m, \\&\le \frac{b}{m} |x|^3 \, e^{-\frac{3 a}{4} x^2 } = \frac{b}{m} |x|^3 \, e^{-\frac{a}{4} x^2 } \cdot e^{-\frac{a}{2} x^2 } \qquad \qquad \hbox {by }0 < \varepsilon \le \frac{ a }{4 b }, \\&\le \frac{b M_2(a)}{m} \, e^{-\frac{a}{2} x^2 }, \end{aligned}$$

where the last inequality is by the definition of \(M_2(a)\). \(\square \)

Lemma 5.17

Under condition (117), if \(|x| \le \varepsilon m\), \(|y| \le \varepsilon m\), and \(|y-x| \le 1/m\), then

$$\begin{aligned} | \psi (y; a) - \varphi (x; a) | \le \frac{M_3(a, b)}{m} \, \varphi (x; a/4 ), \end{aligned}$$
(119)

where \(M_3(a, b) := M_1(a) + b M_2(a) + b M_1( a/2 ) M_2( a )\).

Proof

Putting \(y = x + h\) with \(|h| \le 1/m\), we have

$$\begin{aligned} | \psi&( y; a ) - \varphi ( x; a ) |&\\&\le | \psi ( y; a ) - \varphi ( y; a ) | + | \varphi ( y; a ) - \varphi ( x; a ) | \qquad \qquad \hbox {by t.i.,} \\&\le \textstyle \frac{b M_2(a)}{m} \, \varphi ( y; \frac{a}{2} ) + M_1(a) \, |h| \, \varphi ( x; \frac{a}{2} ) \qquad \qquad \hbox {by }(118) \hbox { and } (116), \\&\le \textstyle \frac{b M_2(a)}{m} \left\{ |\varphi ( y; \frac{a}{2} ) - \varphi ( x; \frac{a}{2} ) | + \varphi ( x; \frac{a}{2} ) \right\} + \textstyle \frac{M_1(a)}{m} \, \varphi ( x; \frac{a}{2} ) \qquad \hbox {by t.i. and }|h| \le \frac{1}{m}, \\&\le \textstyle \frac{b M_2(a)}{m} \left\{ M_1( \frac{a}{2} ) \, |h| \, \varphi ( x; \frac{a}{4} ) + \varphi ( x; \frac{a}{2} ) \right\} + \textstyle \frac{M_1(a)}{m} \, \varphi ( x; \frac{a}{2} ) \qquad \qquad \hbox {by }(116), \\&\le \textstyle \frac{b M_2(a)}{m} \left\{ M_1( \frac{a}{2} ) \, \varphi ( x; \frac{a}{4} ) + \varphi ( x; \frac{a}{2} ) \right\} + \textstyle \frac{M_1(a)}{m} \, \varphi ( x; \frac{a}{2} ) \qquad \qquad \hbox {by }|h| \le \frac{1}{m} \le 1, \\&\le \textstyle \frac{M_3(a, b )}{m} \, \varphi ( x; \frac{a}{4} ) \qquad \qquad \hbox {by }\varphi (x; \frac{a}{2}) \le \varphi (x; \frac{a}{4}), \end{aligned}$$

where t.i. refers to the triangle inequality. \(\square \)

Lemma 5.18

If x, \(y \in \mathbb {R}\) and \(| y-x | \le 1\), then \(\varphi _1(x; a) := |x| \, e^{-a x^2}\) satisfies

$$\begin{aligned} | \varphi _1( y; a) - \varphi _1( x; a ) | \le M_4( a ) \, |y-x| \, \varphi ( x; a/4 ), \end{aligned}$$
(120)

where \(M_4(a) := 1+ M_1(a) \cdot \displaystyle \max _{x \in \mathbb {R}} (|x| +1) e^{-\frac{a}{4} x^2} < \infty \).

Proof

Putting \(y = x + h\) with \(|h| \le 1\), one has

$$\begin{aligned} | \varphi _1&( x+h; a ) - \varphi _1( x; a ) | = ||x+h| \, \varphi (x+h; a) - |x| \, \varphi (x; a)| \\&\le |x+h| |\varphi (x+h; a) - \varphi (x; a) | + ||x+h|-|x|| \, \varphi (x; a) \quad \quad \hbox {by t.i.}, \\&\le (|x| + 1) \, M_1(a) \, |h| \, \varphi (x; a/2) + |h| \, \varphi (x; a) \quad \quad \hbox {by } |h| \le 1, \, (116) \hbox { and t.i.}, \\&= M_1(a) \cdot (|x|+1) e^{-\frac{a}{4} x^2} \, |h| \, \varphi (x; a/4) + |h| \, \varphi (x; a) \\&\le \{ 1 + M_1(a) \cdot (|x|+1) e^{-\frac{a}{4} x^2} \} |h| \, \varphi (x; a/4) \quad \hbox {by }\varphi (x; a) \le \varphi (x; a/4), \\&\le M_4(a) \, |h| \, \varphi (x; a/4). \end{aligned}$$

Thus estimate (120) has been proved. \(\square \)

6 Dominant sequences

Recall that the hypergeometric series \({}_3g_2({\varvec{a}})\) is defined in (25) and the subset \(\mathcal {S}(\mathbb {Z}) \subset \mathbb {Z}^5\) is defined in (31). In what follows, we fix any positive numbers R, \(\sigma > 0\) and let

$$\begin{aligned} \mathbb {A}(R, \sigma ) := \{\, {\varvec{a}}= (a_0, a_1, a_2; b_1, b_2) \in \mathbb {C}^5 \,:\, |\!| {\varvec{a}}|\!| \le R, \,\, \mathrm {Re}\, s({\varvec{a}}) > \sigma \, \}, \end{aligned}$$

where \(|\!| \cdot |\!|\) is the standard norm on \(\mathbb {C}^5\). As an application of Sect. 5 we shall show the following.

Theorem 6.1

If \(\varvec{p}= (p_0, p_1, p_2; q_1, q_2) \in \mathcal {S}(\mathbb {Z})\) is any vector satisfying either

$$\begin{aligned} (\mathrm {a}) \quad \varDelta (\varvec{p}) \le 0 \qquad \hbox {or} \qquad (\mathrm {b}) \quad 2 q_1^2 - 2 (p_1+p_2) q_1 + p_1 p_2 \ge 0, \end{aligned}$$
(121)

where \(\varDelta (\varvec{p})\) is the polynomial in (33), then \(|D(\varvec{p})| > 1\) and there exists an asymptotic formula

$$\begin{aligned} t({\varvec{a}}) \cdot {}_3g_2({\varvec{a}}+ n \varvec{p}) = B({\varvec{a}}; \varvec{p}) \cdot D(\varvec{p})^{n} \cdot n^{-s({\varvec{\scriptstyle a}}) -\frac{1}{2} } \, \left\{ 1+ O(n^{-\frac{1}{2} }) \right\} \qquad \hbox {as} \quad n \rightarrow +\infty , \end{aligned}$$

uniformly valid with respect to \({\varvec{a}}\in \mathbb {A}(R, \sigma )\), where \(D(\varvec{p})\) is defined in (32) and

$$\begin{aligned} t({\varvec{a}}) := & {} \sin \pi (b_1-a_0) \cdot \sin \pi (b_2 - a_0), \nonumber \\ B({\varvec{a}}; \varvec{p}) := & {} \dfrac{\pi ^{ \frac{1}{2} } \cdot p_0^{a_0- \frac{1}{2} } p_1^{a_1- \frac{1}{2} } p_2^{a_2- \frac{1}{2} } \cdot s_2(\varvec{p})^{s({\varvec{\scriptstyle a}}) - 1}}{2^{ \frac{3}{2} } \prod _{i=0}^2 \prod _{j=1}^2 (q_j-p_i)^{b_j-a_i- \frac{1}{2} }}, \end{aligned}$$
(122)

with \(s_2(\varvec{p}) := p_0 p_1 + p_1 p_2 + p_2 p_0 - q_1 q_2\) as in Theorem 4.3.

Remark 6.2

Conditions (30) and (121) are invariant under multiplication of \(\varvec{p}\) by any positive scalar. This homogeneity allows one to restrict \(\mathcal {S}(\mathbb {R})\) to \(\mathcal {S}_1(\mathbb {R}) := \mathcal {S}(\mathbb {R}) \cap \{ q_1 = 1\}\), which is a 3-dimensional solid tetrahedron. A numerical integration shows that the domain in \(\mathcal {S}_1(\mathbb {R})\) bounded by inequalities (121) occupies some 43% of the whole \(\mathcal {S}_1(\mathbb {R})\) by volume. Thus we may say that about 43% of the vectors in \(\mathcal {S}(\mathbb {Z})\) satisfy condition (121).
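The 43% figure can be reproduced approximately by a crude Monte Carlo computation. The sketch below is ours, not the authors': it encodes \(\mathcal {S}_1(\mathbb {R})\), under our reading of (30) and (31) (both lie outside this excerpt), as the solid tetrahedron \(q_1 = 1\), \(0< p_1, p_2 \le p_0 < 1\), \(p_0+p_1+p_2 \ge 2\), with \(q_2 = p_0+p_1+p_2-1\) forced by \(s(\varvec{p}) = 0\), and it takes \(\varDelta (\varvec{p})\) to be the discriminant of the cubic \(\chi (x; \varvec{p})\) in (123) below; a positive normalization factor in (33) would not affect the sign test.

```python
import numpy as np

# Monte Carlo version of the volume computation in Remark 6.2: sample the
# tetrahedron S_1(R) uniformly and measure the fraction of points where
# condition (121a) or (121b) holds. Delta(p) is encoded as the discriminant
# of chi(x; p) = (x+A)(x+B)(x+C) + x(x+q1)(x+q1-q2) from (123).

rng = np.random.default_rng(1)
p = rng.uniform(0.0, 1.0, (10**6, 3))
p0, p1, p2 = p.T
keep = (p1 <= p0) & (p2 <= p0) & (p0 + p1 + p2 >= 2.0)   # the tetrahedron
p0, p1, p2 = p0[keep], p1[keep], p2[keep]
q1, q2 = 1.0, p0 + p1 + p2 - 1.0

A, B, C = q1 - p0, q1 - p1, q1 - p2
a = 2.0 * np.ones_like(p0)                               # cubic coefficients of chi
b = A + B + C + 2.0 * q1 - q2
c = A * B + A * C + B * C + q1 * (q1 - q2)
d = A * B * C
disc = 18*a*b*c*d - 4*b**3*d + b**2*c**2 - 4*a*c**3 - 27*a**2*d**2

cond_a = disc <= 0.0                                     # case (a) of (121)
cond_b = 2*q1**2 - 2*(p1 + p2)*q1 + p1*p2 >= 0.0         # case (b) of (121)
print(f"fraction satisfying (121): {np.mean(cond_a | cond_b):.3f}")
```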

By the definition of \({}_3g_2({\varvec{a}})\) one can write \(g(n) := {}_3g_2({\varvec{a}}+ n \varvec{p}) = \sum _{k=0}^{\infty } \varphi (k; n)\) with

$$\begin{aligned}&\varphi (k; n) \\&\quad := \frac{\varGamma (k + p_0 n + a_0) \varGamma (k-(q_1-p_0) n+a_0-b_1+1) \varGamma (k-(q_2-p_0) n+a_0-b_2+1)}{\varGamma (k+1) \varGamma (k+(p_0-p_1) n+a_0-a_1+1) \varGamma (k+(p_0-p_2) n+ a_0-a_2+1)}. \end{aligned}$$

We remark that the current g(n) corresponds to the sequence \(g_0(n)\) in (26a), not to g(n) in (26b). In general a gamma factor \(\varGamma (\sigma k + \lambda n + \alpha )\) is said to be positive resp. negative on an interval of k, if \(\sigma k + \lambda n\) is positive resp. negative whenever k lies in that interval. Since \(\varvec{p}\in \mathcal {S}(\mathbb {Z})\), all three lower gamma factors and one upper gamma factor of \(\varphi (k; n)\) are positive for \(k > 0\), while the remaining two upper factors change their signs as k crosses \((q_1-p_0) n\) or \((q_2-p_0) n\), respectively. Thus it is natural to make a decomposition \(g(n) = g_1(n) + g_2(n) + g_3(n)\) with

$$\begin{aligned}&g_1(n) := \sum _{k=0}^{(q_1-p_0) n-1} \varphi (k; n), \quad g_2(n) :=\sum _{k=(q_1-p_0) n}^{(q_2-p_0) n-1} \varphi (k; n), \\&\quad g_3(n) :=\sum _{k=(q_2-p_0) n}^{\infty } \varphi (k; n), \end{aligned}$$

where \(g_2(n)\) is understood to be an empty sum if \(q_1 = q_2\); accordingly, we always assume \(q_1 < q_2\) when discussing \(g_2(n)\). It turns out that the first component \(g_1(n)\) is the most dominant among the three, yielding the leading asymptotics for g(n). The proof of Theorem 6.1 is completed at the end of Sect. 6.3.
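Before the componentwise analysis, Theorem 6.1 can be probed numerically by summing \(\varphi (k; n)\) directly. The following minimal sketch (ours) transcribes \(B({\varvec{a}}; \varvec{p})\) from (122) and computes \(|D(\varvec{p})|\) as \((\Phi _1)_{\mathrm {max}}\) via the product formula appearing in the proof of Lemma 6.3 below; one checks that the sample shift vector \(\varvec{p}= (5,4,4;6,7)\) lies in \(\mathcal {S}(\mathbb {Z})\) and satisfies case (a) of (121) (cf. the illustration after the proof of Lemma 6.6 below). Since the error term is only \(O(n^{-1/2})\), the printed ratio approaches 1 rather slowly.

```python
from mpmath import mp, mpf, gamma, sin, pi, sqrt

# Direct check of Theorem 6.1 for p = (5, 4, 4; 6, 7) and a sample vector a
# with Re s(a) > 0: sum phi(k; n) as displayed above and compare
# |t(a) g(n)| against |B(a; p)| |D(p)|^n n^(-s(a)-1/2); D(p) is used in
# absolute value, so the sign convention of (32) does not enter.

mp.dps = 40
a0, a1, a2, b1, b2 = map(mpf, ("0.3", "0.4", "0.5", "1.1", "1.2"))
p0, p1, p2, q1, q2 = 5, 4, 4, 6, 7
s = b1 + b2 - a0 - a1 - a2                        # s(a) = 1.1
s2 = p0*p1 + p1*p2 + p2*p0 - q1*q2                # s_2(p) = 14

def phi(k, n):                                    # the summand of g(n)
    num = gamma(k + p0*n + a0) * gamma(k - (q1 - p0)*n + a0 - b1 + 1) \
        * gamma(k - (q2 - p0)*n + a0 - b2 + 1)
    den = gamma(k + 1) * gamma(k + (p0 - p1)*n + a0 - a1 + 1) \
        * gamma(k + (p0 - p2)*n + a0 - a2 + 1)
    return num / den

t = sin(pi*(b1 - a0)) * sin(pi*(b2 - a0))
pairs = [(q, pp, bb, aa) for q, bb in ((q1, b1), (q2, b2))
         for pp, aa in ((p0, a0), (p1, a1), (p2, a2))]
D = mpf(p0)**p0 * mpf(p1)**p1 * mpf(p2)**p2       # |D(p)| = (Phi_1)_max
for q, pp, _, _ in pairs:
    D /= mpf(q - pp)**(q - pp)
B = sqrt(pi) * mpf(p0)**(a0 - 0.5) * mpf(p1)**(a1 - 0.5) \
    * mpf(p2)**(a2 - 0.5) * mpf(s2)**(s - 1) / mpf(2)**1.5
for q, pp, bb, aa in pairs:
    B /= mpf(q - pp)**(bb - aa - 0.5)

for n in (4, 8, 16, 32):
    g = sum(phi(k, n) for k in range((q2 - p0)*n + 2000))
    ratio = abs(t * g) / (abs(B) * D**n * mpf(n)**(-s - 0.5))
    print(n, mp.nstr(ratio, 6))                   # should drift toward 1
```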

6.1 First component

For the first component \(g_1(n)\), applying Euler’s reflection formula for the gamma function to the two negative gamma factors in the numerator of \(\varphi (k; n)\), we have

$$\begin{aligned} t({\varvec{a}}) \cdot g_1(n)&= \pi ^2 \cdot (-1)^{(q_1 + q_2) n} \cdot G_1(n)\\&\quad \hbox {with} \quad G_1(n) := \sum _{k=0}^{L_1 n -1} \frac{\varGamma (\sigma _1 k + \lambda _1 n + \alpha _1)}{\prod _{j=1}^5 \varGamma (\tau _j k + \mu _j n + \beta _j)}, \end{aligned}$$

where \(L_1 = q_1-p_0\), \(\sigma _1 = 1\), \(\lambda _1 = p_0\), \(\alpha _1 = a_0\), and

$$\begin{aligned} \tau _1&= 1,&\tau _2&= 1,&\tau _3&= 1,&\tau _4&= -1,&\tau _5&= -1, \\ \mu _1&= 0,&\mu _2&= p_0-p_1,&\mu _3&= p_0-p_2,&\mu _4&= q_1-p_0 = L_1,&\mu _5&= q_2-p_0, \\ \beta _1&= 1,&\beta _2&= a_0-a_1+1,&\beta _3&= a_0-a_2+1,&\beta _4&= b_1-a_0,&\beta _5&= b_2-a_0. \end{aligned}$$

Under the assumption of Theorem 6.1, the sum \(G_1(n)\) satisfies all conditions in Theorem 5.2. Indeed, balancedness (62) follows from \(s(\varvec{p}) = 0\); boundedness (63) is trivial because \(\alpha _1\) and \(\beta _j\) are independent of n; admissibility (69) is fulfilled with \(r_0 = 0\) and \(r_1 = L_1\) due to condition (30); genericness (74) is trivial since \(I_0 \cup I_1 = \emptyset \) with \(J_0 = \{1\}\) and \(J_1 = \{4\}\) by inequalities in (30). To verify properness (83), notice that the characteristic equation (84) now reads

$$\begin{aligned} \chi _1(x) = \dfrac{x(x+p_0-p_1)(x+p_0-p_2)}{(-x+q_1-p_0)(-x+q_2-p_0)} - (x+p_0) = 0. \end{aligned}$$

Thanks to \(s(\varvec{p}) = 0\), this equation reduces to a linear equation in x having the unique root

$$\begin{aligned} x_0 = \frac{p_0(q_1-p_0)(q_2-p_0)}{p_1 p_2-(q_1-p_0)(q_2-p_0)} = \frac{p_0(q_1-p_0)(q_2-p_0)}{s_2(\varvec{p})}, \end{aligned}$$

where \(s(\varvec{p}) = 0\) again leads to \(s_2(\varvec{p}) = p_1 p_2-(q_1-p_0)(q_2-p_0)\), which together with (30) yields \(s_2(\varvec{p}) -p_0(q_2-p_0) = (q_1-p_1)(q_1-p_2) > 0\) and hence \(s_2(\varvec{p})> p_0(q_2-p_0) > 0\), that is,

$$\begin{aligned} 0< x_0 < L_1 = q_1 - p_0. \end{aligned}$$

If \(\phi _1(x)\) is the additive phase function for \(G_1(n)\) then it follows from (82b) and (30) that

$$\begin{aligned}&\phi _1''(x_0)\\&= \frac{1}{x_0} + \frac{1}{x_0+p_0-p_1} + \frac{1}{x_0+p_0-p_2} + \frac{1}{q_1-p_0-x_0} + \frac{1}{q_2-p_0-x_0} - \frac{1}{x_0+p_0} \\&= \frac{s_2(\varvec{p})^4}{p_0 p_1 p_2 \prod _{i=0}^2 \prod _{j=1}^2 (q_j-p_i)} > 0. \end{aligned}$$

Thus in the interval \(0< x < L_1\), the function \(\phi _1(x)\) has only one local and hence global minimum at \(x = x_0\), which is non-degenerate. Therefore properness (83) is satisfied with \(\mathrm {M{\scriptstyle ax}}= \{x_0 \}\) and hence Theorem 5.2 applies to the sum \(G_1(n)\).
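For a concrete check, take \(\varvec{p}= (2,2,2;3,3)\), the vector that reappears in the examples of Sect. 9: then \(s_2(\varvec{p}) = 3\), \(L_1 = 1\), and

$$\begin{aligned} x_0 = \frac{2 \cdot 1 \cdot 1}{3} = \frac{2}{3} \in (0, 1), \qquad \phi _1''(x_0) = \frac{3}{2} + \frac{3}{2} + \frac{3}{2} + 3 + 3 - \frac{3}{8} = \frac{81}{8} = \frac{s_2(\varvec{p})^4}{p_0 p_1 p_2 \prod _{i=0}^2 \prod _{j=1}^2 (q_j-p_i)}, \end{aligned}$$

confirming the closed form above in this instance.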

Lemma 6.3

For any \(\varvec{p}\in \mathcal {S}(\mathbb {Z})\), we have \(|D(\varvec{p})| > 1\) and an asymptotic representation

$$\begin{aligned} t({\varvec{a}}) \cdot g_1(n) = B({\varvec{a}}; \varvec{p}) \cdot D(\varvec{p})^{n} \cdot n^{-s({\varvec{\scriptstyle a}}) -\frac{1}{2} } \, \left\{ 1+ O(n^{-\frac{1}{2} }) \right\} \qquad \hbox {as} \quad n \rightarrow +\infty , \end{aligned}$$

uniform with respect to \({\varvec{a}}\in \mathbb {A}(R, \sigma )\), where \(D(\varvec{p})\), \(t({\varvec{a}})\), and \(B({\varvec{a}}; \varvec{p})\) are as in (32) and (122).

Proof

Substituting \(x = x_0\) into Formulas (75) and using \(s(\varvec{p}) = 0\) repeatedly, one has

$$\begin{aligned} (\Phi _1)_{\mathrm {max}}&= \Phi _1(x_0) = \dfrac{p_0^{p_0} p_1^{p_1} p_2^{p_2}}{\prod _{i=0}^2 \prod _{j=1}^2 (q_j - p_i)^{q_j - p_i}},\\ u_1(x_0)&= \dfrac{p_0^{a_0-1} p_1^{a_1-1} p_2^{a_2-1} s_2(\varvec{p})^{s({\varvec{\scriptstyle a}}) + 1}}{(2 \pi )^2 \prod _{i=0}^2 \prod _{j=1}^2 (q_j-p_i)^{b_j-a_i}}, \end{aligned}$$

while \(\gamma _1 := \gamma (n)\) in definition (81) now reads \(\gamma _1 = - s({\varvec{a}}) - 1\). Since \(\delta _0(n) = \delta _1(n) = 1\) in (87) by \(I_0 \cup I_1 = \emptyset \), Formula (85) in Theorem 5.2 implies that

$$\begin{aligned} G_1(n) = C_1 \cdot (\Phi _1)_{\mathrm {max}}^n \cdot n^{\gamma _1 + \frac{1}{2} } \cdot \left\{ 1 + O(n^{- \frac{1}{2} }) \right\} \qquad \hbox {as} \quad n \rightarrow + \infty , \end{aligned}$$

where Formula (86) allows one to calculate the constant \(C_1 := C(n)\) as

$$\begin{aligned} C_1 = \sqrt{2 \pi } \, \frac{u_1(x_0)}{\sqrt{\phi _1''(x_0)}} = \dfrac{p_0^{a_0- \frac{1}{2} } p_1^{a_1- \frac{1}{2} } p_2^{a_2- \frac{1}{2} } s_2(\varvec{p})^{s({\varvec{\scriptstyle a}}) - 1}}{(2 \pi )^{ \frac{3}{2} } \prod _{i=0}^2 \prod _{j=1}^2 (q_j-p_i)^{b_j-a_i- \frac{1}{2} }}. \end{aligned}$$

In view of the relation between \(G_1(n)\) and \(g_1(n)\), the above asymptotic formula for \(G_1(n)\) gives the one for \(g_1(n)\). Finally \(|D(\varvec{p})| > 1\) follows from Lemma 6.4. \(\square \)

Lemma 6.4

Under condition (30), one has \(|D(\varvec{p}) | = (\Phi _1)_{\mathrm {max}}> \Phi _1(0) > 1\).

Proof

First, \(|D(\varvec{p})| = (\Phi _1)_{\mathrm {max}}\) is obvious from the definition (32) of \(D(\varvec{p})\) and the expression for \((\Phi _1)_{\mathrm {max}}\), while \((\Phi _1)_{\mathrm {max}} > \Phi _1(0)\) is also clear from \(\mathrm {M{\scriptstyle ax}}= \{ x_0 \}\). Regarding \(\varvec{p}= (p_0,p_1,p_2; q_1, q_2)\) as real variables subject to the linear relation \(s(\varvec{p}) = 0\) and ranging over the closure of the domain (30), we shall find the minimum of

$$\begin{aligned} \Phi _1(0) = \frac{p_0^{p_0}}{(p_0-p_1)^{p_0-p_1} (p_0-p_2)^{p_0-p_2} (q_1-p_0)^{q_1-p_0}(q_2-p_0)^{q_2-p_0}}. \end{aligned}$$

For any fixed \((p_0, p_1, p_2)\), due to the constraint \(s(\varvec{p}) = 0\), one can think of \(\Phi _1(0)\) as a function of the single variable \(q_1\) in the interval \(p_0 \le q_1 \le p_1 + p_2\). Differentiation with respect to \(q_1\) shows that \(\Phi _1(0)\) attains its minimum (only) at the endpoints \(q_1 = p_0\), \(p_1+p_2\), whose value is

$$\begin{aligned} \Psi (p_0, p_1, p_2) := \frac{p_0^{p_0}}{(p_0-p_1)^{p_0-p_1} (p_0-p_2)^{p_0-p_2} (p_1+p_2-p_0)^{p_1+p_2-p_0}}. \end{aligned}$$

So \(\Phi _1(0) > \Psi (p_0, p_1, p_2)\) for any \(p_0< q_1 < p_1+p_2\). With a fixed \(p_0 > 0\) we think of \(\Psi (p_0, p_1, p_2)\) as a function of \((p_1, p_2)\) in the closed simplex \(p_0 \le p_1 + p_2\), \(p_1 \le p_0\), \(p_2 \le p_0\). It has a unique critical value \(\Psi (p_0, 2 p_0/3, 2 p_0/3) = 3^{p_0} > 1\) in the interior of the simplex, while on its boundary one has \(\Psi (p_0, \alpha , p_0) = \Psi (p_0, p_0, \alpha )= \Psi (p_0, \alpha , p_0-\alpha ) = p_0^{p_0} \alpha ^{-\alpha } (p_0-\alpha )^{\alpha -p_0} \ge 1\) for any \(0 \le \alpha \le p_0\). Therefore, we have \(\Phi _1(0) > \Psi (p_0, p_1, p_2) \ge 1\) under condition (30). \(\square \)
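As an illustration, for \(\varvec{p}= (2,2,2;3,3)\) one finds, with the convention \(0^0 = 1\) for the vanishing differences \(p_0-p_1 = p_0-p_2 = 0\),

$$\begin{aligned} (\Phi _1)_{\mathrm {max}} = \frac{2^2 \cdot 2^2 \cdot 2^2}{1^1 \cdots 1^1} = 64, \qquad \Phi _1(0) = \frac{2^2}{1^1 \cdot 1^1} = 4, \qquad \Psi (2, 2, 2) = \frac{2^2}{2^2} = 1, \end{aligned}$$

which exhibits the chain \(|D(\varvec{p})| = (\Phi _1)_{\mathrm {max}}> \Phi _1(0) > \Psi (p_0, p_1, p_2) \ge 1\) and matches the value \(D(\varvec{k}) = 2^6 = 64\) found for the same vector in Example 9.3.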

6.2 Second component

Taking the shift \(k \mapsto k + (q_1-p_0) n\) in \(\varphi (k; n)\) (see (66)) and applying the reflection formula to the unique negative gamma factor in the numerator of \(\varphi (k+(q_1-p_0) n; n)\), one has

$$\begin{aligned} g_2(n)&= \frac{\pi \cdot (-1)^{(q_2 - q_1) n} \, G_2(n) }{\sin \pi (b_2-a_0)}\\&\quad \hbox {with} \quad G_2(n) := \sum _{k=0}^{L_2 n -1} (-1)^k \frac{ \prod _{i=1}^2 \varGamma (\sigma _i k + \lambda _i n + \alpha _i)}{\prod _{j=1}^4 \varGamma (\tau _j k + \mu _j n + \beta _j)}, \end{aligned}$$

where \(L_2 = q_2-q_1 > 0\), \(\sigma _1 = \sigma _2 = 1\), \(\lambda _1 = q_1\), \(\lambda _2 = 0\), \(\alpha _1 = a_0\), \(\alpha _2 = a_0-b_1+1\), and

$$\begin{aligned} \tau _1&= 1, \quad&\tau _2&= 1, \quad&\tau _3&= 1, \quad&\tau _4&= -1, \\ \mu _1&= q_1-p_0, \quad&\mu _2&= q_1-p_1, \quad&\mu _3&= q_1-p_2, \quad&\mu _4&= q_2-q_1 = L_2, \\ \beta _1&= 1, \quad&\beta _2&= a_0-a_1+1, \quad&\beta _3&= a_0-a_2+1, \quad&\beta _4&= b_2-a_0. \end{aligned}$$

Rewriting \(k \mapsto 2 k\) or \(k \mapsto 2 k+1\) according as k is even or odd, we have a decomposition \(G_2(n) = G_{20}(n) - G_{21}(n) + H_2(n)\), where \(G_{2\nu }(n)\) is given by

$$\begin{aligned} G_{2\nu }(n) := \sum _{k=0}^{\lceil \frac{L_2}{2} n \rceil -1} \frac{ \prod _{i=1}^2 \varGamma (2 \sigma _i k + \lambda _i n + \alpha _i + \nu \, \sigma _i) }{\prod _{j=1}^4 \varGamma (2 \tau _j k + \mu _j n + \beta _j + \nu \, \tau _j )}, \qquad \nu = 0, 1, \end{aligned}$$

while if \(L_2\) or n is even then \(H_2(n) := 0\); otherwise, i.e., if \(L_2\) and n are both odd, then

$$\begin{aligned} H_2(n) := \dfrac{\prod _{i=1}^2 \varGamma ((\sigma _i L_2 + \lambda _i) n + \alpha _i) }{ \prod _{j=1}^4 \varGamma ( (\tau _j L_2 + \mu _j) n + \beta _j ) }. \end{aligned}$$

Obviously, \(G_{20}(n)\) and \(G_{21}(n)\) have the same multiplicative phase function, which we denote by \(\Phi _2(x)\). Let \(\phi _2(x) := - \log \Phi _2(x)\) be the associated additive phase function. In order to make the second component \(g_2(n)\) weaker than the first one \(g_1(n)\), we want to make \(\phi _2'(x) \ge 0\) or equivalently \(\chi _2(x) \ge 0\) for every \(0 \le x \le L_2/2\), where \(\chi _2(x)\) is the common characteristic function (84) for the sums \(G_{20}(n)\) and \(G_{21}(n)\), which is given by

$$\begin{aligned} \chi _2(x) = \frac{(2 x + \mu _1)^2(2 x + \mu _2)^2 (2 x + \mu _3)^2}{(-2 x + L_2)^2} - (2 x + \lambda _1)^2 (2 x +\lambda _2)^2. \end{aligned}$$

The non-negativity of \(\chi _2(x)\) in the interval \(0 \le x \le L_2/2\) is equivalent to

$$\begin{aligned} \begin{aligned} \chi (x; \varvec{p})&:= (x + \mu _1)(x + \mu _2) (x + \mu _3) + (x + \lambda _1) (x +\lambda _2) (x -L_2) \\&= (x+q_1-p_0)(x+q_1-p_1)(x+q_1-p_2)+ x (x+q_1) (x+q_1-q_2) \\&\ge 0 \qquad \hbox {for any} \qquad 0 \le x \le L_2 = q_2 - q_1. \end{aligned} \end{aligned}$$
(123)

It is easy to see that \(G_{20}(n)\) and \(G_{21}(n)\) satisfy the balancedness (62), boundedness (63), and admissibility (69) conditions, where \(r_0 = 0\), \(r_1 = L_2/2\), and \(I_0 = \{2\}\), \(I_1 = J_0 = \emptyset \), \(J_1 = \{4\}\), while genericness (74) for \(G_{2\nu }(n)\) becomes \(b_1-a_0 \not \in \mathbb {Z}_{\ge \nu +1}\) for \(\nu = 0, 1\).

Lemma 6.5

Under the assumption of Lemma 6.3, if \(\varvec{p}\) satisfies the additional condition (123) then there exist positive constants \(0< d_2 < |D(\varvec{p})|\), \(C_2 > 0\) and \(N_2 \in \mathbb {N}\) such that

$$\begin{aligned} |t({\varvec{a}}) \cdot g_2(n)| \le C_2 \cdot d_2^n, \qquad {}^{\forall } n \ge N_2, \, \, {}^{\forall }{\varvec{a}}\in \mathbb {A}(R, \sigma ). \end{aligned}$$

Proof

Condition (123) implies that \(\Phi _2(x)\) is decreasing everywhere in \(0 \le x \le L_2/2\) and is strictly so near \(x = 0\) since \(\chi (0; \varvec{p}) = (q_1-p_0)(q_1-p_1)(q_1-p_2) > 0\) by condition (30). Hence \(\Phi _2(x)\) attains its maximum (only) at the left end \(x = 0\) of the interval, having the value

$$\begin{aligned} (\Phi _2)_{\mathrm {max}}&= \Phi _2(0) = \frac{q_1^{q_1}}{(q_1-p_0)^{q_1-p_0} (q_1-p_1)^{q_1-p_1} (q_1-p_2)^{q_1-p_2}(q_2-q_1)^{q_2-q_1}} \\&= \Phi _1(L_1), \end{aligned}$$

whereas \((\Phi _2)_{\mathrm {max}} = \Phi _1(L_1) < (\Phi _1)_{\mathrm {max}} = |D(\varvec{p})|\) follows from Lemma 6.4. Thus if \(d_2\) is any number such that \((\Phi _2)_{\mathrm {max}}< d_2 < |D(\varvec{p})|\), then Proposition 5.14 shows that

$$\begin{aligned} |G_{2\nu }(n)| \le \dfrac{K_2 \cdot d_2^n}{ \min \{1, \, \mathrm {dist}(b_1-a_0, \, \mathbb {Z}_{\ge \nu +1}) \}} \le \dfrac{K_2 \cdot d_2^n}{\delta (b_1-a_0)}, \quad {}^{\forall } n \ge N_2, \,\, \nu = 0, 1, \end{aligned}$$

for some \(K_2 > 0\) and \(N_2 \in \mathbb {N}\), where \(\delta (z) := \min \{ 1, \, \mathrm {dist}(z, \, \mathbb {N})\}\) for \(z \in \mathbb {C}\).

We have to take care of \(H_2(n)\) when \(L_2\) and n are both odd. Stirling’s formula (90) yields

$$\begin{aligned} H_2(n)= & {} \frac{1}{2 \pi } \cdot \frac{\prod _{i=1}^2 (\sigma _i L_2 + \lambda _i)^{\alpha _i- \frac{1}{2}} }{ \prod _{j=1}^4 (\tau _j L_2 + \mu _j)^{\beta _j - \frac{1}{2} } } \cdot \Phi _2(L_2/2)^n \cdot n^{\gamma _2} \cdot \{1 + O(1/n) \} \end{aligned}$$

as \(n \rightarrow +\infty \), where \(\gamma _2 := \alpha _1+\alpha _2-\beta _1-\beta _2-\beta _3-\beta _4+1\). Since \(\Phi _2(L_2/2)< (\Phi _2)_{\mathrm {max}} < d_2\), after enlarging \(K_2 > 0\) suitably, one has \(|H_2(n)| \le K_2 \cdot d_2^n \le K_2 \cdot d_2^n/ \delta (b_1-a_0)\) for any \(n \ge N_2\).

Then from the relation between \(g_2(n)\) and \(G_2(n) = G_{20}(n)-G_{21}(n) + H_2(n)\) one has

$$\begin{aligned} |t({\varvec{a}}) \cdot g_2(n) | \le 3 \pi K_2 \cdot M_2({\varvec{a}}) \cdot d_2^n \quad \hbox {with} \quad M_2({\varvec{a}}) := \dfrac{|\sin \pi (b_1-a_0)|}{\delta (b_1-a_0)}. \end{aligned}$$

Since \(M_2({\varvec{a}})\) is bounded for \({\varvec{a}}\in \mathbb {A}(R, \sigma )\), the lemma follows (here \(\sigma \) is irrelevant).

\(\square \)

Lemma 6.5 tempts us to ask when condition (123) is satisfied.

Lemma 6.6

For any \(\varvec{p}\in \mathcal {S}(\mathbb {R})\) condition (121) implies condition (123).

Proof

We use the following general fact. Let \(\chi (x)\) be a real cubic polynomial with positive leading coefficient and \(\varDelta \) be its discriminant. If \(\varDelta < 0\) then \(\chi (x)\) has only one real root, so that if \(\chi (c_0) > 0\) for some \(c_0 \in \mathbb {R}\) then \(\chi (x) > 0\) for every \(x \ge c_0\). Even if \(\varDelta = 0\), if \(\chi (c_0) > 0\) then \(\chi (x) \ge 0\) for every \(x \ge c_0\), with possible equality \(\chi (c_1) = 0\), \(c_1 > c_0\), only if \(\chi (x)\) attains a local minimum at \(x = c_1\). Currently, \(\chi (x; \varvec{p})\) has discriminant \(\varDelta (\varvec{p})\) in Formula (33) and \(\chi (0; \varvec{p}) = (q_1-p_0)(q_1-p_1)(q_1-p_2) > 0\) by condition (30). Thus if \(\varDelta (\varvec{p}) \le 0\) then \(\chi (x; \varvec{p}) \ge 0\) for every \(x \ge 0\); this is just the case (a) in condition (121).

We proceed to the case (b) in (121). The derivative of \(\chi (x; \varvec{p})\) in x is given by

$$\begin{aligned} \chi '(x; \varvec{p})= & {} 6 x^2 +4 (2 q_1-q_2) x + (3 q_1-p_1-p_2)(p_1+p_2-q_2) \\&+ \,2 q_1^2 -2 (p_1+p_2) q_1 + p_1 p_2. \end{aligned}$$

Note that \(2 q_1 - q_2\), \(3 q_1-p_1-p_2\), \(p_1+p_2-q_2 > 0\) by condition (30). Since its axis of symmetry is \(x = -(2 q_1-q_2)/3 < 0\), the quadratic function \(\chi '(x; \varvec{p})\) is increasing in \(x \ge 0\), and hence

$$\begin{aligned} \chi '(x; \varvec{p})&\ge \chi '(0; \varvec{p}) = (3 q_1\!-\!p_1\!-\!p_2)(p_1\!+\!p_2-q_2) \!+\! 2 q_1^2 -2 (p_1+p_2) q_1 + p_1 p_2\\&> 2 q_1^2 -2 (p_1+p_2) q_1 + p_1 p_2 \ge 0 \quad \hbox {for any} \quad x \ge 0, \end{aligned}$$

where the last inequality stems from (b) in condition (121). Thus \(\chi (x; \varvec{p}) \ge \chi (0; \varvec{p}) > 0\) for any \(x \ge 0\), so condition (123) is satisfied. \(\square \)
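As an illustration, the integer vector \(\varvec{p}= (5,4,4;6,7) \in \mathcal {S}(\mathbb {Z})\) gives

$$\begin{aligned} \chi (x; \varvec{p}) = (x+1)(x+2)^2 + x (x+6)(x-1) = 2 x^3 + 10 x^2 + 2 x + 4, \end{aligned}$$

whose discriminant equals \(-14{,}512 < 0\), so case (a) of (121) applies; case (b) fails here, since \(2 q_1^2 - 2 (p_1+p_2) q_1 + p_1 p_2 = 72 - 96 + 16 = -8 < 0\). In accordance with (123), \(\chi (x; \varvec{p}) > 0\) for every \(x \ge 0\), with \(\chi (0; \varvec{p}) = 1 \cdot 2 \cdot 2 = 4\) as the constant term confirms.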

The converse to the implication in Lemma 6.6 is also true; accordingly, conditions (121) and (123) are equivalent for any \(\varvec{p}\in \mathcal {S}(\mathbb {R})\). The proof of this fact is omitted, as it is not needed in this article. In the situation of Lemma 6.5 we proceed to the third component.

6.3 Third component

For the third component \(g_3(n)\), taking the shift \(k \mapsto k+(q_2-p_0)n\) in \(\varphi (k; n)\), one has

$$\begin{aligned} g_3(n) = \sum _{k=0}^{\infty } \frac{\prod _{i=1}^3 \varGamma (\sigma _i k+ \lambda _i n+ \alpha _i)}{\prod _{j=1}^3 \varGamma (\tau _j k+ \mu _j n +\beta _j)}, \end{aligned}$$

where \(\sigma _1 = \sigma _2 = \sigma _3 = \tau _1 = \tau _2 = \tau _3 = 1\) and

$$\begin{aligned}&\lambda _1 = q_2, \qquad \lambda _2 = q_2-q_1, \qquad \lambda _3 = 0,\quad \alpha _1 = a_0, \quad \alpha _2 = a_0-b_1+1, \quad \alpha _3 = a_0-b_2+1,\\&\mu _1 = q_2-p_0, \quad \mu _2 = q_2-p_1, \quad \mu _3 = q_2-p_2,\quad \beta _1 = 1, \quad \beta _2 = a_0-a_1+1, \quad \beta _3 = a_0-a_2+1. \end{aligned}$$

It is easy to see that \(g_3(n)\) satisfies balancedness (62), boundedness (63), and admissibility (69) with \(r_0 = 0\) and \(r_1 = +\infty \). Notice that \(I_0 = \{3 \}\) if \(q_1 < q_2\) and \(I_0 = \{2, 3\}\) if \(q_1 = q_2\), while \(I_1 = J_0 = J_1 = \emptyset \). Genericness (74) becomes \(b_2-a_0 \not \in \mathbb {N}\) if \(q_1 < q_2\), and \(b_1-a_0\), \(b_2-a_0 \not \in \mathbb {N}\) if \(q_1 = q_2\). Continuity at infinity (78) is satisfied with \(\varvec{\sigma }^{\varvec{{\scriptstyle \sigma }}} = \varvec{\tau }^{\varvec{{\scriptstyle \tau }}} = 1\); convergence condition (80) is equivalent to \(\mathrm {Re}\, s({\varvec{a}}) \ge \sigma \). Under the assumption of Lemma 6.5, we have the following.

Lemma 6.7

There exist positive constants \(0< d_3 < |D(\varvec{p})|\), \(C_3 > 0\), and \(N_3 \in \mathbb {N}\) such that

$$\begin{aligned} |t({\varvec{a}}) \cdot g_3(n)| \le C_3 \cdot d_3^n, \qquad {}^{\forall } n \ge N_3, \, \, {}^{\forall }{\varvec{a}}\in \mathbb {A}(R, \sigma ). \end{aligned}$$

Proof

In view of \(s(\varvec{p}) = 0\) the characteristic function (84) for \(g_3(n)\) is given by

$$\begin{aligned} \chi _3(x)&= (x+q_2-p_0)(x+q_2-p_1)(x+q_2-p_2) - (x+q_2)(x+q_2-q_1)x \\&= s(\varvec{p}) \, x^2 + \{ 2 s(\varvec{p}) \, q_2 + s_2(\varvec{p}) \} x + (q_2-p_0)(q_2-p_1)(q_2-p_2) \\&= s_2(\varvec{p}) \, x + (q_2-p_0)(q_2-p_1)(q_2-p_2). \end{aligned}$$

Since \(s_2(\varvec{p}) > 0\) and \((q_2-p_0)(q_2-p_1)(q_2-p_2) > 0\) from condition (30), one has \(\chi _3(x) > 0\) and hence the additive phase function \(\phi _3(x)\) satisfies \(\phi _3'(x) > 0\) for any \(x \ge 0\). Thus \(\Phi _3(x) = e^{-\phi _3(x)}\) is strictly decreasing in \(x \ge 0\) and attains its maximum (only) at \(x = 0\), having the value

$$\begin{aligned} (\Phi _3)_{\mathrm {max}} = \Phi _3(0) = \frac{q_2^{q_2} (q_2-q_1)^{q_2-q_1}}{(q_2-p_0)^{q_2-p_0} (q_2-p_1)^{q_2-p_1} (q_2-p_2)^{q_2-p_2}} = \Phi _2(L_2/2), \end{aligned}$$

whereas \((\Phi _3)_{\mathrm {max}} = \Phi _2(L_2/2)< (\Phi _2)_{\mathrm {max}} = \Phi _2(0) = \Phi _1(L_1) < (\Phi _1)_{\mathrm {max}} = |D(\varvec{p})|\). Thus if \(d_3\) is any number with \((\Phi _3)_{\mathrm {max}}< d_3 < |D(\varvec{p})|\) then Proposition 5.14 implies that for any \(n \ge N_3\),

$$\begin{aligned}&|g_3(n)| \le \frac{K_3 \cdot d_3^n}{\delta (b_2-a_0)} \quad \hbox {if} \quad q_1 < q_2;\\&|g_3(n)| \le \frac{K_3 \cdot d_3^n}{\delta (b_1-a_0) \cdot \delta (b_2-a_0)} \quad \hbox {if} \quad q_1 = q_2, \end{aligned}$$

where the function \(\delta (z)\) is defined in the proof of Lemma 6.5. Since \(\sin \pi (b_j-a_0) / \delta (b_j-a_0)\), \(j = 1, 2\), are bounded for \({\varvec{a}}\in \mathbb {A}(R, \sigma )\), the lemma follows immediately. \(\square \)
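For \(\varvec{p}= (5,4,4;6,7)\), for instance, the three maxima line up as

$$\begin{aligned} (\Phi _3)_{\mathrm {max}} = \frac{7^7}{2^2 \cdot 3^3 \cdot 3^3} \approx 282.4< (\Phi _2)_{\mathrm {max}} = \frac{6^6}{1^1 \cdot 2^2 \cdot 2^2 \cdot 1^1} = 2916 < (\Phi _1)_{\mathrm {max}} = \frac{5^5 \cdot 4^4 \cdot 4^4}{1^1 \cdot 2^2 \cdot 2^2 \cdot 2^2 \cdot 3^3 \cdot 3^3} \approx 4389.6, \end{aligned}$$

so any \(d_3\) with \((\Phi _3)_{\mathrm {max}}< d_3 < (\Phi _1)_{\mathrm {max}}\) serves in Lemma 6.7 for this shift vector.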

Theorem 6.1 is now an immediate consequence of Lemmas 6.3, 6.5, 6.6, and 6.7. Theorems 4.3 and 6.1 then imply that if the shift vector \(\varvec{p}\in \mathcal {S}(\mathbb {Z})\) satisfies condition (121) then f(n) in (23a) and g(n) in (26b) are recessive and dominant solutions to the recurrence relation (28) whose coefficients q(n) and r(n) are given by (23b) and (23c). We are thus almost ready to apply the general error estimate (29) to \(X(n) = f(n)\) and \(Y(n) = g(n)\), since a precise asymptotic formula for the ratio \(R(n) = f(n+2)/g(n+2)\) is available from Theorems 4.3 and 6.1.

Theorem 6.1 is established under the balancedness condition \(s(\varvec{p}) = 0\). We wonder whether the discrete Laplace method in Sect. 5 could be extended so as to work even when this condition fails to hold.

7 Casoratian and error estimates

All that remains is to evaluate the initial term \(\omega (0)\) of the Casoratian determinant

$$\begin{aligned} \omega (n) := f(n) \cdot g(n+1) - f(n+1) \cdot g(n), \end{aligned}$$

and to combine the ensuing formula with the asymptotic representation for R(n) to complete the proofs of Theorems 3.2 and 3.3. The first task is done in Sect. 7.1, the second in Sect. 7.2.

7.1 Casoratian

In order to evaluate \(\omega (0)\), following [10, Formulas (7), (8), and (10)], we define

Table 3 Five parameter involutions (including identity)
$$\begin{aligned} y_i^{(0)}({\varvec{a}}; z)&:= z^{1-b_i} {}_3f_2(\sigma _i^{(0)}({\varvec{a}}); z), \quad i = 0, 1, 2, \quad b_0 := 1, \\ y_0^{(\infty )}({\varvec{a}}; z)&:= e^{\mathrm {i} \pi s({\varvec{\scriptstyle a}})} z^{-a_0} {}_3f_2(\sigma _0^{(\infty )}({\varvec{a}}); 1/z), \end{aligned}$$

where \(\sigma _i^{(\nu )}\) are involutions on the parameters \({\varvec{a}}\) as in Table 3, and put \(y_i^{(\nu )}({\varvec{a}}) := y_i^{(\nu )}({\varvec{a}}; 1)\). Note that \(y_0^{(0)}({\varvec{a}}) = {}_3f_2({\varvec{a}})\) and \(y_0^{(\infty )}({\varvec{a}}) = e^{\mathrm {i} \pi s({\varvec{\scriptstyle a}})} {}_3g_2({\varvec{a}})\). Moreover let \(\varvec{1} := (1,1,1;1,1)\).

Lemma 7.1

For any \({\varvec{a}}\in \mathbb {C}^5\) with \(\mathrm {Re}\, s({\varvec{a}}) > 1\) one has

$$\begin{aligned} \begin{aligned} W({\varvec{a}})&:= y_0^{(0)}({\varvec{a}}) \cdot y_0^{(\infty )}({\varvec{a}}+\varvec{1}) - y_0^{(0)}({\varvec{a}}+\varvec{1}) \cdot y_0^{(\infty )}({\varvec{a}}) \\&= - e^{\mathrm {i} \pi s({\varvec{\scriptstyle a}})} \dfrac{\varGamma (a_0) \varGamma (a_1) \varGamma (a_2) \varGamma (a_0-b_1+1) \varGamma (a_0-b_2+1) \varGamma (s({\varvec{a}})-1)}{\varGamma (b_1-a_1) \varGamma (b_1-a_2) \varGamma (b_2-a_1) \varGamma (b_2-a_2)}. \end{aligned} \end{aligned}$$
(124)

Proof

A careful inspection of Bailey [4, Sect. 10.3, Formulas (3) and (5)] shows that

$$\begin{aligned} w({\varvec{a}}; z)&:= y_0^{(0)}({\varvec{a}}; z) \cdot y_1^{(0)}({\varvec{a}}+\varvec{1}; z) - y_0^{(0)}({\varvec{a}}+\varvec{1}; z) \cdot y_1^{(0)}({\varvec{a}}; z) \\&= \dfrac{\varGamma (a_0) \varGamma (a_1) \varGamma (a_2) \varGamma (a_0-b_1+1) \varGamma (a_1-b_1+1) \varGamma (a_2-b_1+1)}{\varGamma (b_1) \varGamma (1-b_1) \varGamma (b_2-a_0)\varGamma (b_2-a_1) \varGamma (b_2-a_2)} \\&\phantom {=} \times z^{1-b_1-b_2}(1-z)^{s({\varvec{\scriptstyle a}})-1} \cdot y_2^{(0)}({\varvec{a}}^*; z), \qquad |z| < 1, \end{aligned}$$

where \({\varvec{a}}^*\) is defined in Table 3, while Okubo, Takano, and Yoshida [21, Lemma 2] show that

$$\begin{aligned} \lim _{z \uparrow 1} (1-z)^{s({\varvec{\scriptstyle a}})-1} \cdot y_2^{(0)}({\varvec{a}}^*; z) = \varGamma (s({\varvec{a}})-1), \qquad \mathrm {Re}\, s({\varvec{a}}) > 1. \end{aligned}$$

It follows from these facts that \(w({\varvec{a}}) := w({\varvec{a}}; 1)\) admits a representation

$$\begin{aligned} w({\varvec{a}}) = \dfrac{\varGamma (a_0) \varGamma (a_1) \varGamma (a_2) \varGamma (a_0-b_1+1) \varGamma (a_1-b_1+1) \varGamma (a_2-b_1+1) \varGamma (s({\varvec{a}})-1)}{\varGamma (b_1) \varGamma (1-b_1) \varGamma (b_2-a_0)\varGamma (b_2-a_1) \varGamma (b_2-a_2)}. \end{aligned}$$

By the connection formula \(y_0^{(\infty )}({\varvec{a}}) = C_0({\varvec{a}}) \, y_0^{(0)}({\varvec{a}}) + C_1({\varvec{a}}) \, y_1^{(0)}({\varvec{a}})\) in [10, Formula (16)], where

$$\begin{aligned} C_0({\varvec{a}})= & {} \dfrac{e^{\mathrm {i} \pi s({\varvec{\scriptstyle a}})} \cdot \sin \pi a_1 \cdot \sin \pi a_2}{\sin \pi b_1 \cdot \sin \pi (b_2-a_0)}, \\ C_1({\varvec{a}})= & {} -\dfrac{e^{\mathrm {i} \pi s({\varvec{\scriptstyle a}})} \cdot \sin \pi (b_1-a_1) \cdot \sin \pi (b_1-a_2)}{\sin \pi b_1 \cdot \sin \pi (b_2-a_0)}, \end{aligned}$$

and the periodicity \(C_i({\varvec{a}}+\varvec{1}) = C_i({\varvec{a}})\), \(i = 0, 1\), we have \(W({\varvec{a}}) = C_1({\varvec{a}}) \, w({\varvec{a}})\). This together with the reflection formula for the gamma function yields Formula (124). \(\square \)

Theorem 7.2

The initial value of the Casoratian \(\omega (n)\) is given by

$$\begin{aligned} \omega (0) = \dfrac{\pi ^2 \cdot \rho ({\varvec{a}}; \varvec{k}) \cdot \varGamma (a_0) \varGamma (a_1) \varGamma (a_2) \varGamma ( s({\varvec{a}}) ) }{ t({\varvec{a}}) \prod _{i=0}^2 \prod _{j=1}^2 \varGamma (b_j-a_i +(l_j-k_i)_+)}, \end{aligned}$$
(125)

where \(\rho ({\varvec{a}}; \varvec{k}) \in \mathbb {Q}[{\varvec{a}}]\) is the polynomial in (13) and \(t({\varvec{a}}) := \sin \pi (b_1-a_0) \cdot \sin \pi (b_2-a_0)\).

Proof

From definitions (23a) and (26b), we find that

$$\begin{aligned} \omega (0)&= f_0(0) \cdot g_1(0) - f_1(0) \cdot g_0(0) = {}_3f_2({\varvec{a}}) \cdot {}_3g_2({\varvec{a}}+ \varvec{k}) - {}_3f_2({\varvec{a}}+ \varvec{k}) \cdot {}_3g_2({\varvec{a}}) \\&= y_0^{(0)}({\varvec{a}}) \, e^{-\mathrm {i} \pi s({\varvec{\scriptstyle a}}+\varvec{{\scriptstyle k}})} y_0^{(\infty )}({\varvec{a}}+\varvec{k}) - y_0^{(0)}({\varvec{a}}+\varvec{k}) \, e^{-\mathrm {i} \pi s({\varvec{\scriptstyle a}})} y_0^{(\infty )}({\varvec{a}}) \\&= e^{-\mathrm {i} \pi s({\varvec{\scriptstyle a}})} \{ y_0^{(0)}({\varvec{a}}) \, y_0^{(\infty )}({\varvec{a}}+\varvec{k}) - y_0^{(0)}({\varvec{a}}+\varvec{k}) \, y_0^{(\infty )}({\varvec{a}}) \} \\&= e^{-\mathrm {i} \pi s({\varvec{\scriptstyle a}})} r({\varvec{a}}; \varvec{k}) \{ y_0^{(0)}({\varvec{a}}) \, y_0^{(\infty )}({\varvec{a}}+\varvec{1}) - y_0^{(0)}({\varvec{a}}+\varvec{1}) \, y_0^{(\infty )}({\varvec{a}}) \}\\&= e^{-\mathrm {i} \pi s({\varvec{\scriptstyle a}})} r({\varvec{a}}; \varvec{k}) \, W({\varvec{a}}), \end{aligned}$$

where the fourth equality follows from \(s(\varvec{k}) = 0\) and the fifth from the three-term relation

$$\begin{aligned} y_0^{(\nu )}({\varvec{a}}+\varvec{k}) = r_1({\varvec{a}}; \varvec{k}) \, y_0^{(\nu )}({\varvec{a}}) + r({\varvec{a}}; \varvec{k}) \, y_0^{(\nu )}({\varvec{a}}+\varvec{1}), \quad \nu = 0, \infty , \end{aligned}$$

where \(r_1({\varvec{a}}; \varvec{k})\) and \(r({\varvec{a}}; \varvec{k})\) are the (1, 1) and (1, 2) entries of the connection matrix \(A({\varvec{a}}; \varvec{k})\) as in [10, Formulas (33) and (34)]. Using Formula (124) one has

$$\begin{aligned} \omega (0)&= - r({\varvec{a}}; \varvec{k}) \, \dfrac{\varGamma (a_0) \varGamma (a_1) \varGamma (a_2) \varGamma (a_0-b_1+1) \varGamma (a_0-b_2+1) \varGamma (s({\varvec{a}})-1)}{\varGamma (b_1-a_1) \varGamma (b_1-a_2) \varGamma (b_2-a_1) \varGamma (b_2-a_2) } \\&= - \dfrac{ \pi ^2 \cdot r({\varvec{a}}; \varvec{k}) \cdot \varGamma (a_0) \varGamma (a_1) \varGamma (a_2) \varGamma (s({\varvec{a}})-1) }{ t({\varvec{a}}) \prod _{i=0}^2 \prod _{j=1}^2 \varGamma (b_j-a_i) } \\&= \dfrac{ \pi ^2 \cdot \rho ({\varvec{a}}; \varvec{k}) \cdot \varGamma (a_0) \varGamma (a_1) \varGamma (a_2) \cdot \{ s({\varvec{a}})-1 \} \varGamma (s({\varvec{a}})-1) }{ t({\varvec{a}}) \prod _{i=0}^2 \prod _{j=1}^2 (b_j - a_i; \, (l_j-k_i)_+) \cdot \prod _{i=0}^2 \prod _{j=1}^2 \varGamma (b_j-a_i) } \\&= \hbox {RHS of }(125), \end{aligned}$$

where the second equality follows from the reflection formula for the gamma function, the third from (13) and the final one from the recursion formula for the gamma function. \(\square \)

7.2 Error estimates

We are now in a position to establish our main results in Sect. 3.2 by means of the general estimate (29) upon putting \(X(n) = f(n)\) and \(Y(n) = g(n)\). In this subsection, unless otherwise mentioned explicitly, Landau’s symbols \(O(\, \cdot \,)\) are uniform in any compact subset of

$$\begin{aligned} \mathbb {A}:= \{ {\varvec{a}}\in \mathbb {C}^5 \,:\, \mathrm {Re}\, s({\varvec{a}}) > 0\}. \end{aligned}$$

Proof of Theorem 3.2

In the straight case in Definition 2.2, the sequences in (23a) and (26b) are given by \(f(n) = {}_3f_2({\varvec{a}}+ n \varvec{k})\) and \(g(n) = {}_3g_2({\varvec{a}}+ n \varvec{k})\), respectively. Under the assumption of Theorem 3.2, we can use Theorems 4.3 and 6.1 with \(\varvec{p}\) replaced by \(\varvec{k}\) to get

$$\begin{aligned} f(n)&= {}_3f_2({\varvec{a}}+ n \varvec{k}) = \varGamma (s({\varvec{a}})) \cdot s_2(\varvec{k})^{-s({\varvec{\scriptstyle a}})} \cdot n^{-2 s({\varvec{\scriptstyle a}})} \cdot \{1 + O(1/n) \}, \\ g(n)&= {}_3g_2({\varvec{a}}+ n \varvec{k}) = \dfrac{B({\varvec{a}}; \varvec{k})}{t({\varvec{a}})} \cdot D(\varvec{k})^n \cdot n^{-s({\varvec{\scriptstyle a}})-\frac{1}{2} } \cdot \left\{ 1 + O(n^{-\frac{1}{2} }) \right\} , \end{aligned}$$

where \(D(\varvec{k})\), \(t({\varvec{a}})\), and \(B({\varvec{a}}; \varvec{k})\) are given by (32) and (122) with \(\varvec{p}\) replaced by \(\varvec{k}\), and hence

$$\begin{aligned} R(n)&= \frac{f(n+2)}{g(n+2)}\\&= \dfrac{ t({\varvec{a}}) \cdot \varGamma (s({\varvec{a}})) \cdot s_2(\varvec{k})^{-s({\varvec{\scriptstyle a}})}}{ B({\varvec{a}}; \varvec{k}) \cdot D(\varvec{k})^2 } \cdot D(\varvec{k})^{-n} \cdot n^{-s({\varvec{\scriptstyle a}})+ \frac{1}{2} } \cdot \left\{ 1 + O(n^{-\frac{1}{2}}) \right\} . \end{aligned}$$

Combining this formula with (125) in Theorem 7.2, we have

$$\begin{aligned} \omega (0) \, R(n) = \rho ({\varvec{a}}; \varvec{k}) \cdot e_{\mathrm {s}}({\varvec{a}}; \varvec{k}) \cdot \gamma ({\varvec{a}}; \varvec{k}) \cdot D(\varvec{k})^{-n} \cdot n^{-s({\varvec{\scriptstyle a}}) + \frac{1}{2} } \cdot \left\{ 1 + O(n^{-\frac{1}{2} }) \right\} , \end{aligned}$$

where \(e_{\mathrm {s}}({\varvec{a}}; \varvec{k})\) and \(\gamma ({\varvec{a}}; \varvec{k})\) are defined in (37) and (34). To cope with the error term in Formula (29), we also need to examine how \(R(n) \cdot Y(0)/ X(0)\) depends on \({\varvec{a}}\in \mathbb {A}\). Observe that

$$\begin{aligned} \frac{R(n) \cdot Y(0)}{X(0)}&= \frac{R(n) \cdot {}_3g_2({\varvec{a}})}{{}_3f_2({\varvec{a}})} \!= \!\psi _1({\varvec{a}}) \cdot \psi _2({\varvec{a}}) \cdot D(\varvec{k})^{-n} \cdot n^{-s({\varvec{\scriptstyle a}}) \!+\! \frac{1}{2}} \cdot \left\{ 1 + O(n^{-\frac{1}{2} }) \right\} , \\&\quad \hbox {with} \quad \psi _1({\varvec{a}}) := \frac{\varGamma (s({\varvec{a}})) \cdot s_2(\varvec{k})^{-s({\varvec{\scriptstyle a}})}}{B({\varvec{a}}; \varvec{k}) \cdot D(\varvec{k})^2}, \quad \psi _2({\varvec{a}}) := \frac{t({\varvec{a}}) \cdot {}_3g_2({\varvec{a}})}{{}_3f_2({\varvec{a}})}. \end{aligned}$$

It is obvious that \(\psi _1({\varvec{a}})\) is holomorphic in \(\mathbb {A}\). It is also easy to see that \(\psi _2({\varvec{a}})\) is holomorphic in \(\mathbb {A}_0 := \{{\varvec{a}}\in \mathbb {A}: {}_3f_2({\varvec{a}}) \ne 0 \}\). Indeed \({}_3g_2({\varvec{a}})\) has a pole when \(a_0-b_1+1 \in \mathbb {Z}_{\le 0}\) or \(a_0-b_2+1 \in \mathbb {Z}_{\le 0}\) but the pole is canceled by a zero of \(t({\varvec{a}}) = \sin \pi (b_1-a_0) \cdot \sin \pi (b_2-a_0)\); similarly \({}_3g_2({\varvec{a}})\) has a pole when \(a_0 \in \mathbb {Z}_{\le 0}\) but it is canceled by a pole of \({}_3f_2({\varvec{a}})\). Now estimate (29) leads to asymptotic Formula (36), in which Landau’s symbol is uniform in any compact subset of \(\mathbb {A}_0\). \(\square \)

Proof of Theorem 3.3

In the twisted case in Definition 2.2, if \(n = 3 m + i\), \(m \in \mathbb {Z}_{\ge 0}\), \(i = 0, 1, 2\), then the sequences f(n) in (23a) and g(n) in (26b) are given by

$$\begin{aligned} f(3 m + i) = {}_3f_2({\varvec{a}}+ \varvec{j}_i + m \varvec{p}), \qquad g(3 m + i) = {}_3g_2({\varvec{a}}+ \varvec{j}_i + m \varvec{p}), \end{aligned}$$

where \(\varvec{j}_0 = \varvec{0}\), \(\varvec{j}_1 = \varvec{k}\), \(\varvec{j}_2 = \varvec{l}\), and \(\varvec{p}\) is the shift vector in Formula (20).

Observe that \(\varvec{p}\) belongs to \(\mathcal {S}(\mathbb {Z})\) and satisfies condition (121) if and only if the seed vector \(\varvec{k}\in \mathbb {Z}^5\) fulfills condition (39). Indeed, since \(p_0 = p_1 = p_2 = l_1+l_2\), \(q_1 = 3 l_1\), and \(q_2 = 3 l_2\), the inequalities in (30) become \(l_1 + l_2< 3 l_1 \le 3 l_2 < 2(l_1 + l_2)\), which is equivalent to \(l_1 \le l_2 < 2 l_1\). Case (a) in condition (121) now reads \(\varDelta (\varvec{p}) = -27(l_2^2-4 l_1 l_2 + l_1^2) (l_2^2+2 l_1 l_2 -2l_1^2) (2 l_2^2 -2l_1 l_2-l_1^2) \le 0\), which together with \(l_1 \le l_2 < 2 l_1\) yields \(l_1 \le l_2 \le \tau l_1\) in condition (39), where \(\tau = (1+\sqrt{3})/2\). On the other hand, case (b) in condition (121) becomes \(l_2^2 - 10 l_1 l_2 + 7 l_1^2 \ge 0\), that is, \(l_2 \le (5-3 \sqrt{2}) l_1\) or \(l_2 \ge (5 + 3\sqrt{2}) l_1\), neither of which is possible when \(l_1 \le l_2 < 2 l_1\).
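The reduction just performed is easy to confirm numerically. In the following sketch (ours), \(\varDelta (\varvec{p})\) is again encoded as the discriminant of \(\chi (x; \varvec{p})\) from (123), and the sign test is scanned over \(r = l_2/l_1 \in [1, 2)\) against the threshold \(\tau = (1+\sqrt{3})/2\); the quadratic from case (b) is checked to stay negative throughout.

```python
import numpy as np

# Scan the twisted family p = (l1+l2, l1+l2, l1+l2; 3*l1, 3*l2) with l1 = 1,
# l2 = r in [1, 2): condition (121a), encoded as "discriminant of chi <= 0",
# should hold exactly for r <= tau = (1 + sqrt(3))/2, while the quadratic
# r**2 - 10*r + 7 from condition (121b) should be negative on all of [1, 2).

tau = (1.0 + np.sqrt(3.0)) / 2.0
r = np.linspace(1.0, 2.0, 4001, endpoint=False)
p0 = p1 = p2 = 1.0 + r
q1, q2 = 3.0, 3.0 * r

A, B, C = q1 - p0, q1 - p1, q1 - p2    # chi(x) = (x+A)(x+B)(x+C) + x(x+q1)(x+q1-q2)
a = 2.0 * np.ones_like(r)
b = A + B + C + 2.0 * q1 - q2
c = A*B + A*C + B*C + q1 * (q1 - q2)
d = A * B * C
disc = 18*a*b*c*d - 4*b**3*d + b**2*c**2 - 4*a*c**3 - 27*a**2*d**2

print("sign mismatches against the tau-threshold:",
      int(np.sum((disc <= 0.0) != (r <= tau))))          # expect 0
print("does (121b) ever hold on [1, 2)?",
      bool(np.any(r**2 - 10*r + 7 >= 0)))                # expect False
```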

Thus under the assumption of Theorem 3.3 one can apply Theorems 4.3 and 6.1 to the shift vector \(\varvec{p}\) in (20) with \({\varvec{a}}\) replaced by \({\varvec{a}}+\varvec{j}_i\) to obtain

$$\begin{aligned} f(3 m + i)&= {}_3f_2({\varvec{a}}+ \varvec{j}_i + m \varvec{p}) = \varGamma (s({\varvec{a}})) \cdot s_2(\varvec{p})^{-s({\varvec{\scriptstyle a}})} \cdot m^{-2 s({\varvec{\scriptstyle a}})} \cdot \{ 1 + O(1/m)\}, \\ g(3 m + i)&= {}_3g_2({\varvec{a}}+ \varvec{j}_i + m \varvec{p}) \\&= \dfrac{B({\varvec{a}}+ \varvec{j}_i ; \varvec{p})}{t({\varvec{a}}+ \varvec{j}_i)} \cdot D(\varvec{p})^m \cdot m^{-s({\varvec{\scriptstyle a}})-\frac{1}{2}} \cdot \left\{ 1 + O(m^{-\frac{1}{2} }) \right\} , \end{aligned}$$

where \(s(\varvec{j}_i) = 0\) is also used. Substituting the settings (20) and (21) into definitions (32) and (122) and taking \(s(\varvec{k}) = 0\) into account, one has \(D(\varvec{p}) = E(l_1, l_2)^3\) and

$$\begin{aligned} t({\varvec{a}}+ \varvec{j}_i)= & {} (-1)^{i(l_1+l_2)} \cdot t({\varvec{a}}), \\ B({\varvec{a}}+ \varvec{j}_i; \varvec{p})= & {} (-1)^{i (l_1+l_2)} \cdot B({\varvec{a}}; \varvec{p}) \cdot E(l_1, l_2)^i, \qquad i = 0, 1, 2, \end{aligned}$$

where \(E(l_1, l_2)\) is defined in (41). These formulas and \(s_2(\varvec{p}) = 3 (l_1^2 -l_1l_2+l_2^2)\) lead to

$$\begin{aligned} f(n)&= 3^{s({\varvec{\scriptstyle a}})} \cdot \varGamma (s({\varvec{a}})) \cdot (l_1^2 -l_1 l_2 + l_2^2)^{-s({\varvec{\scriptstyle a}})}\cdot n^{-2 s({\varvec{\scriptstyle a}})} \cdot \{1 + O(1/n)\}, \\ g(n)&= 3^{s({\varvec{\scriptstyle a}})+ \frac{1}{2} } \cdot \frac{B({\varvec{a}}; \varvec{p})}{t({\varvec{a}})} \cdot E(l_1, l_2)^n \cdot n^{-s({\varvec{\scriptstyle a}})- \frac{1}{2} } \cdot \left\{ 1 + O(n^{-\frac{1}{2}}) \right\} , \end{aligned}$$

so the ratio \(R(n) = f(n+2)/g(n+2)\) is estimated as

$$\begin{aligned} R(n)&= \dfrac{ t({\varvec{a}}) \cdot \varGamma (s({\varvec{a}})) \cdot (l_1^2-l_1l_2+l_2^2)^{-s({\varvec{\scriptstyle a}})} }{ 3^{\frac{1}{2} } \cdot B({\varvec{a}}; \varvec{p})\cdot E(l_1, l_2)^2}\\&\quad \cdot E(l_1, l_2)^{-n} \cdot n^{-s({\varvec{\scriptstyle a}})+ \frac{1}{2} } \cdot \left\{ 1 + O(n^{-\frac{1}{2} }) \right\} . \end{aligned}$$

Substituting \(\varvec{p}= (l_1+l_2, l_1+l_2, l_1+l_2; 3l_1, 3l_2)\) into definition (122) yields

$$\begin{aligned} B({\varvec{a}}; \varvec{p}) = \dfrac{\pi ^{ \frac{1}{2} } (l_1+l_2)^{a_0+a_1+a_2- \frac{3}{2} } \cdot 3^{s({\varvec{\scriptstyle a}})-1} \cdot (l_1^2-l_1l_2+l_2^2)^{s({\varvec{\scriptstyle a}})-1}}{2^{ \frac{3}{2} } \cdot (2l_1-l_2)^{2 b_1-b_2+s({\varvec{\scriptstyle a}})- \frac{3}{2} } (2l_2-l_1)^{2 b_2-b_1+s({\varvec{\scriptstyle a}})-\frac{3}{2} }}, \end{aligned}$$

which is put together with Formula (125) in Theorem 7.2 to give

$$\begin{aligned} \omega (0) \, R(n) = \rho ({\varvec{a}}; \varvec{k}) \cdot e_{\mathrm {t}}({\varvec{a}}; \varvec{k}) \cdot \gamma ({\varvec{a}}; \varvec{k}) \cdot E(l_1, l_2)^{-n} \cdot n^{-s({\varvec{\scriptstyle a}}) + \frac{1}{2} } \cdot \left\{ 1 + O(n^{-\frac{1}{2} }) \right\} , \end{aligned}$$

where \(e_{\mathrm {t}}({\varvec{a}}; \varvec{k})\) and \(\gamma ({\varvec{a}}; \varvec{k})\) are given in (42) and (34). The treatment of \(R(n) \cdot Y(0)/X(0)\) is similar to the one in the straight case and the estimate (29) leads to asymptotic Formula (40), in which Landau’s symbol is uniform in any compact subset of \(\mathbb {A}_0\). \(\square \)

8 Back to original series and specializations

Theorems 3.2 and 3.3 are stated in terms of the renormalized series \({}_3f_2({\varvec{a}})\). It is interesting to reformulate them in terms of the original series \({}_3F_2(1)\). Multiplying equations (36) and (40) by

$$\begin{aligned} \frac{\varGamma (b_1+l_1) \varGamma (b_2+l_2)}{\varGamma (a_0+k_0) \varGamma (a_1+k_1) \varGamma (a_2+k_2)} \cdot \frac{\varGamma (a_0) \varGamma (a_1) \varGamma (a_2)}{\varGamma (b_1) \varGamma (b_2)} = \frac{(b_1; \, l_1)(b_2; \, l_2)}{(a_0; \, k_0) (a_1; \, k_1) (a_2; \, k_2)} \end{aligned}$$

and using relation (9) between \({}_3f_2({\varvec{a}})\) and \({}_3F_2({\varvec{a}})\), we find that

(126a)
(126b)

as \(n \rightarrow + \infty \), where the quantities marked with an asterisk are defined by

$$\begin{aligned} q^*(0)&:= u({\varvec{a}}) \textstyle \prod _{i=0}^2 (a_i; \, k_i), \quad&q^*(n)&:= q(n), \quad n \ge 1, \quad&\\ r^*(0)&:= (b_1; \, l_1)(b_2; \, l_2), \quad&r^*(1)&:= v({\varvec{a}}) \textstyle \prod _{i=0}^2 (a_i; \, k_i), \quad&r^*(n)&:= r(n), \quad n \ge 2, \end{aligned}$$
$$\begin{aligned} c_{\iota }^*({\varvec{a}}; \varvec{k}) := \rho ({\varvec{a}}; \varvec{k}) \cdot e_{\iota }({\varvec{a}}; \varvec{k}) \cdot \gamma ^*({\varvec{a}}; \varvec{k}), \qquad \iota = \mathrm {s}, \mathrm {t}, \end{aligned}$$
(127)

with \(\rho ({\varvec{a}}; \varvec{k})\) and \(e_{\iota }({\varvec{a}}; \varvec{k})\) unaltered while

$$\begin{aligned} \gamma ^*({\varvec{a}}; \varvec{k}) := \frac{\varGamma (b_1+l_1) \varGamma (b_2\!+\!l_2) \varGamma (b_1) \varGamma (b_2) \varGamma ^2(s({\varvec{a}})) }{\varGamma (a_0\!+\!k_0) \varGamma (a_1+k_1) \varGamma (a_2\!+\!k_2) \prod _{i=0}^2 \prod _{j=1}^2 \varGamma (b_j-a_i+(l_j-k_i)_+)}. \end{aligned}$$

It follows from (9) that \(\mathbb {A}_0^* := \{ {\varvec{a}}\in \mathbb {A}\,:\, b_1, \, b_2 \not \in \mathbb {Z}_{\le 0}, \,\, {}_3 F_2 ({\varvec{a}}) \ne 0 \} \subset \mathbb {A}_0\), where \(\mathbb {A}\) and \(\mathbb {A}_0\) are defined in Sect. 7.2, so Landau’s symbols in (126) are uniform in any compact subset of \(\mathbb {A}_0^*\).

Take an index \(\lambda \in \{0, 1, 2\}\) such that \(k_{\lambda } > 0\) and put \(\{\lambda , \mu , \nu \} = \{0, 1, 2\}\). For any non-zero vector \(\varvec{k}\in \mathbb {Z}_{\ge 0}^5\) with \(s(\varvec{k}) =0\) such an index \(\lambda \) always exists since \(k_0+k_1+k_2 = l_1+l_2 > 0\). In Formulas (126), take the limit \(a_{\lambda } \rightarrow 0\) and make the substitutions \(a_i \mapsto a_i-k_i\), \(b_j \mapsto b_j-l_j\) for \(i = \mu , \nu \) and \(j = 1, 2\). This procedure is referred to as the \(\lambda \)-th specialization. If this is well defined then \({}_3F_2({\varvec{a}}) \rightarrow 1\) as \(a_{\lambda } \rightarrow 0\), so Formulas (126a) and (126b) lead to

(128a)
(128b)

where \({\hat{q}}(n)\) and \({\hat{r}}(n)\) are derived from \(q^*(n)\) and \(r^*(n)\), while \({\hat{c}}_{\iota }({\varvec{a}}; \varvec{k}) := {\hat{\rho }}({\varvec{a}}; \varvec{k}) \cdot {\hat{e}}_{\iota }({\varvec{a}}; \varvec{k}) \cdot {\hat{\gamma }}({\varvec{a}}; \varvec{k})\), \(\iota = \mathrm {s}, \mathrm {t}\), are obtained from (127) through the specialization; in particular one has

$$\begin{aligned} {\hat{\gamma }}({\varvec{a}}; \varvec{k}) := \frac{\varGamma (b_1) \varGamma (b_2) \varGamma ^2({\hat{s}})}{\varGamma (k_{\lambda }) \varGamma (a_{\mu }) \varGamma (a_{\nu }) \displaystyle \prod _{j=1,2} (b_j-l_j; \, (l_j-k_{\lambda })_+) \cdot \prod _{i = \mu , \nu } \prod _{j=1,2} \varGamma (b_j-a_i + (k_i-l_j)_+) }, \end{aligned}$$

with \({\hat{s}} := b_1+b_2-a_{\mu }-a_{\nu }-k_{\lambda }\). Landau’s symbols in (128) are uniform in compact subsets of

$$\begin{aligned} {\hat{\mathbb {A}}} := \{\, (a_{\mu }, a_{\nu }; b_1, b_2) \in \mathbb {C}^4 \,:\, \mathrm {Re}\, {\hat{s}} > 0, \, \, b_1, b_2 \not \in \mathbb {Z}_{\le 0} \, \}. \end{aligned}$$

The specialization is indeed well defined. It follows from [10, Proposition 4.9] and Lemma 2.1 that for any \(\varvec{c}\in \mathbb {Q}^5\), the restriction \(\rho ({\varvec{a}}+\varvec{c}; \varvec{k}) \big |_{a_0 = a_1 = a_2 = 0}\) is a non-zero polynomial in \(\mathbb {Q}[b_1, b_2]\) and hence \(\rho ({\varvec{a}}+\varvec{c}; \varvec{k})|_{a_{\lambda } = 0}\) is a non-zero polynomial in \(\mathbb {Q}[a_{\mu }, a_{\nu }, b_1, b_2]\). This is also the case for \(\sigma (\varvec{k})\) and \(\varvec{l}\) in place of \(\varvec{k}\) in Sect. 2.2. Thus Formula (19a) implies that the specialization for \(q^*(0)\),

$$\begin{aligned} {\hat{q}}(0) := \lim _{a_{\lambda } \rightarrow 0} u({\varvec{a}}) \prod _{i=0}^2 (a_i; \, k_i) \quad \hbox {followed by} \quad a_i \mapsto a_i - k_i, \,\, b_j \mapsto b_j - l_j, \quad i = \mu , \nu , \,\, j = 1, 2, \end{aligned}$$

is well defined and the ensuing \({\hat{q}}(0)\) is a non-trivial rational function in \(\mathbb {Q}(a_{\mu }, a_{\nu }, b_1, b_2)\). In a similar manner, Formula (19b) tells us that the specialization for \(r^*(1)\), that is, \({\hat{r}}(1) \in \mathbb {Q}(a_{\mu }, a_{\nu }, b_1, b_2)\) is well defined and non-trivial. The specialization for \(q^*(n)\) with \(n \ge 1\) is also well defined, since \(q^*(n) = q(n)\) is of the form \({}^{\sigma ^i} \! u({\varvec{a}}+ \varvec{c})\), where \(i \in \{0,1,2\}\) and \(\varvec{c}\) is a vector in \(\mathbb {Z}_{\ge 0}^5\) whose \(\lambda \)-th upper component, say \(c_{\lambda }\), is positive, in which case one can take \(\lim _{a_{\lambda } \rightarrow 0} {}^{\sigma ^i} \! u({\varvec{a}}+ \varvec{c})\) without trouble, because the critical factorial \((a_{\lambda }; \, k_{\lambda }) \rightarrow 0\) in the denominator of (19a) is now replaced by a safe one \((a_{\lambda }+ c_{\lambda }; \, k_{\lambda }) \rightarrow (c_{\lambda }; \, k_{\lambda }) \ne 0\). The resulting \({\hat{q}}(n)\) is non-trivial in \(\mathbb {Q}(a_{\mu }, a_{\nu }, b_1, b_2)\). A similar argument can be made for \({\hat{r}}(n)\) with \(n \ge 2\). Thus the procedure of specialization is well defined over the rational function field \(\mathbb {Q}(a_{\mu }, a_{\nu }, b_1, b_2)\).

9 Some examples

To illustrate Theorems 3.2 and 3.3, we present a few of the simplest examples.

Example 9.1

The simplest example of twisted type is given by

$$\begin{aligned} \varvec{k}= \begin{pmatrix} 1, &{} 1, &{} 0 \\ &{} 1, &{} 1 \end{pmatrix}, \qquad \varvec{l}= \begin{pmatrix} 1, &{} 2, &{} 1 \\ &{} 2, &{} 2 \end{pmatrix}, \qquad \varvec{p}= \begin{pmatrix} 2, &{} 2, &{} 2 \\ &{} 3, &{} 3 \end{pmatrix}, \end{aligned}$$

together with \(\sigma (a_0, a_1, a_2; b_1, b_2) = (a_2, a_0, a_1; b_1, b_2)\). The recipe described in Sect. 2.1 readily yields \(\rho ({\varvec{a}}; \varvec{l}) = b_1 b_2 - a_0 a_2\) and \(\rho ({\varvec{a}}; \varvec{k}) = \rho ({\varvec{a}}+\varvec{k}; \sigma (\varvec{k})) = 1\), so Formula (19) gives

$$\begin{aligned} u({\varvec{a}}) = \dfrac{ b_1 b_2 - a_0 a_2}{a_0 a_1}, \qquad v({\varvec{a}}) = -\dfrac{(b_1-a_0)(b_2-a_0)}{a_0 a_1}. \end{aligned}$$

Thus the partial denominators and numerators of the continued fraction in Theorem 3.3 are given as in Table 4, so the continued fraction for \({}_3 f_2({\varvec{a}}+\varvec{k})/{}_3f_2({\varvec{a}})\) is well defined when

$$\begin{aligned} a_0, \, a_1, \, a_2 \not \in \mathbb {Z}_{\le 0}; \qquad b_j -a_i \not \in \mathbb {Z}_{\le 0}, \quad b_j-a_2 \not \in \mathbb {Z}_{\le -1}, \quad i = 0, 1, \,\, j = 1, 2. \end{aligned}$$

In the error estimate (40), we have \(E(l_1, l_2) = E(1,1) = 4\) and

$$\begin{aligned} c_{\mathrm {t}}({\varvec{a}}; \varvec{k}) = \dfrac{\pi ^{\frac{3}{2}} \cdot \varGamma (a_0) \varGamma (a_1) \varGamma (a_2) \varGamma ^2(s({\varvec{a}}))}{2^{a_0+a_1+a_2+1} \cdot 3^{s({\varvec{\scriptstyle a}}) - \frac{1}{2}} \prod _{j=1}^2 \prod _{i=0}^2 \varGamma (b_j-a_i + \delta _{i 2})}. \end{aligned}$$
Table 4 Partial denominators and numerators in Example 9.1 with \(r_0(0) := 1\)

Passing to the continued fraction for \({}_3 F_2({\varvec{a}}+\varvec{k})/{}_3F_2({\varvec{a}})\), we have \(q_0^*(0) = b_1 b_2-a_0 a_2\), \(r^*_0(0) = b_1 b_2\), \(r^*_1(0) = -(b_1-a_0)(b_2-a_0)\), and \(q^*_i(n) = q_i(n)\), \(r^*_i(n) = r_i(n)\) for all other \((i, n)\) in Formula (126b). The 0-th specialization of (126b) then leads to \({\hat{q}}_0(0) = {\hat{r}}_0(0) = (b_1-1)(b_2-1)\), \({\hat{r}}_1(0) = -(b_1-1)(b_2-1)\) in Formula (128b), while all other \({\hat{q}}_i(n)\) and \({\hat{r}}_i(n)\) are given as in Table 2, where the circumflex "\(\hat{\phantom {q}}\)" is dropped for simplicity. Clearly, we can make \({\hat{q}}_0(0) = {\hat{r}}_0(0) = 1\), \({\hat{r}}_1(0) = -1\) up to equivalence of continued fractions, and Theorem 1.1 is thereby established.

Example 9.2

The next simplest example of twisted type is given by

$$\begin{aligned} \varvec{k}= \begin{pmatrix} 2, &{} 0, &{} 0 \\ &{} 1, &{} 1 \end{pmatrix}, \qquad \varvec{l}= \begin{pmatrix} 2, &{} 2, &{} 0 \\ &{} 2, &{} 2 \end{pmatrix}, \qquad \varvec{p}= \begin{pmatrix} 2, &{} 2, &{} 2 \\ &{} 3, &{} 3 \end{pmatrix}, \end{aligned}$$

where \(\varvec{p}\) and \(\sigma \) are the same as in Example 9.1. In this case, the recipe in Sect. 2.1 gives

$$\begin{aligned} \rho ({\varvec{a}}; \varvec{l})&= b_1 b_2 + a_0 a_1 - (a_2 - 1) (a_0 + a_1 + 1), \\ \rho ({\varvec{a}}; \varvec{k})&= b_1 b_2 -a_1 a_2 - (a_0 + 1)(b_1+b_2 -a_1-a_2), \\ \rho ({\varvec{a}}+\varvec{k}; \sigma (\varvec{k}))&= (b_1+1)(b_2+1)-(a_0+2)a_2-(a_1+1) (b_1+b_2-a_0-a_2). \end{aligned}$$

With these data, Formula (19) yields

$$\begin{aligned} u({\varvec{a}})&= \frac{(b_1 - a_1) (b_2 - a_1) \cdot \rho ({\varvec{a}}; \varvec{l}) }{a_0 (a_0 + 1) \cdot \rho ({\varvec{a}}+\varvec{k}; \sigma (\varvec{k})) }, \\ \qquad v({\varvec{a}})&= - \frac{(b_1 - a_2 + 1) (b_2 - a_2 + 1) \cdot \rho ({\varvec{a}}; \varvec{k}) }{a_0 (a_0 + 1) \cdot \rho ({\varvec{a}}+\varvec{k}; \sigma (\varvec{k})) }. \end{aligned}$$

In the error estimate (40) in Theorem 3.3, we have \(E(l_1, l_2) = E(1,1) = 4\), and

$$\begin{aligned} c_{\mathrm {t}}({\varvec{a}}; \varvec{k}) = \dfrac{\pi ^{\frac{3}{2}} \cdot \rho ({\varvec{a}}; \varvec{k}) \cdot \varGamma (a_0) \varGamma (a_1) \varGamma (a_2) \varGamma ^2(s({\varvec{a}}))}{2^{a_0+a_1+a_2+1} \cdot 3^{s({\varvec{\scriptstyle a}}) - \frac{1}{2}} \cdot \prod _{j=1}^2\varGamma (b_j-a_0) \cdot \prod _{i=1}^2 \prod _{j=1}^2 \varGamma (b_j-a_i+1)}. \end{aligned}$$

The 0-th specialization leads to a continued fraction expansion (128b) for \({}_3F_2(2, a_1,a_2; b_1, b_2)\).

Example 9.3

The simplest example of straight type is given by \(\varvec{k}= (2,2,2; 3,3)\), \(\varvec{l}= 2 \varvec{k}\), and \(\varvec{p}= 3 \varvec{k}\). The recipe in Sect. 2.1 shows that \(\rho ({\varvec{a}}; \varvec{k}) = a_0 a_1 a_2 (b_1+b_2+1) + b_1 b_2 \{s({\varvec{a}}) - s_2({\varvec{a}}) \}\) and that \(\rho ({\varvec{a}}; 2 \varvec{k})\) is a polynomial of degree 10 (its explicit formula is omitted). Formula (38) yields

$$\begin{aligned} u({\varvec{a}})&= \dfrac{\rho ({\varvec{a}}; 2 \varvec{k})}{\rho ({\varvec{a}}+\varvec{k}; \varvec{k}) \prod _{i=0}^2 a_i(a_i+1)},\\ \qquad v({\varvec{a}})&= - \dfrac{\rho ({\varvec{a}}; \varvec{k}) \prod _{i=0}^2 \prod _{j=1}^2 (b_j-a_i+1)}{\rho ({\varvec{a}}+\varvec{k}; \varvec{k}) \prod _{i=0}^2 a_i(a_i+1)}. \end{aligned}$$

In the error estimate (36) in Theorem 3.2, we have \(D(\varvec{k}) = 2^6 = 64\) and

$$\begin{aligned} c_{\mathrm {s}}({\varvec{a}}; \varvec{k}) = \dfrac{\pi ^{\frac{3}{2}} \cdot \rho ({\varvec{a}}; \varvec{k}) \cdot \varGamma (a_0) \varGamma (a_1) \varGamma (a_2) \varGamma ^2(s({\varvec{a}}))}{2^{a_0+a_1+a_2+9} \cdot 3^{2 s({\varvec{\scriptstyle a}}) - 1} \cdot \prod _{i=0}^2 \prod _{j=1}^2 \varGamma (b_j-a_i+1)}. \end{aligned}$$