1 Introduction

We consider the problem of determining \((\alpha , \beta )\) for

$$\begin{aligned} \int _0^x t^{-\beta } J_\alpha (t)\hbox {d}t\ge 0\qquad (x>0), \end{aligned}$$
(1.1)

where \(J_\alpha \) stands for the first-kind Bessel function of order \(\alpha \). For the sake of convergence and application, we assume throughout that \(\alpha >-1, \,\beta <\alpha +1.\)

Owing to various applications, the problem has been studied by many authors over a long period of time. In connection with the monotonicity of Bessel functions, for instance, the problem dates back to Bailey [5] and Cooke [9]. We refer to Askey [1, 2] for further historical background.

By interpolating known results for some special cases in a certain way, Askey [2] described an explicit range of parameters as follows.

Theorem A

Let \({\mathcal {P}}\) be the set of \((\alpha , \beta )\in {\mathbb {R}}^2\) defined by

$$\begin{aligned} {\mathcal {P}} =\bigl \{\alpha >-1,\, \,0\le \beta <\alpha +1\bigr \} \cup \left\{ \alpha \ge 0,\quad \max \left( -\,\alpha ,\,-\frac{1}{2}\right) \le \beta \le 0\right\} . \end{aligned}$$
  1. (i)

    For each \((\alpha , \beta )\in {\mathcal {P}},\) the inequality of (1.1) holds with strict positivity unless it coincides with \((1/2, -1/2).\)

  2. (ii)

    If \(\alpha >-1, \,\beta <-1/2,\) then (1.1) does not hold.

As is shown in Fig. 1, the positivity region \({\mathcal {P}}\) represents an infinite polygon enclosed by the four boundary lines

$$\begin{aligned} \beta =\alpha +1, \quad \beta =0, \quad \beta =-\,\alpha , \quad \beta =-\,1/2. \end{aligned}$$

Fig. 1: The known positivity region for problem (1.1)

By part (ii), observed by Steinig [20], Theorem A leaves only the trapezoid

$$\begin{aligned} {\mathcal {T}}=\left\{ -1<\alpha<\frac{1}{2},\quad -\frac{1}{2}\le \beta <\min \left( 0,\quad -\alpha \right) \right\} \end{aligned}$$

undetermined with regard to problem (1.1).

As for this missing region, the best possible range of parameters is known in an implicit formulation that involves roots of certain transcendental equations. To be precise, we follow Askey’s summary [2] to state:

Theorem B

Let \(j_{\alpha , 2}\) be the second positive zero of \(J_\alpha (t),\,\alpha >-1.\)

  1. (i)

    For \(-\,1<\alpha \le 1/2,\) (1.1) holds if and only if \(\beta \ge \beta (\alpha ),\) where \(\beta (\alpha )\) denotes the unique zero of

    $$\begin{aligned} A(\beta ) =\int _0^{j_{\alpha , 2}} t^{-\beta } J_\alpha (t) \hbox {d}t,\quad -\frac{1}{2}<\beta <\alpha +1. \end{aligned}$$
  2. (ii)

    As a special case of (1.1), the inequality

    $$\begin{aligned} \int _0^x t^{-\alpha } J_\alpha (t)\hbox {d}t \ge 0\qquad (x>0) \end{aligned}$$
    (1.2)

    holds for \(\alpha \ge {\bar{\alpha }},\) where \({\bar{\alpha }}\) denotes the unique zero of

    $$\begin{aligned} G(\alpha ) = \int _0^{j_{\alpha , 2}} t^{-\alpha } J_\alpha (t) \hbox {d}t,\quad \alpha >-\frac{1}{2}. \end{aligned}$$

Regarding part (i), the existence and uniqueness of such a zero, as well as the positivity of (1.1), are due to Makai [17, 18] when \(-\,1/2<\alpha <1/2,\) and to Askey and Steinig [3] when \(-\,1<\alpha <-1/2,\) respectively. The remaining cases \(\alpha =\pm 1/2\) follow by integration by parts.

Part (ii) was obtained much earlier by Szegö [10] and reproved by Koumandos [14] and by Lorch et al. [15]. Since (1.2) is a special case of (1.1) and part (i) gives a necessary and sufficient condition for (1.1), it is equivalent to define \({\bar{\alpha }}\) as the unique solution of \(\beta (\alpha )=\alpha .\)

A major drawback of Theorem B lies in the intricate nature of the zeros \(\beta (\alpha )\) and \({\bar{\alpha }}\). As Askey [2] points out, in fact, essentially nothing is known about the nature of \(\beta (\alpha )\) and \({\bar{\alpha }}\) beyond a few numerical simulations and the trivial limiting behavior \(\lim _{\alpha \rightarrow -1 +} \beta (\alpha ) = 0.\)
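
For concreteness, such numerical values can be reproduced directly from the defining integral in Theorem B. The following is a minimal sketch assuming the Python library mpmath; the starting values for the root search are ad hoc and not part of the theory.

```python
# Minimal sketch (assuming mpmath): beta(alpha) located as the zero of
# A(beta) = int_0^{j_{alpha,2}} t^(-beta) J_alpha(t) dt.
from mpmath import mp, besselj, besseljzero, quad, findroot

mp.dps = 25

def A(beta, alpha):
    j2 = besseljzero(alpha, 2)              # second positive zero of J_alpha
    return quad(lambda t: t**(-beta) * besselj(alpha, t), [0, j2])

print(findroot(lambda b: A(b, 0), -0.3))    # approx -0.3545 (cf. the table in Section 5)
print(findroot(lambda b: A(b, 0.5), -0.45)) # approx -0.5, i.e. beta(1/2) = -1/2
```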

In this paper, we aim to extend the positivity region \({\mathcal {P}}\) of Theorem A and thereby obtain informative bounds on \(\beta (\alpha )\) and \({\bar{\alpha }}\) that provide insight into their nature and a practical means of approximation.

By making use of the series representation (Luke [16], Watson [21])

$$\begin{aligned} J_\alpha (t) = \frac{1}{\Gamma (\alpha +1)} \left( \frac{t}{2}\right) ^\alpha {}_0F_1\left( \alpha +1\,;-\frac{t^2}{4}\right) \qquad (\alpha >-1) \end{aligned}$$
(1.3)

and integrating termwise, it is easy to see that

$$\begin{aligned} \int _0^x t^{-\beta }J_\alpha (t) \hbox {d}t&= \frac{x^{\alpha -\beta +1}}{2^\alpha (\alpha -\beta +1)\Gamma (\alpha +1)}\nonumber \\&\qquad \times \, {}_1F_2\left[ \begin{array}{c} \frac{\alpha -\beta +1}{2}\\ \alpha +1, \frac{\alpha -\beta +3}{2}\end{array} \biggl | -\frac{\,x^2}{4}\right] , \end{aligned}$$
(1.4)

and hence problem (1.1) is equivalent to the problem of positivity for the functions defined on the right side of (1.4).
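
The termwise evaluation (1.4) can be spot-checked numerically; the sketch below, assuming the Python library mpmath with illustrative parameter values, compares direct quadrature of the left side with the \({}_1F_2\) expression on the right.

```python
# Numerical spot check of (1.4), assuming mpmath; parameter values are illustrative.
from mpmath import mp, besselj, hyper, gamma, quad

mp.dps = 30
alpha, beta, x = mp.mpf('0.3'), mp.mpf('-0.4'), mp.mpf('5')

# Left side: direct quadrature of the Bessel integral.
lhs = quad(lambda t: t**(-beta) * besselj(alpha, t), [0, x])

# Right side: the 1F2 representation of (1.4).
a = (alpha - beta + 1) / 2
rhs = (x**(alpha - beta + 1) / (2**alpha * (alpha - beta + 1) * gamma(alpha + 1))
       * hyper([a], [alpha + 1, a + 1], -x**2 / 4))
print(lhs, rhs)   # the two values should agree to working precision
```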

More generally, we shall be concerned with the positivity of generalized hypergeometric functions of type

$$\begin{aligned} {}_1F_2\left[ \begin{array}{c} a\\ b, c\end{array}\biggr | -\frac{\,x^2}{4}\right] \qquad (x>0) \end{aligned}$$
(1.5)

with parameters \(a>0, \,b>0, \,c>0.\) In the recent work [8], to be explained in detail below, a positivity criterion for functions of type (1.5) is established in terms of the Newton diagram associated with \(\{(a+1/2, 2a), \,(2a, a+1/2)\}.\) Since that criterion leaves a certain region of parameters undetermined, however, applying it to (1.4) recovers Theorem A immediately but still does not cover the missing region \({\mathcal {T}}\).

The main purpose of this paper is to give an extension of the Newton diagram that leads to an improvement of Theorem A in an explicit way and provides information on the nature of \(\beta (\alpha )\) and \({\bar{\alpha }}\).

As is more or less standard in the theory of special functions, we shall carry out Gasper’s sums of squares method [12] for investigating positivity, which essentially reduces the matter to how to determine the signs of \({}_4F_3\) terminating series given in the form

$$\begin{aligned} {}_{4}F_{3} \left[ \begin{array}{c} -n, n+\alpha _1, \alpha _2, \alpha _3\\ \beta _1, \beta _2, \beta _{3}\end{array}\right] , \quad n=1, 2, \ldots , \end{aligned}$$
(1.6)

for appropriate values of \(\alpha _j, \beta _j\) expressible in terms of \(a, b, c.\)

From a technical point of view, if we express (1.6) as a finite sum with index k, it is the alternating factor \((-\,n)_k\) that causes the main difficulty in analyzing its sign. To circumvent this, we shall apply Whipple's transformation formula to convert it into a terminating \({}_7F_6\) series that does not involve such an alternating factor. By estimating a lower bound for the transformed series, we shall deduce positivity in an inductive way.

While Askey and Szegö studied problem (1.1) primarily as a limiting case for the positivity of certain sums of Jacobi polynomials, there are many other applications and generalizations (see, e.g., [10, 12, 13, 19]). As an exemplary generalization, we shall consider the integrals of type

$$\begin{aligned} \int _0^x (x^2- t^2)^\gamma t^{-\beta }J_\alpha (t) \hbox {d}t\qquad (x>0), \end{aligned}$$

and extend the range of parameters for its positivity given in Gasper [12] by applying our new criterion.

2 Preliminaries

As is standard, given nonnegative integers \(p, q,\) we shall define and write \({}_pF_q\) generalized hypergeometric functions in the form

$$\begin{aligned} {}_pF_q\left[ \begin{array}{c} \alpha _1, \ldots , \alpha _p\\ \beta _1, \ldots , \beta _q\end{array}\biggr | \,z\right] = \sum _{k=0}^\infty \frac{(\alpha _1)_k\cdots (\alpha _p)_k}{k!\,(\beta _1)_k \cdots (\beta _q)_k}\, z^k\qquad (z\in {\mathbb {C}}), \end{aligned}$$
(2.1)

where the coefficients are written in Pochhammer's notation; that is, for any \(\alpha \in {\mathbb {R}}\), \((\alpha )_k = \alpha (\alpha +1)\cdots (\alpha +k-1)\) when \(k\ge 1\) and \((\alpha )_0=1.\) In the case \(z=1\), we shall omit the argument z in what follows.

A function of type (2.1) is said to be Saalschützian when the parameters satisfy the condition \(1+\alpha _1+\cdots +\alpha _p = \beta _1+\cdots +\beta _q.\) If one of the numerator-parameters \(\alpha _j\) is a negative integer, e.g., \(\alpha _1=-\,n\) with n a positive integer, then it becomes a terminating series given by

$$\begin{aligned} {}_pF_q\left[ \begin{array}{c} -n, \alpha _2, \ldots , \alpha _p\\ \beta _1, \ldots , \beta _q\end{array}\biggr | \,z\right] = \sum _{k=0}^n (-\,1)^k \left( {\begin{array}{c}n\\ k\end{array}}\right) \frac{(\alpha _2)_k\cdots (\alpha _p)_k}{(\beta _1)_k \cdots (\beta _q)_k}\, z^k. \end{aligned}$$

For the generalized hypergeometric functions of type (2.1) that are both terminating and Saalschützian, there are a number of formulas available for summing or transforming into other terminating series. Of particular importance will be the following, extracted from Bailey [4]:

  1. (i)

    (Saalschütz’s formula, [4, 2.2(1)]) If \(1+\alpha _1+\alpha _2=\beta _1+\beta _2,\) then

    $$\begin{aligned} {}_3F_2\left[ \begin{array}{c} -n, n+\alpha _1, \alpha _2 \\ {} \beta _1, \beta _2\end{array}\right] = \frac{\left( \beta _1-\alpha _2\right) _n \left( \beta _2-\alpha _2\right) _n}{\left( \beta _1\right) _n\left( \beta _2\right) _n}. \end{aligned}$$
  2. (ii)

    (Whipple’s transformation formula, [4, 4.3(4)]) If \(1+\alpha _1+\alpha _2 +\alpha _3= \beta _1 +\beta _2 +\beta _3,\) then

    $$\begin{aligned}&{}_{4}F_{3} \left[ \begin{array}{c} -n, n+\alpha _1, \alpha _2, \alpha _3\\ {} \beta _1, \beta _2, \beta _{3}\end{array}\right] =\frac{(1+\alpha _1-\beta _3)_n (\beta _3-\alpha _2)_n}{(1+\sigma )_n (\beta _3)_n}\quad \nonumber \\&\quad \times {}_7F_6\biggl [\begin{array}{cccc} \sigma , &{}1+ \sigma /2, &{}-n, &{}n+\alpha _1,\\ &{}\sigma /2, &{}n+1+\sigma , &{}-n+1+\alpha _2-\beta _3,\end{array}\nonumber \\&\quad \quad \quad \quad \quad \quad \begin{array}{ccc} \alpha _2, &{}\beta _1-\alpha _3, &{}\beta _2-\alpha _3\\ 1+\alpha _1-\beta _3, &{}\beta _2, &{}\beta _1 \end{array}\biggr ], \end{aligned}$$
    (2.2)

where we put \(\sigma =\alpha _1+\alpha _2-\beta _3.\) This is a modification of the original form suited to the present application, arranged in such a way that the column sums of the \({}_6F_6\) terminating series obtained by deleting \(\sigma \) are all equal to \(1+\sigma \).
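
Both identities can be spot-checked numerically before being used; for instance, the following sketch (assuming mpmath, with arbitrary illustrative parameters) verifies Saalschütz's formula, and the same pattern applies to (2.2).

```python
# Numerical spot check of Saalschutz's formula, assuming mpmath.
from mpmath import mp, hyper, rf

mp.dps = 30
n, a1, a2, b1 = 5, mp.mpf('0.7'), mp.mpf('0.4'), mp.mpf('1.3')
b2 = 1 + a1 + a2 - b1                     # Saalschutzian condition

lhs = hyper([-n, n + a1, a2], [b1, b2], 1)
rhs = rf(b1 - a2, n) * rf(b2 - a2, n) / (rf(b1, n) * rf(b2, n))
print(lhs, rhs)                           # the two values should agree
```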

3 Positivity of \({}_4F_3\) Terminating Series

The purpose of this section is to prove the following positivity result for a special class of terminating \({}_4F_3\) generalized hypergeometric series, which will be crucial in our subsequent developments.

Lemma 3.1

For each positive integer n, put

$$\begin{aligned} \Theta _n={}_{4}F_{3} \left[ \begin{array}{c} -n, n+\alpha _1, \alpha _2, \alpha _3\\ \beta _1, \beta _2, \beta _{3}\end{array}\right] . \end{aligned}$$

Suppose that \(\alpha _j, \beta _j\) satisfy the following assumptions simultaneously:

$$\begin{aligned} \left\{ \begin{aligned}&{\mathrm{(A1)}\quad 1+\alpha _1+\alpha _2+\alpha _3 = \beta _1+\beta _2+\beta _3,}\\&{\mathrm{(A2)}\quad 0<\alpha _2<\beta _3\le 2+\alpha _1,}\\&{\mathrm{(A3)}\quad 0<\alpha _3 <\min \bigl (\beta _1,\,\beta _2\bigr ),}\\&{\mathrm{(A4)}\quad (1+\alpha _1)\alpha _2\alpha _3 \le \beta _1\beta _2\beta _3.}\end{aligned}\right. \end{aligned}$$
(3.1)

Then \(\Theta _1\ge 0\) and \(\Theta _n >0\) for all \(n\ge 2.\)

Proof

We apply Whipple’s transformation formula to transform \(\Theta _n\) into a product of \({}_3F_2\) and \({}_7F_6\) terminating series as stated in (2.2). By using

$$\begin{aligned} (\alpha )_n = (\alpha )_k (k+\alpha )_{n-k},\quad (\alpha )_n= (\alpha )_{n-k}(n-k+\alpha )_k, \end{aligned}$$

valid for any real number \(\alpha \) and \(k=0, \ldots , n,\) formula (2.2) can be written equivalently as

$$\begin{aligned} \Theta _n&= \frac{1}{(1+\sigma )_n (\beta _3)_n}\,\Omega _n,\\ \Omega _n&=\sum _{k=0}^n \left( {\begin{array}{c}n\\ k\end{array}}\right) (k+1+\alpha _1-\beta _3)_{n-k}(\beta _3-\alpha _2)_{n-k}\,\frac{ (n+\alpha _1)_k}{(n+1+\sigma )_k} \\&\qquad \times \,\, \frac{(\alpha _2)_k (\beta _1-\alpha _3)_k(\beta _2-\alpha _3)_k}{(\beta _1)_k (\beta _2)_k} \frac{(\sigma )_k (1+\sigma /2)_k}{(\sigma /2)_k}, \end{aligned}$$

where \(\sigma = \alpha _1+\alpha _2-\beta _3\) and the last factor must be understood as

$$\begin{aligned} \frac{(\sigma )_k (1+\sigma /2)_k}{(\sigma /2)_k} = \left\{ \begin{aligned}&{\qquad 1}&{\text {for}\quad k=0,}\\&{(1+\sigma )_{k-1}(2k+\sigma )}&{\text {for}\quad k\ge 1.} \end{aligned}\right. \end{aligned}$$
(3.2)

By the Saalschützian condition of (A1) and (A3), we observe that

$$\begin{aligned} 1+\sigma =\beta _1+\beta _2-\alpha _3>0, \end{aligned}$$

and hence the positivity or nonnegativity of \(\Theta _n\) reduces to that of \(\Omega _n\). We also note that the assumptions of (A1)–(A3) imply

$$\begin{aligned} 1+\alpha _1= \beta _1+ (\beta _2 -\alpha _3)+ (\beta _3-\alpha _2)>0. \end{aligned}$$

As a consequence, if \(\beta _3\le 1+\alpha _1,\) then the first term is nonnegative and all of the other terms are positive so that \(\Omega _n>0\) for each \(n\ge 1.\) Therefore it suffices to deal with the case \(\beta _3>1+\alpha _1,\) which will be assumed hereafter.

In the special case \(n=1,\) it is a matter of algebra to obtain the factorization

$$\begin{aligned} \Omega _1&= (1+\alpha _1-\beta _3)(\beta _3-\alpha _2) + \frac{(1+\alpha _1)\alpha _2(\beta _1-\alpha _3)(\beta _2-\alpha _3)}{\beta _1\beta _2} \\&=\frac{(\beta _1+\beta _2-\alpha _3)\bigl [\beta _1\beta _2\beta _3-(1+\alpha _1)\alpha _2\alpha _3\bigr ]}{\beta _1\beta _2}, \end{aligned}$$

which clearly shows \(\Omega _1\ge 0\) under the stated assumptions.
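
This factorization is a polynomial identity under the constraint (A1) and can be confirmed symbolically; a short sketch assuming the Python library sympy:

```python
# Symbolic check of the factorization of Omega_1 under (A1), assuming sympy.
import sympy as sp

a1, a2, a3, b1, b2 = sp.symbols('alpha1 alpha2 alpha3 beta1 beta2', positive=True)
b3 = 1 + a1 + a2 + a3 - b1 - b2           # condition (A1)

omega1 = (1 + a1 - b3)*(b3 - a2) + (1 + a1)*a2*(b1 - a3)*(b2 - a3)/(b1*b2)
claimed = (b1 + b2 - a3)*(b1*b2*b3 - (1 + a1)*a2*a3)/(b1*b2)
print(sp.cancel(omega1 - claimed))        # expected output: 0
```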

For \(n\ge 2,\) we shall deduce the strict positivity of \(\Omega _n\) by considering the cases \(\beta _3\ge 1+\alpha _2\) and \(\beta _3<1+\alpha _2\) separately in the following manner.

I. The case \(\beta _3\ge 1+\alpha _2.\) We claim that

$$\begin{aligned} \Omega _n>(2+\alpha _1-\beta _3)_{n-1}(1+\beta _3-\alpha _2)_{n-1}\Omega _1. \end{aligned}$$
(3.3)

To verify, we observe that each term of \(\Omega _n\) except the first one is positive so that \(\Omega _n\) exceeds the sum of the first two terms, which implies

$$\begin{aligned} \Omega _n&> (2+\alpha _1-\beta _3)_{n-1}(\beta _3-\alpha _2)_n\nonumber \\&\qquad \times \biggl [ 1+\alpha _1-\beta _3 +\frac{n(n+\alpha _1)\alpha _2(\beta _1-\alpha _3)(\beta _2-\alpha _3)(2+\sigma )}{(n+1+\sigma )(n-1+\beta _3-\alpha _2)\beta _1\beta _2}\biggr ]. \end{aligned}$$
(3.4)

If we set

$$\begin{aligned} f(n)&= \frac{n(n+\alpha _1)}{(n+1+\sigma )(n-1+\beta _3-\alpha _2)}\\&= \frac{n^2 + \alpha _1n}{n^2 + \alpha _1 n + (1+\sigma )(\beta _3-1-\alpha _2)} \end{aligned}$$

and regard n as a continuous variable, then the derivative of f is given by

$$\begin{aligned} f'(n) = \frac{(2n+\alpha _1)(1+\sigma )(\beta _3-1-\alpha _2)}{\left[ n^2 + \alpha _1 n + (1+\sigma )(\beta _3-1-\alpha _2)\right] ^2}. \end{aligned}$$

Due to the case assumption, it shows \(f'(n)\ge 0\) on the interval \([1, \infty )\), and hence we may conclude \(f(n)\ge f(1)\); that is,

$$\begin{aligned} \frac{n(n+\alpha _1)}{(n+1+\sigma )(n-1+\beta _3-\alpha _2)}\ge \frac{1+\alpha _1}{(2+\sigma )(\beta _3-\alpha _2)}. \end{aligned}$$

Inserting this estimate into (3.4) and simplifying, we obtain

$$\begin{aligned} \Omega _n&> (2+\alpha _1-\beta _3)_{n-1}\frac{(\beta _3-\alpha _2)_n}{(\beta _3-\alpha _2)}\\&\qquad \times \,\biggl [(1+\alpha _1-\beta _3)(\beta _3-\alpha _2) +\frac{(1+\alpha _1)\alpha _2(\beta _1-\alpha _3)(\beta _2-\alpha _3)}{\beta _1\beta _2}\biggr ]\\&=(2+\alpha _1-\beta _3)_{n-1}(1+\beta _3-\alpha _2)_{n-1}\Omega _1, \end{aligned}$$

which proves (3.3). The strict positivity of \(\Omega _n\) is an immediate consequence of this inequality and the nonnegativity of \(\Omega _1\).

II. The case \(\beta _3<1+\alpha _2.\) In this case, we shall deduce the strict positivity of \(\Omega _n\) by induction on n. To simplify notation, we put

$$\begin{aligned} A_{n, k}&= \left( {\begin{array}{c}n\\ k\end{array}}\right) (k+1+\alpha _1-\beta _3)_{n-k}(\beta _3-\alpha _2)_{n-k}\,\frac{ (n+\alpha _1)_k}{(n+1+\sigma )_k},\\ B_k&=\frac{(\alpha _2)_k (\beta _1-\alpha _3)_k(\beta _2-\alpha _3)_k}{(\beta _1)_k (\beta _2)_k} \frac{(\sigma )_k (1+\sigma /2)_k}{(\sigma /2)_k} \end{aligned}$$

so that \(\Omega _n = \sum _{k=0}^n A_{n, k} B_k.\) By the stated assumptions and (3.2), we note that \(A_{n, 0}<0\) but \(A_{n, 1}\ge 0,\quad A_{n, k}>0\) for \(2\le k\le n\) and \(B_k>0\) for each \(0\le k\le n.\) In this notation we claim that

$$\begin{aligned} \Omega _{n+1}>A_{n+1, n+1} B_{n+1} + \left[ \frac{(n+1)(n+1+\alpha _1-\beta _3)(n+1+\sigma )}{n+\alpha _1}\right] \Omega _n. \end{aligned}$$
(3.5)

Since we have already shown that \(\Omega _1\ge 0,\) once (3.5) is established, an obvious induction argument allows us to conclude \(\Omega _n>0\) for all \(n\ge 2\).

To verify, we make use of the identities

$$\begin{aligned} \left( {\begin{array}{c}n+1\\ k\end{array}}\right)&=\left( {\begin{array}{c}n\\ k\end{array}}\right) \,\frac{n+1}{n+1-k},\\ (k+1+\alpha _1-\beta _3)_{n+1-k}&= (k+1+\alpha _1-\beta _3)_{n-k}\,(n+1+\alpha _1-\beta _3),\\ (\beta _3-\alpha _2)_{n+1-k}&= (\beta _3-\alpha _2)_{n-k}\,(n-k+\beta _3-\alpha _2),\\ (n+1+\alpha _1)_k&= (n+\alpha _1)_k\,\frac{n+k+\alpha _1}{n+\alpha _1},\\ (n+2+\sigma )_k&= (n+1+\sigma )_k\,\frac{n+1+k+\sigma }{n+1+\sigma }, \end{aligned}$$

to write \(A_{n+1, k}\) in the form

$$\begin{aligned} A_{n+1, k}&= A_{n, k} \left[ \frac{(n+1)(n+1+\alpha _1-\beta _3)(n+1+\sigma )}{n+\alpha _1} \right] g_n(k),\\ g_n(k)&= \frac{(k+n+\alpha _1)(k-n+\alpha _2-\beta _3)}{(k-n-1)(k+n+1+\sigma )}\\&= \frac{k^2 +\sigma k -(n+\alpha _1)(n+\beta _3-\alpha _2)}{k^2+\sigma k -(n+1)(n+1+\sigma )}. \end{aligned}$$

Regarding k as a continuous variable as before, we differentiate

$$\begin{aligned} g_n'(k) = \frac{(2k+\sigma )(\beta _3-\alpha _2-1)(2n+1+\alpha _1)}{\left[ k^2+\sigma k -(n+1)(n+1+\sigma )\right] ^2}. \end{aligned}$$

By the case assumption, this shows \(g_n'(k)<0\) on the interval \([1, \infty )\). Hence, in view of the limiting behavior \(g_n(k)\rightarrow 1\) as \(k\rightarrow \infty ,\) we may conclude \(g_n(k)>1\) for \(k=1, \ldots , n,\) which leads to the estimate

$$\begin{aligned} A_{n+1, k} \ge A_{n, k} \left[ \frac{(n+1)(n+1+\alpha _1-\beta _3)(n+1+\sigma )}{n+\alpha _1} \right] \end{aligned}$$
(3.6)

for each \(k=1, \ldots , n\) with strict inequalities when \(k\ge 2.\)

As for the initial term \(A_{n+1, 0}\), we may write

$$\begin{aligned} A_{n+1, 0} = A_{n, 0} \bigl [(n+1+\alpha _1-\beta _3)(n+\beta _3-\alpha _2)\bigr ]. \end{aligned}$$

We observe that an upper bound for the last factor is given by

$$\begin{aligned} n+\beta _3-\alpha _2<\frac{(n+1)(n+1+\sigma )}{n+\alpha _1}, \end{aligned}$$

which follows easily from the sign of the cross difference

$$\begin{aligned}&(n+1)(n+1+\sigma ) - (n+\beta _3-\alpha _2)(n+\alpha _1)\\&\quad = 2(1+\alpha _2-\beta _3) n + (1+\alpha _1)(1+\alpha _2-\beta _3)\\&\quad =(1+\alpha _2-\beta _3)(2n+1+\alpha _1) >0 \end{aligned}$$

due to the case assumption. Since \(A_{n, 0}<0,\) this upper bound gives

$$\begin{aligned} A_{n+1, 0} >A_{n, 0} \left[ \frac{(n+1)(n+1+\alpha _1-\beta _3)(n+1+\sigma )}{n+\alpha _1}\right] . \end{aligned}$$
(3.7)

Multiplying each term of (3.6) and (3.7) by \(B_k\) and adding up, we obtain

$$\begin{aligned} \Omega _{n+1}&= A_{n+1, n+1} B_{n+1} + \sum _{k=0}^n A_{n+1, k} B_k \\&> A_{n+1, n+1} B_{n+1} + \left[ \frac{(n+1)(n+1+\alpha _1-\beta _3)(n+1+\sigma )}{n+\alpha _1}\right] \sum _{k=0}^n A_{n, k} B_k \\&= A_{n+1, n+1} B_{n+1} + \left[ \frac{(n+1)(n+1+\alpha _1-\beta _3)(n+1+\sigma )}{n+\alpha _1}\right] \Omega _n, \end{aligned}$$

which proves (3.5), and our proof is now complete. \(\square \)

Remark 3.1

Since \(\Theta _n\) is symmetric in \(\alpha _2, \alpha _3\) and \(\beta _1, \beta _2, \beta _3,\) respectively, Lemma 3.1 also holds true if \(\alpha _2\) is interchanged with \(\alpha _3\) or \(\beta _1, \beta _2, \beta _3\) are permuted in the conditions of (A2) and (A3).
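
As a quick numerical illustration of Lemma 3.1, one may evaluate \(\Theta _n\) directly at a parameter point satisfying (A1)–(A4); the sketch below assumes mpmath and uses an arbitrary admissible choice of parameters, not one taken from the applications later in the paper.

```python
# Numerical illustration of Lemma 3.1, assuming mpmath; parameters are arbitrary.
from mpmath import mp, hyper

mp.dps = 30
a1, a2, a3 = 1.0, 0.8, 0.5
b1, b2, b3 = 1.2, 1.1, 1.0

assert abs((1 + a1 + a2 + a3) - (b1 + b2 + b3)) < 1e-12   # (A1)
assert 0 < a2 < b3 <= 2 + a1                              # (A2)
assert 0 < a3 < min(b1, b2)                               # (A3)
assert (1 + a1) * a2 * a3 <= b1 * b2 * b3                 # (A4)

for n in range(1, 8):
    theta = hyper([-n, n + a1, a2, a3], [b1, b2, b3], 1)
    print(n, theta)   # expect Theta_1 >= 0 and Theta_n > 0 for n >= 2
```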

4 Rational Extension of the Newton Diagram

In this section, we aim to extend the aforementioned positivity criterion of [8] for the generalized hypergeometric functions of type (1.5).

To state the criterion precisely, we recall that the Newton diagram associated with a finite set of planar points \(\bigl \{\left( \alpha _i,\,\beta _i\right) : i= 1, \ldots , m\bigr \}\) refers to the closed convex hull containing

$$\begin{aligned} \bigcup _{i=1}^m\,\Big \{(x, y)\in {\mathbb {R}}^2 : x\ge \alpha _i,\quad y\ge \beta _i\Big \}. \end{aligned}$$

For each \(a>0,\) we denote by \(O_a\) the set of \((b, c)\in {\mathbb {R}}_+^2\) defined by

$$\begin{aligned} O_a&= \left\{ a<b<a+\frac{1}{2},\quad c\ge 3a+\frac{1}{2}-b\right\} \nonumber \\&\qquad \cup \left\{ a<c<a+\frac{1}{2},\quad b\ge 3a+\frac{1}{2}-c\right\} \quad \text {if}\quad a\ge \frac{1}{2}, \end{aligned}$$
(4.1)
$$\begin{aligned} O_a&= \left\{ a<b<2a,\quad c\ge 3a+\frac{1}{2}-b\right\} \nonumber \\&\qquad \cup \left\{ a<c<2a,\quad b\ge 3a+\frac{1}{2}-c\right\} \quad \text {if}\quad 0<a<\frac{1}{2}, \end{aligned}$$
(4.2)

which represents two symmetric infinite strips bounded by \(b+c= 3a + 1/2\) and four half-lines parallel to the coordinate axes.

By combining the methods of Fields and Ismail [11] and of Gasper [12] with fractional integrals having squares of Bessel functions as kernels, two of the present authors established the following criterion.

Theorem 4.1

(Cho and Yun [8]) For \(a>0,\,b>0,\,c>0,\) put

$$\begin{aligned} \Phi (x)={}_1F_2\left[ \begin{array}{c} a\\ b, c\end{array}\biggr | -\frac{\,x^2}{4}\right] \qquad (x>0). \end{aligned}$$

Let \(P_a\) be the Newton diagram associated with \(\Lambda = \left\{ \left( a+\frac{1}{2},\,2a\right) ,\,\left( 2a,\,a+\frac{1}{2}\right) \right\} \), \(O_a\) the set defined in (4.1), (4.2), and \(N_a\) the complement of \(P_a\cup O_a\) in \({\mathbb {R}}_+^2\) so that the decomposition \({\mathbb {R}}_+^2 = P_a\cup O_a\cup N_a\) holds.

  1. (i)

    If \(\Phi \ge 0,\) then necessarily

    $$\begin{aligned} b>a,\quad c>a,\quad b+c\ge 3a+\frac{1}{2}. \end{aligned}$$
    (4.3)
  2. (ii)

    If \((b, c)\in P_a,\) then \(\Phi \ge 0\) and strict positivity holds unless \((b, c)\in \Lambda .\)

  3. (iii)

    If \((b, c)\in N_a,\) then \(\Phi \) alternates in sign.

Remark 4.1

For the cases of nonnegativity, we follow [8] to introduce

$$\begin{aligned} {\mathbb {J}}_\alpha (x) = {}_0F_1\left( \alpha +1\,; -\frac{\,x^2}{4}\right) \qquad (\alpha >-1). \end{aligned}$$

Owing to the relation (1.3), it is easy to see that \({\mathbb {J}}_\alpha \) shares its positive zeros with the Bessel function \(J_\alpha \) and that its square takes the form

$$\begin{aligned} {\mathbb {J}}_\alpha ^2\left( x\right) = {}_1F_2\left[ \begin{array}{c} \alpha + \frac{1}{2}\\ \alpha +1, 2\alpha +1\end{array}\biggr | - x^2\right] \end{aligned}$$

when \(\alpha >-1/2.\) Consequently, if \((b, c)\in \Lambda ,\) then

$$\begin{aligned} \Phi (x) = {}_1F_2\left[ \begin{array}{c} a\\ a+\frac{1}{2}, 2a\end{array}\biggr | -\frac{\,x^2}{4}\right] = {\mathbb {J}}_{a-\frac{1}{2}}^2\left( \frac{x}{2}\right) , \end{aligned}$$
(4.4)

which is nonnegative but has infinitely many zeros on \((0, \infty )\).
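
Identity (4.4) can likewise be checked numerically; the sketch below (assuming mpmath, with illustrative values of a and x) compares the \({}_1F_2\) at a corner point of \(\Lambda \) with the squared normalized Bessel function.

```python
# Numerical check of identity (4.4), assuming mpmath; a and x are illustrative.
from mpmath import mp, hyper, besselj, gamma

mp.dps = 30
a, x = mp.mpf('0.8'), mp.mpf('3.7')

lhs = hyper([a], [a + mp.mpf('0.5'), 2*a], -x**2/4)

# JJ_nu(y) = Gamma(nu+1) * (y/2)^(-nu) * J_nu(y), evaluated at nu = a - 1/2, y = x/2.
nu = a - mp.mpf('0.5')
JJ = gamma(nu + 1) * (x/4)**(-nu) * besselj(nu, x/2)
print(lhs, JJ**2)   # the two values should agree
```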

Theorem 4.1 unifies many earlier positivity results, and we refer to our recent paper [7], in which it is applied to improve the results of Misiewicz and Richards [19] and of Buhmann [6] simultaneously.

For \((b, c)\in O_a,\) it is left undetermined whether positivity holds. We now state our main extension theorem, which still does not fill out the whole of \(O_a\) but covers the part of it lying on or above the graph of the rational function

$$\begin{aligned} c= a+\frac{a}{2(b-a)}\qquad (b>a). \end{aligned}$$

Below, \(\Lambda \) denotes the same set as above.

Theorem 4.2

For \(a>0,\,b>0,\,c>0,\) put

$$\begin{aligned} \Phi (x)={}_1F_2\left[ \begin{array}{c} a\\ b, c\end{array}\biggr | -\frac{\,x^2}{4}\right] \qquad (x>0). \end{aligned}$$

Let \(P_a^*\) be the set of parameter pairs \((b, c)\in {\mathbb {R}}_+^2\) defined by

$$\begin{aligned} P_a^* = \left\{ b>a,\,c>a,\, c\ge \max \Big [ 3a+\frac{1}{2}-b,\quad a+ \frac{a}{2(b-a)}\Big ]\right\} . \end{aligned}$$

If \((b, c)\in P_a^*{\setminus }\Lambda ,\) then \(\Phi \) is strictly positive.

Proof

In view of the difference

$$\begin{aligned} a+ \frac{a}{2(b-a)} - \left[ 3a+\frac{1}{2}-b\right] = \frac{\left( b-a-\frac{1}{2}\right) (b-2a)}{b-a}, \end{aligned}$$

it is clear from the graph that the curve \(c= a+ a/(2(b-a))\) lies below the line \(c= 3a+1/2-b\) only for \(b\in L,\) where L denotes the interval

$$\begin{aligned} L=\left\{ (1-t)(a+1/2) + t(2a) : 0\le t\le 1\right\} . \end{aligned}$$

Since Theorem 4.1 already shows that \(\Phi \) is strictly positive for \((b, c)\in P_a{\setminus }\Lambda ,\) it remains to prove the positivity of \(\Phi \) in the case \((b, c)\in P_a^*\) with b lying outside the closed interval L. By symmetry in \(b, c,\) we may assume \(b\le c\), and hence it suffices to deal with the case

$$\begin{aligned} c\ge a+ \frac{a}{2(b-a)}, \end{aligned}$$
(4.5)

where \(a<b<a+1/2\) when \(a\ge 1/2\) or \(a<b<2a\) when \(0<a<1/2.\)

We apply Gasper’s sums of squares formula [12, 3.1] to write

$$\begin{aligned} \Phi (x)&= \Gamma ^2(\nu +1)\left( \frac{x}{4}\right) ^{-2\nu }\biggl \{J_{\nu }^2\left( \frac{x}{2}\right) \nonumber \\&\qquad +\sum _{n=1}^{\infty } C(n, \nu )\frac{2n+2\nu }{n+2\nu }\frac{(2\nu +1)_n}{n!} J_{\nu +n}^2\left( \frac{x}{2}\right) \biggr \}, \end{aligned}$$
(4.6)

in which \(C(n, \nu )\) denotes the terminating series defined by

$$\begin{aligned} C(n, \nu )= {}_4F_{3} \left[ \begin{array}{c} -n,\,n+2\nu ,\,\nu + 1,\,a\\ \nu +\frac{1}{2},\,b,\,c\end{array}\right] \end{aligned}$$
(4.7)

and \(\nu \) can be arbitrary as long as \(2\nu \) is not a negative integer.

Due to the interlacing property of the zeros of the Bessel functions \(J_\nu ,\,J_{\nu +1}\) (see Watson [21]), the positivity of \(\Phi \) would follow instantly from formula (4.6) if, for instance, \(C(n, \nu )>0\) for all n and \(\nu >-1/2.\)

To investigate the sign of \(C(n, \nu )\), we apply Lemma 3.1 with

$$\begin{aligned} \alpha _1=2\nu ,\quad \alpha _2=\nu +1,\quad \alpha _3=a,\quad \beta _1=\nu +\frac{1}{2},\quad \beta _2=b,\quad \beta _3=c. \end{aligned}$$

The Saalschützian condition (A1) of (3.1) is equivalent to the choice

$$\begin{aligned} \nu = \frac{1}{2}\left( b+ c-a-\frac{3}{2}\right) . \end{aligned}$$
(4.8)

It is elementary to translate conditions (A2)–(A4) of (3.1) into

$$\begin{aligned} \left\{ \begin{aligned}&{\quad c> b-a + \frac{1}{2},\quad b\ge a-\frac{1}{2},}\\&{\quad c>3a+\frac{1}{2}-b,\quad b>a,}\\&{\quad c\ge a+ \frac{a}{2(b-a)}}. \end{aligned}\right. \end{aligned}$$
(4.9)

On inspecting the region determined by (4.9) in the \((b, c)\)-plane, it is immediate to find that (4.9) amounts to (4.5) subject to the restriction \(a<b<a+1/2\) when \(a\ge 1/2\) or \(a<b<2a\) when \(0<a<1/2.\)

By Lemma 3.1, we may conclude \(C(1, \nu )\ge 0\) and \(C(n, \nu )>0\) for all \(n\ge 2\) with \(\nu \) chosen according to (4.8), and our proof is now complete. \(\square \)
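
Theorem 4.2 can be illustrated numerically at a parameter point of \(P_a^*\) lying outside the Newton diagram \(P_a\); the grid and parameter values in the following sketch (assuming mpmath) are ad hoc.

```python
# Grid check of Theorem 4.2 at a point of P_a^* outside P_a, assuming mpmath.
from mpmath import mp, hyper, linspace

mp.dps = 20
a, b = mp.mpf('1'), mp.mpf('1.2')                       # a < b < a + 1/2
c = max(3*a + mp.mpf('0.5') - b, a + a/(2*(b - a)))     # here c = 3.5, on the boundary of P_a^*

vals = [hyper([a], [b, c], -x**2/4) for x in linspace(0.1, 30, 120)]
print(min(vals))   # expected positive, in agreement with Theorem 4.2
```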

Fig. 2: The improved positivity region \(P_a^*\) in the case \(a>\frac{1}{2}\), which includes the Newton diagram associated with \(\Lambda =\{(a+1/2, 2a), \,(2a, a+1/2)\}\)

Remark 4.2

For the sake of convenience, we illustrate Theorems 4.1 and 4.2 with Figs. 2 and 3 for the cases \(a>1/2\) and \(a=1/2\) separately (the case \(0<a<1/2\) is similar to the case \(a>1/2\)). In each figure, the red-colored part indicates the improved positivity region \(P_a^*\), and the grey-colored part indicates the region where positivity breaks down. The blank or white-colored parts indicate the missing region.

Fig. 3: The improved positivity region \(P_{\frac{1}{2}}^*\), which includes the Newton diagram associated with \(\Lambda =\{(1, 1)\}\)

5 Askey–Szegö Problem

Returning to problem (1.1), an application of the above positivity criteria in an obvious way yields what we aimed to establish.

Theorem 5.1

For \(\alpha >-1,\,\beta <\alpha +1,\) put

$$\begin{aligned} \psi (x) = \int _0^x t^{-\beta } J_\alpha (t) \hbox {d}t\qquad (x>0). \end{aligned}$$
  1. (i)

    If \(\psi \ge 0,\) then necessarily

    $$\begin{aligned} -\alpha -1<\beta <\alpha +1,\quad \beta \ge -\frac{1}{2}. \end{aligned}$$
  2. (ii)

    Let \({\mathcal {P}}^*\) be the set of \((\alpha , \beta )\in {\mathbb {R}}^2\) defined by

    $$\begin{aligned} {\mathcal {P}}^*=\left\{ \alpha >-1,\quad \max \Big [-\frac{1}{2},\,-\frac{1}{3} (\alpha +1)\Big ]\le \beta <\alpha +1\right\} . \end{aligned}$$

    Then \(\psi \ge 0\) for each \((\alpha , \beta )\in {\mathcal {P}}^*\), and strict positivity holds unless it coincides with \((1/2, -1/2).\)

Proof

In view of (1.4), it suffices to deal with

$$\begin{aligned} \Psi (x) = {}_1F_2\left[ \begin{array}{c} \frac{\alpha -\beta +1}{2}\\ \alpha +1, \frac{\alpha -\beta +1}{2} +1\end{array} \biggl | -\frac{\,x^2}{4}\right] \qquad (x>0). \end{aligned}$$
(5.1)

Under the assumption \(\alpha >-1,\,\beta <\alpha +1,\) each parameter of \(\Psi \) is positive. If \(\Psi \ge 0,\) then it follows from necessary condition (4.3) that

$$\begin{aligned} \alpha +1&> \frac{\alpha -\beta +1}{2},\\ \frac{\alpha -\beta +1}{2} +1&\ge 3\left( \frac{\alpha -\beta +1}{2}\right) +\frac{1}{2} -(\alpha +1), \end{aligned}$$

which reduces to the stated necessary condition of part (i).

To prove part (ii), we apply Theorem 4.2 with

$$\begin{aligned} a= \frac{\alpha -\beta +1}{2},\quad b=\alpha +1,\quad c= \frac{\alpha -\beta +1}{2} +1. \end{aligned}$$

Inspecting the conditions \(c\ge 3a+1/2-b\) and \(c\ge a+ a/(2(b-a))\) for \(b>a\) in terms of \(\alpha , \beta \) separately, it is elementary to find that the condition

$$\begin{aligned} c\ge \max \Big [ 3a+\frac{1}{2}-b,\quad a+ \frac{a}{2(b-a)}\Big ],\quad b>a, \end{aligned}$$

is equivalent to

$$\begin{aligned} \beta \ge \max \Big [-\frac{1}{2},\,-\frac{1}{3} (\alpha +1)\Big ],\quad \beta >-\alpha -1. \end{aligned}$$
(5.2)

Combining (5.2) with the necessary condition of part (i), we deduce \(\Psi \ge 0\) for each \((\alpha , \beta )\in {\mathcal {P}}^*.\) Regarding strict positivity, we note that the nonnegativity condition required by

$$\begin{aligned} (b, c)\in \Lambda = \left\{ \Big (a+\frac{1}{2}, 2a\Big ),\quad \Big (2a, a+\frac{1}{2}\Big )\right\} \end{aligned}$$

reduces to the single case \((\alpha , \beta )= (1/2, -1/2).\) Indeed,

$$\begin{aligned} \Psi (x) = {}_1F_2\left( 1\,; \frac{3}{2}, \,2\,; -\frac{\,x^2}{4}\right) =\left[ \frac{\sin (x/2)}{x/2}\right] ^2 \end{aligned}$$

in this case, which is nonnegative but has infinitely many positive zeros.

By Theorem 4.2, we conclude that \(\Psi \) is strictly positive for each \((\alpha , \beta )\in {\mathcal {P}}^*\) unless it coincides with \((1/2, -1/2)\), and our proof is complete. \(\square \)

Fig. 4: The improved positivity region for problem (1.1), in which the line \(\beta =\alpha \) corresponds to Szegö's problem (1.2)

Remark 5.1

In Fig. 4, the green-colored part represents \({\mathcal {P}}^*\). As is evident on comparing with Fig. 1, Theorem 5.1 improves Theorem A by adding the triangle

$$\begin{aligned} \Delta =\left\{ -1<\alpha<\frac{1}{2},\quad -\frac{1}{3} (\alpha +1)\le \beta <\min (-\,\alpha , 0)\right\} \end{aligned}$$
(5.3)

as a new positivity region and by narrowing down the necessity region.
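
A sample point of the new triangle \(\Delta \) can be tested numerically in the same spirit; the sketch below assumes mpmath, and the chosen point and grid are ad hoc.

```python
# Numerical check that psi stays positive at a sample point of Delta, assuming mpmath.
from mpmath import mp, besselj, quad, linspace

mp.dps = 20
alpha, beta = mp.mpf('-0.2'), mp.mpf('-0.26')   # -(alpha+1)/3 = -0.2667 <= beta < 0

def psi(x):
    # Subdivide the interval to keep the oscillatory quadrature robust.
    return quad(lambda t: t**(-beta) * besselj(alpha, t), linspace(0, x, 10))

print(min(psi(x) for x in [0.5, 1, 2, 4, 8, 16, 32]))   # expected positive
```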

As an immediate consequence of Theorem 5.1, we obtain the following upper and lower bounds for \(\beta (\alpha )\) and \({\bar{\alpha }}.\)

Corollary 5.1

Under the same setting as in Theorem B, we have

$$\begin{aligned} \mathrm{(i)}\quad&\,\max \left( -\,\alpha -1, \,-\frac{1}{2}\right)<\beta (\alpha )\le -\frac{1}{3} (\alpha +1),\\ \mathrm{(ii)}\quad&\,\quad \lim _{\alpha \rightarrow -1 +}\beta (\alpha ) = 0,\quad \beta \left( \frac{1}{2}\right) = -\frac{1}{2},\\ \mathrm{(iii)}\quad&\,\quad -\frac{1}{2}<{\bar{\alpha }}\le -\frac{1}{4}. \end{aligned}$$

Remark 5.2

While these results follow from Theorem B and Theorem 5.1, the value \(\beta (1/2)= -1/2\) can also be verified in a simple way. Indeed, the formula

$$\begin{aligned} J_{\frac{1}{2}}(t)= \sqrt{\frac{2}{\pi t}}\, \sin t \end{aligned}$$

(see Luke [16], Watson [21]) implies that \(j_{\frac{1}{2},\,2} = 2\pi \) and

$$\begin{aligned} \int _0^{2\pi } \sqrt{t} J_{\frac{1}{2}}(t) \hbox {d}t = \sqrt{\frac{2}{\pi }}\,\int _0^{2\pi } \sin t\, \hbox {d}t =0, \end{aligned}$$

whence the desired value follows instantly by the uniqueness of \(\beta (\alpha )\).

Corollary 5.1 indicates that \(\beta =\beta (\alpha ),\,-1<\alpha \le 1/2,\) is a smooth curve joining \((-\,1, 0),\,(1/2, -1/2)\) which lies in the triangle determined by

$$\begin{aligned} \beta =-\,\alpha -1,\quad \beta =-\,1/2,\quad \beta =-\,(\alpha +1)/3. \end{aligned}$$

In [3], Askey and Steinig gave a list of numerical approximations for \(\beta (\alpha )\). To gain insight into how accurate or informative the above upper bound is, we compare it with their list as follows.

| \(\alpha \) | \(\beta (\alpha )\) | \(-\dfrac{1}{3} (\alpha +1)\) |
| --- | --- | --- |
| \(-0.5\) | \(-0.1915562\) | \(-0.1666667\) |
| \(-0.4\) | \(-0.2259427\) | \(-0.2000000\) |
| \(-0.3\) | \(-0.2593436\) | \(-0.2333333\) |
| \(-0.2\) | \(-0.2918541\) | \(-0.2666667\) |
| \(-0.1\) | \(-0.3235531\) | \(-0.3000000\) |
| \(0\) | \(-0.3545096\) | \(-0.3333333\) |
| \(0.1\) | \(-0.3847832\) | \(-0.3666667\) |
| \(0.2\) | \(-0.4144258\) | \(-0.4000000\) |
| \(0.3\) | \(-0.4434834\) | \(-0.4333333\) |
| \(0.4\) | \(-0.4719960\) | \(-0.4666667\) |

Regarding the approximate values as exact, these comparisons show that \(\beta (\alpha )\) lies within distance 0.026 of \(-\,(\alpha +1)/3\); the error increases up to a certain point near \(\alpha = -\,0.3\) and then decreases to zero.

On the other hand, we also point out that Szegö [10] approximated \({\bar{\alpha }}\approx -0.2693885,\) whereas our upper bound of \({\bar{\alpha }}\) is \(-\,0.25\).

6 A Simplified Proof for the Improved Part

Regarding the Askey–Szegö problem (1.1), Professor Gasper pointed out that it is possible to give a much simpler proof of positivity in the improved parameter region \(\Delta \) defined in (5.3). Based on a modification of Lemma 3.1, his proof proceeds as follows. Let

$$\begin{aligned} \Theta _n = {}_4F_{3} \left[ \begin{array}{c} -n,\,n+2\nu ,\,\nu + 1,\,\frac{\alpha -\beta +1}{2}\\ \nu +\frac{1}{2},\,\alpha +1,\,\frac{\alpha -\beta +3}{2} \end{array}\right] , \end{aligned}$$

which corresponds to coefficient \(C(n, \nu )\) of (4.7) in Gasper’s series expansion (4.6) for the function \(\Psi \) defined in (5.1).

Choosing \(2\nu = \alpha +1/2\) so that \(\Theta _n\) becomes Saalschützian, we apply Whipple’s transformation formula (2.2) with

$$\begin{aligned} \alpha _1&=\alpha +\frac{1}{2},\quad \alpha _2=\frac{\alpha -\beta +1}{2},\quad \alpha _3=\frac{1}{2}\left( \alpha +\frac{5}{2}\right) , \\ \beta _1&=\alpha +1,\quad \beta _2=\frac{1}{2}\left( \alpha +\frac{3}{2} \right) ,\quad \beta _3=\frac{\alpha -\beta +3}{2}. \end{aligned}$$

Since \( 1+\sigma = \alpha _1, \,\beta _3 = 1+\alpha _2, \,1+\sigma /2 = \beta _2 = 1+\beta _1-\alpha _3,\) on cancelling out four pairs of numerator-denominator parameters, the \({}_7F_6\) series on the right of (2.2) reduces to a terminating \({}_3F_2\) series. It is thus found that

$$\begin{aligned} \Theta _n =\frac{n! \left( \frac{\alpha +\beta }{2}\right) _n}{\left( \alpha +\frac{1}{2}\right) _n \left( \frac{\alpha -\beta +3}{2}\right) _n}\sum _{k=0}^n \frac{\left( \alpha -\frac{1}{2}\right) _k \left( \frac{\alpha -\beta +1}{2}\right) _k\left( -\frac{1}{2}\right) _k}{k! \left( \frac{\alpha +\beta }{2}\right) _k(\alpha +1)_k}, \end{aligned}$$
(6.1)

where the case \(\alpha = -1/2\) must be understood as an appropriate limit.

In the case \(n=1,\) it is easy to simplify (6.1) in the form

$$\begin{aligned} \Theta _1 = \frac{\alpha +3\beta +1}{2(\alpha +1)(\alpha -\beta +3)}, \end{aligned}$$

which indicates \(\Theta _1\ge 0\) for \((\alpha , \beta )\in \Delta .\) For \(n\ge 2,\) we find

$$\begin{aligned} \Theta _n&=\frac{ n!\left( \frac{\alpha +\beta +2}{2}\right) _{n-1}}{\left( \alpha +\frac{3}{2}\right) _{n-1} \left( \frac{\alpha -\beta +3}{2}\right) _n}\Biggl \{ \frac{\alpha +3\beta +1}{4(\alpha +1)}\\&\qquad +\frac{1-2\alpha }{4} \sum _{k=2}^n \frac{\left( \alpha +\frac{3}{2}\right) _{k-2} \left( \frac{\alpha -\beta +1}{2}\right) _k\left( \frac{1}{2}\right) _{k-1}}{k! \left( \frac{\alpha +\beta +2}{2}\right) _{k-1}(\alpha +1)_k}\Biggr \}, \end{aligned}$$

which indicates \(\Theta _n>0\) for \((\alpha , \beta )\in \Delta .\)

By the same reasoning as in the proof of Theorem 4.2, we may conclude that \(\Psi \) is strictly positive for \((\alpha , \beta )\in \Delta \) and hence Theorem 5.1 is proved for this improved region of parameters. Notice that \(\Theta _n\) of this type does not satisfy the sufficient conditions of Lemma 3.1, and the required inequality \(1+\sigma >0\) in the proof of Lemma 3.1 does not need to hold, either.

More generally, if \(1+\sigma =\alpha _1\) in Whipple’s transformation formula (2.2), then the \({}_7F_6\) series reduces to a terminating \({}_5F_4\) series with parameters independent of n, and it may be possible to prove the nonnegativity or positivity of \(\Theta _n\) in a simplified manner as above even when \(1+\sigma \le 0.\)

In practice, if we consider a \({}_1F_2\) generalized hypergeometric function of type (1.5) and its series expansion according to Gasper’s formula (4.6), then it is easy to see that such a reduction takes place in each coefficient only when b or c coincides with \(a+1\). To carry out the above idea of simplification concretely, let us fix \(b=a+1\) and put

$$\begin{aligned} \Theta _n = {}_4F_{3} \left[ \begin{array}{c} -n,\,n+2\nu ,\,\nu + 1,\,a\\ \nu +\frac{1}{2},\,a+1,\,c\end{array}\right] , \end{aligned}$$

where \(2\nu = c-1/2,\) which corresponds to the coefficient of (4.7).

On rearranging the parameters in the form

$$\begin{aligned} \alpha _1&= c-\frac{1}{2},\quad \alpha _2=a,\quad \alpha _3 =\frac{1}{2}\left( c+\frac{3}{2}\right) ,\\ \beta _1&=\frac{1}{2}\left( c+\frac{1}{2}\right) ,\quad \beta _2 =c,\quad \beta _3 =a+1 \end{aligned}$$

so that \(1+\sigma =\alpha _1,\) Whipple's transformation formula (2.2) gives

$$\begin{aligned} \Theta _n =\frac{n!\left( c-a-\frac{1}{2}\right) _n}{\left( c-\frac{1}{2}\right) _n (a+1)_n} \sum _{k=0}^n \frac{\left( -\frac{1}{2}\right) _k (a)_k \left( c-\frac{3}{2}\right) _k}{k! (c)_k \left( c-a-\frac{1}{2}\right) _k}. \end{aligned}$$
(6.2)

As readily verified, since the first two terms of (6.2) add up to

$$\begin{aligned} \frac{\left( c-\frac{1}{2}\right) \left( c-\frac{3a}{2}\right) }{c\left( c-a-\frac{1}{2}\right) }, \end{aligned}$$

we easily deduce that \(\Theta _1\ge 0\) and \(\Theta _n>0\) for all \(n\ge 2\) under the assumption \(0<a<1,\,3a/2\le c<3/2.\) In summary, we obtain:

Theorem 6.1

If \(0<a<1,\,\frac{3}{2} a\le c<\frac{3}{2},\) then

$$\begin{aligned} \Phi _s(x) = {}_1F_2\left[ \begin{array}{c} a\\ a+1, \,c\end{array}\biggr | -\frac{\,x^2}{4}\right]>0\qquad (x>0). \end{aligned}$$

In the special case \(a= (\alpha -\beta +1)/2,\,c=\alpha +1,\) that is, \(\Phi _s =\Psi ,\) the sufficient condition is readily seen to be equivalent to \((\alpha , \beta )\in \Delta \). We should remark that an application of Theorem 4.2 shows that \(\Phi _s\) is strictly positive for

$$\begin{aligned} c\ge \max \left[ 2a -\frac{1}{2},\quad \frac{3}{2} a\right] ,\quad a>0,\, \end{aligned}$$

unless \(a=1,\,c= 3/2,\) and therefore this result is a part of Theorem 4.2 although its proof is considerably simplified as above.
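
The reduction (6.2) itself can be spot-checked numerically; the following sketch (assuming mpmath, with arbitrary admissible values of a, c, and n) compares the \({}_4F_3\) coefficient with the reduced terminating sum.

```python
# Numerical spot check of the reduction (6.2), assuming mpmath; a, c, n are illustrative.
from mpmath import mp, hyper, rf, factorial

mp.dps = 30
a, c, n = mp.mpf('0.6'), mp.mpf('1.3'), 6
nu = (c - mp.mpf('0.5')) / 2                       # 2*nu = c - 1/2

theta = hyper([-n, n + 2*nu, nu + 1, a], [nu + mp.mpf('0.5'), a + 1, c], 1)

pref = (factorial(n) * rf(c - a - mp.mpf('0.5'), n)
        / (rf(c - mp.mpf('0.5'), n) * rf(a + 1, n)))
s = sum(rf(mp.mpf('-0.5'), k) * rf(a, k) * rf(c - mp.mpf('1.5'), k)
        / (factorial(k) * rf(c, k) * rf(c - a - mp.mpf('0.5'), k))
        for k in range(n + 1))
print(theta, pref * s)                             # the two values should agree
```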

7 Gasper’s Extensions

As a generalization of (1.1), we consider the problem of determining parameters \(\alpha , \beta , \gamma \) for the inequality

$$\begin{aligned} \int _0^x (x^2- t^2)^\gamma t^{-\beta }J_\alpha (t) \hbox {d}t \ge 0\qquad (x>0), \end{aligned}$$
(7.1)

which reduces to problem (1.1) in the special case \(\gamma =0.\)

By integrating termwise, it is straightforward to evaluate

$$\begin{aligned} \int _0^x (x^2- t^2)^\gamma t^{-\beta }J_\alpha (t) \hbox {d}t&= \frac{B\left( \gamma +1, \frac{\alpha -\beta +1}{2}\right) }{2^{\alpha +1}\Gamma (\alpha +1)}\,x^{\alpha -\beta +2\gamma +1} \nonumber \\&\quad \times \, {}_1F_2\left[ \begin{array}{c} \frac{\alpha -\beta +1}{2}\\ \alpha +1, \frac{\alpha -\beta +1}{2} +\gamma +1\end{array} \biggl | -\frac{\,x^2}{4}\right] \end{aligned}$$
(7.2)

subject to the condition \(\alpha>-1,\,\gamma >-1,\,\beta <\alpha +1,\) where B denotes Euler's beta function. Hence, analogously to (1.1), problem (7.1) is equivalent to the question of positivity for the \({}_1F_2\) generalized hypergeometric function defined on the right side of (7.2).
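
The evaluation (7.2) can be spot-checked numerically as before; in the sketch below (assuming mpmath) the parameter values are illustrative only.

```python
# Numerical spot check of (7.2), assuming mpmath; parameter values are illustrative.
from mpmath import mp, besselj, hyper, gamma, beta as eulerbeta, quad, linspace

mp.dps = 25
alpha, bet, gam, x = mp.mpf('0.3'), mp.mpf('-0.4'), mp.mpf('0.7'), mp.mpf('6')

lhs = quad(lambda t: (x**2 - t**2)**gam * t**(-bet) * besselj(alpha, t),
           linspace(0, x, 8))

a = (alpha - bet + 1) / 2
rhs = (eulerbeta(gam + 1, a) / (2**(alpha + 1) * gamma(alpha + 1))
       * x**(alpha - bet + 2*gam + 1)
       * hyper([a], [alpha + 1, a + gam + 1], -x**2/4))
print(lhs, rhs)   # the two values should agree
```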

In [12], Gasper employed the sums of squares method and an interpolation argument involving fractional integrals to prove that (7.1) holds with strict positivity for each \((\alpha , \beta )\in {\mathcal {S}}_\gamma {\setminus }\left\{ (\gamma +1/2, \,-\gamma -1/2)\right\} ,\) where

$$\begin{aligned} {\mathcal {S}}_\gamma = \left\{ \alpha \ge \gamma +\frac{1}{2},\quad \alpha -2\gamma -1\le \beta <\alpha +1\right\} \end{aligned}$$

in the case \(-\,1<\gamma \le -1/2\) and

$$\begin{aligned} {\mathcal {S}}_\gamma&= \bigl \{\alpha >-1,\, \,0\le \beta <\alpha +1\bigr \}\nonumber \\&\qquad \cup \,\left\{ \alpha \ge \gamma +\frac{1}{2},\quad -\Big (\gamma +\frac{1}{2}\Big )\le \beta \le 0\right\} \end{aligned}$$

in the case \(\gamma >-1/2\) (see Figs. 5 and 6).

Our purpose here is to improve Gasper’s result as follows.

Theorem 7.1

Let \(\alpha>-1,\,\gamma >-1,\,\beta <\alpha +1.\)

  1. (i)

    If (7.1) holds, then necessarily

    $$\begin{aligned} \beta \ge -\Big (\gamma +\frac{1}{2}\Big ),\quad -\alpha -1<\beta <\alpha +1. \end{aligned}$$
  2. (ii)

    For each \(\gamma >-1,\) let \({\mathcal {S}}_\gamma ^*\) be the set of \((\alpha , \beta )\in {\mathbb {R}}^2\) defined by

    $$\begin{aligned} {\mathcal {S}}_\gamma ^* = \left\{ \,\alpha >-1,\quad \max \left[ -\Big (\gamma +\frac{1}{2}\Big ),\quad -\,\frac{2\gamma +1}{2\gamma +3} (\alpha +1)\right] \le \beta <\alpha +1\,\right\} . \end{aligned}$$

    If \((\alpha , \beta )\in {\mathcal {S}}_\gamma ^*,\) then (7.1) holds with strict positivity unless

    $$\begin{aligned} \alpha =\gamma +\frac{1}{2}, \,\beta = -\Big (\gamma +\frac{1}{2}\Big ) \quad \text {or}\quad \gamma =-\,\frac{1}{2},\quad \beta =0. \end{aligned}$$

Proof

In view of (7.2), it suffices to deal with

$$\begin{aligned} \Sigma (x) = {}_1F_2\left[ \begin{array}{c} \frac{\alpha -\beta +1}{2}\\ \alpha +1, \frac{\alpha -\beta +1}{2} +\gamma +1\end{array} \biggl | -\frac{\,x^2}{4}\right] \qquad (x>0). \end{aligned}$$

On setting

$$\begin{aligned} a= \frac{\alpha -\beta +1}{2},\quad b=\alpha +1,\quad c= \frac{\alpha -\beta +1}{2} +\gamma + 1 \end{aligned}$$

and applying the necessity part of Theorem 4.1 and Theorem 4.2 in the same way as in the proof of Theorem 5.1, it is straightforward to verify (i) and (ii).

As for the cases of nonnegativity, we note from (4.4) of Remark 4.1 that if \(\alpha =\gamma +1/2,\,\beta =-\,(\gamma +1/2),\,\gamma >-1, \) then

$$\begin{aligned} \Sigma (x) = {}_1F_2\left[ \begin{array}{c} \gamma +1\\ \gamma +\frac{3}{2}, 2(\gamma +1)\end{array} \biggl | -\frac{\,x^2}{4}\right] = {\mathbb {J}}_{\gamma +\frac{1}{2}}^2\left( \frac{x}{2}\right) . \end{aligned}$$

On the other hand, if \(\alpha >-1, \,\beta =0,\, \gamma =-\,1/2,\) then

$$\begin{aligned} \Sigma (x) = {}_1F_2\left[ \begin{array}{c} \frac{\alpha +1}{2}\\ \alpha +1, \frac{\alpha +2}{2}\end{array} \biggl | -\frac{\,x^2}{4}\right] = {\mathbb {J}}_{\frac{\alpha }{2}}^2\left( \frac{x}{2}\right) . \end{aligned}$$

Both identities show \(\Sigma \ge 0\) with infinitely many positive zeros.\(\square \)
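
As an illustration, a parameter point covered by Theorem 7.1 but lying outside Gasper's region \({\mathcal {S}}_\gamma \) can be tested numerically; the sketch below assumes mpmath, with an ad hoc point and grid.

```python
# Numerical check at a point of S_gamma^* not covered by Gasper's S_gamma, assuming mpmath.
from mpmath import mp, besselj, quad, linspace

mp.dps = 20
alpha, bet, gam = mp.mpf('0'), mp.mpf('-0.5'), mp.mpf('0.7')
# Here -(2*gam+1)/(2*gam+3)*(alpha+1) = -6/11 < bet < 0, so (alpha, bet) lies in S_gamma^*,
# while Gasper's region requires bet >= 0 when alpha < gam + 1/2.

def F(x):
    return quad(lambda t: (x**2 - t**2)**gam * t**(-bet) * besselj(alpha, t),
                linspace(0, x, 8))

print(min(F(x) for x in [1, 2, 4, 8, 16]))   # expected positive
```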

Fig. 5: The improved positivity region for (7.1) in the case \(\gamma >-1/2\), where the yellow-colored part represents Gasper's positivity region

Fig. 6: The improved positivity region for problem (7.1) in the case \(-\,1<\gamma < -1/2\), where the yellow-colored part represents Gasper's positivity region

Remark 7.1

As for the missing ranges, we point out the following:

  • In the case \(\gamma >-1/2,\) as shown in Fig. 5, Theorem 7.1 leaves undetermined the triangle formed by the boundary lines

    $$\begin{aligned} \beta =-\,\alpha -1,\quad \beta = -(\gamma +1/2),\quad \beta = -\,\frac{2\gamma +1}{2\gamma +3} (\alpha +1), \end{aligned}$$

    and it is an open question whether it is possible to give a necessary and sufficient condition in terms of a certain transcendental root \(\beta _\gamma (\alpha )\) in a manner analogous to the case \(\gamma =0.\)

  • In the case \(\gamma =-\,1/2,\) problem (7.1) is completely resolved in the sense that it holds if and only if \(\alpha >-1, \,0\le \beta <\alpha +1.\)

  • In the case \(-\,1<\gamma <-1/2,\) as shown in Fig. 6, Theorem 7.1 leaves undetermined the infinite sector defined by

    $$\begin{aligned} \alpha > \gamma +1/2,\,\quad -(\gamma +1/2)\le \beta <-\,\frac{2\gamma +1}{2\gamma +3} (\alpha +1). \end{aligned}$$