Abstract
The spectral decomposition for an explicit second-order differential operator T is determined. The spectrum consists of a continuous part with multiplicity two, a continuous part with multiplicity one, and a finite discrete part with multiplicity one. The spectral analysis gives rise to a generalized Fourier transform with an explicit hypergeometric function as a kernel. Using Jacobi polynomials, the operator T can also be realized as a five-diagonal operator, leading to orthogonality relations for 2×2-matrix-valued polynomials. These matrix-valued polynomials can be considered as matrix-valued generalizations of Wilson polynomials.
1 Introduction
It is well known that a three-term recurrence relation
with \(a_{-1}=0\), can be solved using orthogonal polynomials. A generalization of this is obtained by Durán and Van Assche [5], who show that a \((2N+1)\)-term recurrence relation can be solved using N×N-matrix-valued orthogonal polynomials. Motivated by this result and previous work by Ismail and the second author [12, 13], a method is presented by Ismail and the authors [8] to obtain orthogonality relations for 2×2-matrix-valued orthogonal polynomials from an operator T on a Hilbert space \(\mathcal{H}\) of functions. The operator T must satisfy the following conditions:
(i) T is self-adjoint;
(ii) there exists a weighted Hilbert space \(L^{2}(\mathcal{V})\) and a unitary operator \(U:\mathcal{H} \to L^{2}(\mathcal{V})\) such that UT=MU, where M is the multiplication operator on \(L^{2}(\mathcal{V})\);
(iii) there exists an orthonormal basis \(\{f_{n}\}_{n=0}^{\infty}\) of \(\mathcal{H}\), and there exist sequences \((a_{n})_{n=0}^{\infty}\), \((b_{n})_{n=0}^{\infty}\), \((c_{n})_{n=0}^{\infty}\) of numbers with \(a_{n}>0\) and \(c_{n} \in \mathbb{R}\) for all \(n \in \mathbb{N}\), such that
$$T f_n = a_n f_{n+2} + b_n f_{n+1} + c_n f_n + \overline{b_{n-1}} f_{n-1} + a_{n-2}f_{n-2}, $$
where we assume \(a_{-1}=a_{-2}=b_{-1}=0\).
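Condition (iii) says that T acts as a five-diagonal (pentadiagonal) operator in the basis \(\{f_n\}\). The following is a small numerical illustration, with arbitrary symmetric test coefficients rather than the \(a_n, b_n, c_n\) of any concrete example, of the mechanism behind the Durán–Van Assche result: a real symmetric five-diagonal matrix of even size can be re-read as a block-tridiagonal matrix with 2×2 blocks, which is what gives rise to 2×2-matrix-valued orthogonal polynomials.

```python
import numpy as np

# Build a real symmetric five-diagonal matrix of size 2m x 2m from
# arbitrary test data (positive entries on the second off-diagonals,
# real entries on the diagonal and first off-diagonals).
rng = np.random.default_rng(0)
m = 4
n = 2 * m
a = rng.uniform(0.5, 1.5, n - 2)
b = rng.uniform(-1.0, 1.0, n - 1)
c = rng.uniform(-1.0, 1.0, n)

T = (np.diag(c) + np.diag(b, 1) + np.diag(b, -1)
     + np.diag(a, 2) + np.diag(a, -2))

# Re-read T as an m x m array of 2x2 blocks:
# blocks[i, j] = T[2i:2i+2, 2j:2j+2].
blocks = T.reshape(m, 2, m, 2).transpose(0, 2, 1, 3)

# Only the block diagonal and the first block off-diagonals survive,
# i.e., the five-diagonal scalar operator is block tridiagonal.
for i in range(m):
    for j in range(m):
        if abs(i - j) > 1:
            assert np.allclose(blocks[i, j], 0.0)
```

The 2×2 blocking is precisely how a five-term scalar recurrence is repackaged as a matrix three-term recurrence.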
In [8], two explicit examples are worked out, where the operator T is, besides a five-term operator, also realized as the second-order q-difference operator corresponding to well-known q-hypergeometric orthogonal polynomials. Thus, the unitary operator U is the integral transform with the corresponding orthogonal polynomials as a kernel. This leads to complicated, but explicit, orthogonality relations for certain matrix-valued polynomials defined by an explicit matrix three-term recurrence relation. We note that the explicit weight function differs structurally from the usually considered weight functions for matrix-valued orthogonal polynomials consisting of a matrix-deformation of a classical weight.
In this paper, we apply the method from [8] with the second-order differential operator \(T=T^{(\alpha ,\beta ;\kappa )}\) defined by
Here \(x\in(-1,1)\), \(\alpha ,\beta >-1\), and \(\kappa \in \mathbb{R}_{\geq0} \cup i\mathbb{R}_{>0}\). The differential operator T is closely related to the second-order differential operator for which the Jacobi polynomials are eigenfunctions. It should be noted that T raises the degree of a polynomial by 2, so there are no polynomial eigenfunctions. We will show that the differential operator T, considered as an unbounded operator on a weighted \(L^2\)-space, satisfies conditions (i)–(iii) given above. An interesting problem here is that T does not correspond to orthogonal polynomials or to a known unitary integral transform such as the Jacobi function transform [15].
The unitary operator U needed in condition (ii) is given by an explicit integral transform \(\mathcal{F}\) which is obtained from spectral analysis of T. The spectrum of T consists of a continuous part with multiplicity two, a continuous part with multiplicity one, and a (possibly empty) finite discrete part of multiplicity one. As a result, the integral transform \(\mathcal{F}\) has a hypergeometric kernel which is partly \(\mathbb{C}^{2}\)-valued and partly \(\mathbb{C}\)-valued. There are several (but not very many) hypergeometric integral transforms with \(\mathbb{C}^{2}\)-valued kernels available in the literature, see, e.g., [9, 17], [14, Exercise 4.4.11], and see [7] for an example with basic hypergeometric functions. To the best of our knowledge, all known examples can be considered as nonpolynomial extensions of hypergeometric orthogonal polynomials, in the sense that the corresponding kernels are eigenfunctions of a differential/difference operator that also has orthogonal polynomials as eigenfunctions. For example, Neretin’s \(\mathbb{C}^{2}\)-valued \({}_{2}F_{1}\)-integral transform [17] generalizes the Jacobi polynomials. The integral transform \(\mathcal{F}\) we consider in this paper, however, does not seem to generalize a family of orthogonal polynomials, although in a special case it can be considered as a nonpolynomial extension of two different one-parameter families of Jacobi polynomials. Furthermore, other hypergeometric integral transforms and hypergeometric orthogonal polynomials correspond to a bispectral problem, see, e.g., [10], which can always be related directly to contiguous relations for hypergeometric functions. From the explicit expressions as hypergeometric functions for the kernel of \(\mathcal{F}\), it is unclear whether \(\mathcal{F}\) is also related to a bispectral problem.
In a special case, the 2×2-matrix-valued orthogonal polynomials we obtain can be diagonalized. In this case, the orthogonality relations correspond to orthogonality relations for two subfamilies of Wilson polynomials [19], and the matrix three-term recurrence relation corresponds to the three-term recurrence relations for the two subfamilies. Moreover, the differential operator arises, as described in [8], by extending the construction of [12, §3] from a three-term to a five-term recurrence operator. In [12, §3], a subfamily of the Wilson polynomials plays the role of the matrix-valued polynomials of this paper. This is why we consider our matrix-valued polynomials as generalizations of (subfamilies of) Wilson polynomials. It should be stressed that it is completely unclear at this point whether the matrix-valued polynomials have other properties that are fundamental for the Wilson polynomials. For example, the Wilson polynomials are eigenfunctions of a second-order difference operator, but a similar result for the matrix-valued polynomials in this paper is not (yet) known.
The organization of this paper is as follows. In Sect. 2, we introduce the integral transform \(\mathcal{F}\) and show that the differential operator T defined by (1.1) satisfies conditions (i) and (ii). The proofs for this section are given separately in Sect. 4, where the spectral analysis of T, which is quite technical at certain points, is carried out. In Sect. 3, we realize T as a five-diagonal operator on a basis consisting of Jacobi polynomials, so that condition (iii) is also satisfied. The corresponding five-term recurrence relation is equivalent to a matrix three-term recurrence relation that defines 2×2-matrix-valued orthogonal polynomials \(P_n\) for which the orthogonality relations are determined. We also consider briefly the special case α=β, in which case the integral transform \(\mathcal{F}\) reduces to two Jacobi function transforms and the orthogonality relations for \(P_n\) correspond to the orthogonality relations for certain Wilson polynomials. Finally, in Sect. 4, eigenfunctions of T are given, which are needed for the spectral decomposition of T. The spectral decomposition leads to a proof of the unitarity of the integral transform \(\mathcal{F}\), and to an explicit formula for its inverse.
Notation
We write \(\mathbb{N}\) for the set of nonnegative integers. We use standard notation for hypergeometric functions, as in, e.g., [2, 11]. For products of Γ-functions and of shifted factorials, we use the shorthand notation
2 Spectral Analysis and a Hypergeometric Function Transform
In this section, we describe the spectral analysis of the operator T defined by (1.1). The spectral decomposition is given by an integral transform with certain hypergeometric \({}_{2}F_{1}\)-functions as a kernel, which is interesting in its own right. The proofs for this section are postponed until Sect. 4.
Let \(\alpha ,\beta >-1\) be fixed, and let \(w^{(\alpha ,\beta )}\) be the Jacobi weight function on [−1,1] given by
The corresponding inner product is denoted by \(\langle\cdot,\cdot\rangle\),
The weight is normalized such that \(\langle 1,1\rangle=1\). We denote by \(\mathcal{H}=\mathcal{H}^{(\alpha ,\beta )}\) the corresponding weighted \(L^{2}\)-space; \(\mathcal{H}= L^{2}((-1,1),w^{(\alpha ,\beta )}(x)\,dx)\). To stress the dependence on the parameters α and β, we will sometimes denote the inner product in \(\mathcal {H}^{(\alpha ,\beta )}\) by \(\langle\cdot,\cdot\rangle_{\alpha ,\beta }\). Let us remark that the substitution x↦−x sends \(T^{(\alpha ,\beta ;\kappa )}\) to \(T^{(\beta ,\alpha ;\kappa )}\), and \(\mathcal{H}^{(\alpha ,\beta )}\) to \(\mathcal{H}^{(\beta ,\alpha )}\). So without loss of generality we may assume β≥α, which we do from here on.
We consider T as an unbounded operator on \(\mathcal{H}\). The domain \(\mathcal{D}_{0}\) for T is described in Sect. 4.2, where the following result is proved.
Proposition 2.1
The operator \((T,\mathcal{D}_{0})\) has a unique self-adjoint extension.
We denote the extension of T again by T. The spectral analysis of T will be described by the integral transform \(\mathcal{F}\) mapping functions in \(\mathcal{H}\) (under suitable conditions) to functions in the Hilbert space \(L^{2}(\mathcal{V})\). We first introduce the latter space.
Let \(\varOmega _{1}, \varOmega _{2} \subset \mathbb{R}\) be given by
We set
Here \(\sqrt{\cdot}\) denotes the principal branch of the square root. For \(n \in \mathbb{N}\), we define \(\lambda _{n} \in \mathbb{C}\) as the solution of
We define the finite set \(\varOmega _{d}\) by
i.e., \(\varOmega _{d}\) consists of the real solutions of (2.4). Note that \(\varOmega _{d}=\emptyset\) if κ<1 or \(\kappa \in i\mathbb{R}_{>0}\). The number \(\lambda _{n}\in\varOmega _{d}\) has the explicit expression
We will denote by σ the set \(\varOmega _{2}\cup\varOmega _{1}\cup\varOmega _{d}\). Theorem 2.2 will show that σ is the spectrum of T.
Next we introduce the weight functions that we need to define \(L^{2}(\mathcal{V})\). First we define
With this function, we define for \(\lambda \in\varOmega _{1}\),
For \(\lambda \in\varOmega _{2}\), we define the matrix-valued weight function V(λ) by
with
and \(v_{12}(\lambda ) = \overline{v_{21}(\lambda )}\). Finally, for \(\lambda _{n}\in\varOmega _{d}\), we set
Note here that \(\delta (\lambda _{n})-\eta(\lambda _{n}) = \frac{(\alpha -\beta )(\alpha +\beta +2)}{-2n-1+\kappa }\).
Now we are ready to define the Hilbert space \(L^{2}(\mathcal{V})\). It consists of functions that are \(\mathbb{C}^{2}\)-valued on \(\varOmega _{2}\) and \(\mathbb{C}\)-valued on \(\varOmega _{1}\cup\varOmega _{d}\). The inner product on \(L^{2}(\mathcal{V})\) is given by
where
Next we introduce the integral transform \(\mathcal{F}\). For \(\lambda \in\varOmega _{1}\) and x∈(−1,1), we define
By Euler’s transformation, see, e.g., [2, (2.2.7)], we can replace \(\delta _{\lambda }\) by \(-\delta _{\lambda }\) in (2.12). Furthermore, we define for \(\lambda \in\varOmega _{2}\) and x∈(−1,1),
Observe that \(\overline{\varphi^{+}_{\lambda }(x)}=\varphi^{-}_{\lambda }(x)\), again by Euler’s transformation. Finally, for \(\lambda _{n}\in\varOmega _{d}\), we define
Now, let \(\mathcal{F}\) be the integral transform defined by
for all \(f \in\mathcal{H}\) such that the integrals converge. The following result says that \(\mathcal{F}\) is the required unitary operator U from the introduction.
Theorem 2.2
The integral transform \(\mathcal{F}\) extends uniquely to a unitary operator \(\mathcal{F}:\mathcal{H} \to L^{2}(\mathcal{V})\) such that \(\mathcal{F} T = M \mathcal{F}\), where \(M:L^{2}(\mathcal{V}) \to L^{2}(\mathcal{V})\) is the unbounded multiplication operator given by (Mg)(λ)=λg(λ) for almost all λ∈σ.
Remark 2.3
In case α=β, the spectral decomposition of T can be described using the Jacobi function transform [15]. To see this, we apply the change of variable x=tanh(t). Then the second-order differential operator T defined by (1.1) turns into
For α=β, let \(f_{\lambda }\) be a solution of the eigenvalue equation \(\widehat{T} f_{\lambda }= \lambda f_{\lambda }\). Now define \(F_{\lambda }^{\pm}(t) = \cosh^{\frac{1}{2}(2\alpha +3\pm \kappa )}(t) f_{\lambda }(t)\). Then \(F_{\lambda }^{\pm}\) satisfies
Using the differential equation for Jacobi functions, see [15, (1.1)], we now see that the spectral decomposition of T can be given using the Jacobi function transforms corresponding to the Jacobi functions \(\phi_{\delta _{\lambda }}^{(-\frac{1}{2},\frac{1}{2}\kappa )}\) and \(\phi_{\delta _{\lambda }}^{(-\frac{1}{2},-\frac{1}{2}\kappa )}\).
We have an explicit inverse of the integral transform \(\mathcal{F}\). Define for x∈(−1,1) the integral transform \(\mathcal{G}\) by
for all functions \(f \in L^{2}(\mathcal{V})\) for which the above integrals converge.
Theorem 2.4
The integral transform \(\mathcal{G}\) extends uniquely to an operator \(\mathcal{G}: L^{2}(\mathcal{V}) \to\mathcal{H}\) such that \(\mathcal{G} = \mathcal {F}^{-1}\).
Theorems 2.2 and 2.4 are proved in Sect. 4. The following orthogonality relations are a consequence of Theorem 2.2, obtained by considering the discrete spectrum of T; see Corollary 4.17 for the proof.
Corollary 2.5
Let κ≥1. Then the following orthogonality relations hold:
for all \(n,m \in \mathbb{N}\) such that \(n,m \leq\frac{1}{2}(\kappa -1)\).
3 Matrix-Valued Orthogonal Polynomials
In this section, we show that the differential operator T can be realized as a five-diagonal operator with respect to an orthonormal basis for \(\mathcal{H}\). Using the spectral decomposition for T, this leads to orthogonality relations for 2×2-matrix-valued orthogonal polynomials.
3.1 The Five-Diagonal Operator
The Jacobi polynomials are defined by
For α,β>−1, they form an orthogonal basis for \(\mathcal{H}\);
The Jacobi polynomials are eigenfunctions of the Jacobi differential operator
We define \(r(x)=1-x^{2}\). Then for x∈(−1,1), the polynomial r can be written as
The differential operator \(T=T^{(\alpha ,\beta ;\kappa )}\) defined by (1.1) is related to \(L^{(\alpha ,\beta )}\) by
where M(r) denotes multiplication by r.
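As a quick numerical sanity check (illustration only, with hypothetical parameter values), one can verify the normalization \(\langle 1,1\rangle=1\) of the Jacobi weight and the orthogonality of the Jacobi polynomials:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_jacobi, gamma

alpha, beta = 0.3, 1.2   # test values, alpha, beta > -1

# Normalization constant making <1,1> = 1, using
# int_{-1}^{1} (1-x)^a (1+x)^b dx = 2^(a+b+1) Gamma(a+1)Gamma(b+1)/Gamma(a+b+2).
C = gamma(alpha + beta + 2) / (
    2.0 ** (alpha + beta + 1) * gamma(alpha + 1) * gamma(beta + 1))

def w(x):
    return C * (1 - x) ** alpha * (1 + x) ** beta

def inner(n, m):
    f = lambda x: (eval_jacobi(n, alpha, beta, x)
                   * eval_jacobi(m, alpha, beta, x) * w(x))
    return quad(f, -1, 1)[0]

assert abs(inner(0, 0) - 1.0) < 1e-6   # normalization <1,1> = 1
assert abs(inner(2, 3)) < 1e-6         # orthogonality of distinct degrees
assert inner(3, 3) > 0                 # positive squared norm
```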
In [8, Sect. 3.1], it is shown that an operator of the form (3.2) acts as a five-term operator on a suitable basis of \(\mathcal{H}\). In this case, the basis consists of Jacobi polynomials. We define \(\phi_{n} = P_{n}^{(\alpha ,\beta )}/(h_{n}^{(\alpha ,\beta )})^{1/2}\), \(n \in \mathbb{N}\). Then \(\{\phi_{n}\}_{n \in \mathbb{N}}\) is an orthonormal basis for \(\mathcal{H}^{(\alpha ,\beta )}\). We also define \(\varPhi_{n} = P_{n}^{(\alpha +1,\beta +1)}/(h_{n}^{(\alpha +1,\beta +1)})^{1/2}\), \(n \in \mathbb{N}\). Then \(\{\varPhi_{n}\}_{n \in \mathbb{N}}\) is an orthonormal basis for \(\mathcal{H}^{(\alpha +1,\beta +1)}\). In order to write T explicitly as a five-diagonal operator on the basis \(\{\phi_{n}\}_{n \in \mathbb{N}}\), we need a connection formula between \(\{\phi_{n}\}_{n \in \mathbb{N}}\) and \(\{ \varPhi_{n}\}_{n \in \mathbb{N}}\).
Lemma 3.1
The following connection formula holds:
where
Proof
There exists an expansion \(\phi_{n} = \sum_{k = 0}^{n} a_{n,k} \varPhi_{k}\), where
Since r has degree 2, it follows from the orthogonality relations for \(\phi_{n}\) that \(a_{n,k}=0\) for \(0\leq k\leq n-3\).
We compute the three remaining coefficients. The value of \(a_{n,n}\) follows from comparing leading coefficients: \(a_{n,n} = \frac{\mathrm {lc}(\phi_{n})}{\mathrm {lc}(\varPhi_{n})}\). We have
and \(\mathrm{lc}(\varPhi_{n})\) is obtained by replacing (α,β) by (α+1,β+1), which leads to the result for \(\alpha _{n}=a_{n,n}\).
For a polynomial p of degree n, let k(p) denote the coefficient of \((1-x)^{n-1}\). Then
We have
and \(k(\varPhi_{n})\) is obtained by replacing (α,β) by (α+1,β+1), which gives the expression for \(\beta _{n}=a_{n,n-1}\).
Finally, the expression for \(\gamma _{n}=a_{n,n-2}\) follows from
□
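The sparsity statement in the proof (\(a_{n,k}=0\) for \(0\leq k\leq n-3\)) is easy to check numerically. The sketch below (hypothetical test parameters, unnormalized polynomials, so only the vanishing pattern is meaningful) projects \(P_n^{(\alpha,\beta)}\) onto the family \(P_k^{(\alpha+1,\beta+1)}\) in the \((\alpha+1,\beta+1)\)-inner product:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_jacobi

alpha, beta, n = 0.4, 0.9, 5   # hypothetical test values

def w1(x):
    # Unnormalized weight of the shifted family (alpha+1, beta+1); it equals
    # r(x) = 1 - x^2 times the unnormalized (alpha, beta) weight.
    return (1 - x) ** (alpha + 1) * (1 + x) ** (beta + 1)

def proj(k):
    f = lambda x: (eval_jacobi(n, alpha, beta, x)
                   * eval_jacobi(k, alpha + 1, beta + 1, x) * w1(x))
    return quad(f, -1, 1)[0]

# Only k = n, n-1, n-2 can contribute to the expansion.
for k in range(n - 2):
    assert abs(proj(k)) < 1e-6
assert abs(proj(n)) > 1e-6
```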
We now use [8, Lemma 3.1] to write the differential operator T as a five-diagonal operator.
Proposition 3.2
The operator T defined by (1.1) acts as a five-diagonal operator on the basis \(\{\phi_{n}\}_{n \in \mathbb{N}}\) of \(\mathcal{H}\) by
with coefficients given by
where \(\varLambda _{n}=-n(n+\alpha +\beta +3)\), K, ρ are given by (3.1), (3.2), and \(\alpha _{n},\beta _{n},\gamma _{n}\) are as in Lemma 3.1.
One easily verifies that \(a_{-1}=a_{-2}=b_{-1}=0\). Furthermore, we have the factorization
3.2 Matrix-Valued Orthogonal Polynomials
From Theorem 2.2 and Proposition 3.2, it follows that the functions \(\mathcal{F}\phi_{n}\), \(n \in \mathbb{N}\), satisfy the five-term recurrence relation
for almost all λ∈σ. Furthermore, the set \(\{\mathcal{F}\phi _{n}\}_{n \in \mathbb{N}}\) is an orthonormal basis for \(L^{2}(\mathcal{V})\). We can determine an explicit expression for \(\mathcal{F}\phi_{n}\) in terms of hypergeometric functions.
Lemma 3.3
For \(n \in \mathbb{N}\), let \(F_{n}(\delta ,\eta)=F_{n}(\delta ,\eta;\alpha ,\beta ,\kappa )\) denote the series
with
Then, for λ∈σ,
The above \({}_{3}F_{2}\)-series converges absolutely if \(\Re(\alpha -\delta +1+2l)>0\), which is the case if \(\lambda \in\varOmega _{1}\cup\varOmega _{2}\). For \(\lambda \in\varOmega _{d}\), the \({}_{3}F_{2}\)-series terminates.
Proof
We compute
Interchanging the order of summation and integration, and using the Beta-integral, we obtain
where
Now the result follows from the explicit expressions for ϕ n and the integral transform \(\mathcal{F}\). □
From Lemma 3.3, it follows that
and
These two functions and the five-term recurrence relation from (3.4) completely determine the functions \(\mathcal{F}\phi_{n}\).
We define 2×2-matrix-valued orthogonal polynomials \(P_{n}\), \(n \in \mathbb{N}\), by the three-term recurrence relation
and the initial conditions \(P_{-1}(\lambda )=0\) and \(P_{0}(\lambda )=I\). The matrix elements \(a_{n}\), \(b_{n}\), and \(c_{n}\) are given in Proposition 3.2. From the five-term recurrence relation (3.4), we obtain, for \(m \in \mathbb{N}\),
The orthogonality relations for \(\mathcal{F} \phi_{n}\), \(n \in \mathbb{N}\), can now be reformulated as orthogonality relations for the matrix-valued polynomials \(P_{n}\), see [8, Theorem 2.1].
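The explicit recurrence (3.5) is not reproduced here, but the generic mechanism can be sketched as follows: given matrices \(A_n, B_n, C_n\) with \(A_n\) invertible, a matrix three-term recurrence of the assumed generic form \(\lambda P_n(\lambda) = A_n P_{n+1}(\lambda) + B_n P_n(\lambda) + C_n P_{n-1}(\lambda)\), together with \(P_{-1}=0\) and \(P_0=I\), determines 2×2-matrix-valued polynomials of the expected degrees. The coefficients below are random test data, not those of (3.5).

```python
import numpy as np

def step(A, B, C, P_n, P_nm1):
    """Solve A P_{n+1}(lam) = lam P_n(lam) - B P_n(lam) - C P_{n-1}(lam).
    A matrix polynomial is a list of 2x2 coefficients, lowest degree first."""
    Ainv = np.linalg.inv(A)
    out = [np.zeros((2, 2)) for _ in range(len(P_n) + 1)]
    for k, M in enumerate(P_n):
        out[k + 1] = out[k + 1] + Ainv @ M        # lam * P_n term
        out[k] = out[k] - Ainv @ (B @ M)
    for k, M in enumerate(P_nm1):
        out[k] = out[k] - Ainv @ (C @ M)
    return out

def evaluate(P, lam):
    return sum(M * lam ** k for k, M in enumerate(P))

rng = np.random.default_rng(7)
Ps = [[np.zeros((2, 2))], [np.eye(2)]]            # P_{-1}, P_0
coeffs = []
for n in range(4):
    A = rng.normal(size=(2, 2)) + 3.0 * np.eye(2) # keep A_n invertible
    B = rng.normal(size=(2, 2))
    C = rng.normal(size=(2, 2))
    coeffs.append((A, B, C))
    Ps.append(step(A, B, C, Ps[-1], Ps[-2]))

assert len(Ps[-1]) - 1 == 4                       # deg P_4 = 4
# Consistency of the last recurrence step at a numeric point lam = 0.7:
A, B, C = coeffs[-1]
lam = 0.7
lhs = lam * evaluate(Ps[4], lam)                  # lam * P_3(lam)
rhs = (A @ evaluate(Ps[5], lam) + B @ evaluate(Ps[4], lam)
       + C @ evaluate(Ps[3], lam))
assert np.allclose(lhs, rhs)
```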
Theorem 3.4
The 2×2-matrix-valued orthogonal polynomials \(P_{n}\), \(n \in \mathbb{N}\), defined by (3.5) satisfy the orthogonality relations
with
where \(\langle x,y\rangle_{V(\lambda )}=x^{*}V(\lambda )y\). Here \(\varOmega _{1},\varOmega _{2},\varOmega _{d} \subset \mathbb{R}\) are as in (2.2) and (2.5), the functions \(\delta _{\lambda }\) and \(\eta_{\lambda }\) are as defined in (2.3), the weight functions v, V, and N are defined by (2.7), (2.8), and (2.10), respectively, and the constant D is given in (2.11).
Remark 3.5
In [8, Proposition 3.6], a q-analog of Theorem 3.4 is considered. The functions ϕ n in this case are the little q-Jacobi polynomials, and the integral transform \(\mathcal{F}\) is simply the integral transform corresponding to the continuous dual q-Hahn polynomials. It would be very interesting to see if similar results can be obtained for other q-analogs of the Jacobi polynomials, such as big q-Jacobi polynomials [1], Askey–Wilson polynomials [3], and Ruijsenaars’ R-function [18].
3.3 The Special Case α=β
We assume α=β, and for convenience we also assume \(\varOmega _{d}=\emptyset\). In this case, \(\varOmega _{1}=\emptyset\), and \(\delta _{\lambda }=\eta_{\lambda }\) for all \(\lambda \in\varOmega _{2}\). The spectral decomposition of T can now be obtained in a different way.
The coefficient \(b_{n}\) in the five-diagonal expression for T vanishes, so T reduces to a tridiagonal operator, or Jacobi operator. Explicitly,
with
The spectral decomposition can be described with the help of the orthonormal Wilson polynomials [2, 19], which are defined by
If a,b,c,d>0, these polynomials are orthonormal with respect to an absolutely continuous measure on (0,∞).
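For a concrete feel, here is a small numerical check (hypothetical parameter values) that the degree-one Wilson polynomial, which in the standard normalization reads \(W_1(x^2;a,b,c,d) = (a+b)(a+c)(a+d) - (a+b+c+d)(a^2+x^2)\), is orthogonal to \(W_0 = 1\) against the Wilson weight \(\bigl|\Gamma(a+ix)\Gamma(b+ix)\Gamma(c+ix)\Gamma(d+ix)/\Gamma(2ix)\bigr|^{2}\) on (0,∞):

```python
import mpmath as mp

a, b, c, d = map(mp.mpf, ('0.7', '1.1', '0.9', '1.3'))  # test values > 0

def weight(x):
    ix = mp.mpc(0, 1) * x
    num = (mp.gamma(a + ix) * mp.gamma(b + ix)
           * mp.gamma(c + ix) * mp.gamma(d + ix))
    return abs(num / mp.gamma(2 * ix)) ** 2

def W1(x):
    # Degree-one Wilson polynomial in x^2 (standard normalization).
    return (a + b) * (a + c) * (a + d) - (a + b + c + d) * (a ** 2 + x ** 2)

norm0 = mp.quad(weight, [0, mp.inf])                 # squared norm of W_0 = 1
i01 = mp.quad(lambda x: weight(x) * W1(x), [0, mp.inf])
assert abs(i01) / norm0 < mp.mpf('1e-6')             # <W_0, W_1> = 0
```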
For \(m \in \mathbb{N}\), we define
Then we obtain from the three-term recurrence relation for the Wilson polynomials,
Let \(\mu^{\mathrm{e}}\) and \(\mu^{\mathrm{o}}\) denote the orthogonality measures for the Wilson polynomials \(W_{m}^{\mathrm{e}}\) and \(W_{m}^{\mathrm{o}}\). The unitary operator \(U:\mathcal{H} \to L^{2}(\mu^{\mathrm{e}})\oplus L^{2}(\mu ^{\mathrm{o}})\), given by
satisfies UT=MU, where M is multiplication by \(-((\alpha +1)^{2}+x^{2})\). So T indeed has continuous spectrum \((-\infty,-(\alpha +1)^{2})=\varOmega _{2}\), with multiplicity 2.
The Hilbert space \(\mathcal{H}^{(\alpha ,\alpha )}\) can also be split up in a natural way. From (1.1), we see that T (with α=β) leaves invariant the subspaces of even and odd functions, so we can split \(\mathcal{H}\) accordingly into \(\mathcal{H}^{\mathrm{e}}\) and \(\mathcal{H}^{\mathrm{o}}\). The Jacobi (Gegenbauer) polynomials \(\phi_{2m}(x)\) are even polynomials; hence they form an orthonormal basis for \(\mathcal{H}^{\mathrm{e}}\), and by a quadratic transformation they can be transformed into multiples of \(P_{m}^{(\alpha ,-\frac{1}{2})}(2x^{2}-1)\). Similarly, the odd polynomials \(\phi_{2m+1}(x)\) form an orthonormal basis for \(\mathcal{H}^{\mathrm{o}}\), and they can be transformed into multiples of \(xP_{m}^{(\alpha ,\frac{1}{2})}(2x^{2}-1)\). Obviously there are similar transformations for the Jacobi polynomials \(\varPhi_{2m}\) and \(\varPhi_{2m+1}\). Now the operator T restricted to \(\mathcal{H}^{\mathrm{e}}\) or \(\mathcal{H}^{\mathrm{o}}\) can be treated as in [12, Sect. 3]. The unitary operator U is given in each case by a Jacobi function transform, see Remark 2.3, so that we obtain a special case of Koornwinder’s formula [16] stating that Jacobi polynomials are mapped to Wilson polynomials by the Jacobi function transform. In this light, (3.6) can be considered as a matrix-analog of Koornwinder’s formula.
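The quadratic transformations used here are classical and easy to verify numerically (illustrative parameter values; the proportionality constants are not reproduced):

```python
import numpy as np
from scipy.special import eval_jacobi

alpha, m = 0.6, 3                      # test values
xs = np.linspace(0.05, 0.95, 9)

# Even case: P_{2m}^(alpha,alpha)(x) is proportional to P_m^(alpha,-1/2)(2x^2-1).
even_l = eval_jacobi(2 * m, alpha, alpha, xs)
even_r = eval_jacobi(m, alpha, -0.5, 2 * xs ** 2 - 1)
k = np.argmax(np.abs(even_r))
assert np.allclose(even_l, (even_l[k] / even_r[k]) * even_r)

# Odd case: P_{2m+1}^(alpha,alpha)(x) is proportional to x * P_m^(alpha,1/2)(2x^2-1).
odd_l = eval_jacobi(2 * m + 1, alpha, alpha, xs)
odd_r = xs * eval_jacobi(m, alpha, 0.5, 2 * xs ** 2 - 1)
k = np.argmax(np.abs(odd_r))
assert np.allclose(odd_l, (odd_l[k] / odd_r[k]) * odd_r)
```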
Remark 3.6
There exists an extension of Koornwinder’s formula on the level of Wilson polynomials [6, Theorem 6.7]: Wilson polynomials are mapped to Wilson polynomials by the Wilson function transform. It would be interesting to see if there also exists a matrix-analog of this formula.
4 Proofs for Sect. 2
In this section, we perform the spectral analysis of the second-order differential operator T defined by (1.1), considered as an unbounded operator on the Hilbert space \(\mathcal{H}\) defined in the beginning of Sect. 2.
4.1 Eigenfunctions
The eigenvalue equation \(Tf_{\lambda }=\lambda f_{\lambda }\) is a second-order differential equation with regular singular points at 1, −1, and ∞, so it has hypergeometric (i.e., \({}_{2}F_{1}\)) solutions. We first determine these solutions.
We define for \(\lambda \in \mathbb{C}\setminus (\varOmega _{1}\cup \varOmega _{2} )\), see (2.2), the functions
and
where δ and η are defined by (2.3). From Euler’s transformation for \({}_{2}F_{1}\)-series, it follows that \(\phi^{\pm}_{\lambda }\) is invariant under δ(λ)↦−δ(λ); similarly, \(\psi^{\pm}_{\lambda }\) is invariant under η(λ)↦−η(λ). Note that \(\psi_{\lambda }^{\pm}(x)\) is obtained from \(\phi_{\lambda }^{\pm}(x)\) by the substitution (α,β,x)↦(β,α,−x), and vice versa. Note also that the \({}_{2}F_{1}\)-series in \(\phi^{\pm}_{\lambda }\) is summable at x=1 if ℜ(δ(λ))>0, and that the \({}_{2}F_{1}\)-series in \(\psi^{\pm}_{\lambda }\) is summable at x=−1 if ℜ(η(λ))>0.
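Euler's transformation, which is invoked repeatedly, states that \({}_2F_1(a,b;c;z) = (1-z)^{c-a-b}\,{}_2F_1(c-a,c-b;c;z)\). A quick numerical confirmation with arbitrary (including complex) test parameters:

```python
import mpmath as mp

a, b, c = mp.mpf('0.3'), mp.mpc(1, 2), mp.mpf('1.7')   # test parameters
for z in (mp.mpf('0.4'), mp.mpc('0.2', '0.5')):
    lhs = mp.hyp2f1(a, b, c, z)
    rhs = (1 - z) ** (c - a - b) * mp.hyp2f1(c - a, c - b, c, z)
    assert abs(lhs - rhs) < mp.mpf('1e-10')
```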
Remark 4.1
From here on we will just write δ and η, instead of δ(λ) and η(λ).
Proposition 4.2
The functions \(\phi_{\lambda }^{\pm}\), \(\psi_{\lambda }^{\pm}\) are solutions of the eigenvalue equation Tf=λf.
Proof
Suppose f is a solution of the eigenvalue equation Tf=λf. A calculation shows that if \(f(x) = (1-x)^{-\frac{1}{2}(\alpha +\delta +1)}(1+x)^{-\frac{1}{2}(\beta +\eta+1)}\phi(x)\), then ϕ satisfies
Now set \(t=\frac{1}{2}(1+x)\). Then
This is the hypergeometric differential equation (see, e.g., [2, Chap. 2]) with coefficients
The \({}_{2}F_{1}\)-functions in \(\phi^{-}_{\lambda }, \psi^{-}_{\lambda }\) are well-known solutions of this differential equation, so \(\phi^{-}_{\lambda }, \psi^{-}_{\lambda }\) are solutions of the eigenvalue equation. The proof for \(\phi^{+}_{\lambda }\) and \(\psi^{+}_{\lambda }\) is similar. □
For later reference, we need connection formulas for \(\phi_{\lambda }^{\pm}\) and \(\psi_{\lambda }^{\pm}\).
Proposition 4.3
For c defined by (2.6),
Proof
This follows from a three-term transformation for \({}_{2}F_{1}\)-functions, see, e.g., [2, (2.3.11)]. □
The following identities for the c-function turn out to be useful.
Lemma 4.4
The c-function defined by (2.6) satisfies:
- (i)
\(c(x;y) = -\frac{y}{x} c(-y;-x)\),
- (ii)
\(c(x;y)c(-x;-y) - c(x;-y)c(-x;y)=-\frac{y}{x}\).
Remark 4.5
Using the reflection equation for the Γ-function, Lemma 4.4(ii) is equivalent to the trigonometric identity
and can also be proved in this way.
Proof
The first identity follows from Γ(z+1)=zΓ(z).
For the second identity, we note that Proposition 4.3 implies
which in turn implies
Applying the first identity to the numerator then gives
which proves the second identity. □
We also need the behavior of the eigenfunctions near the endpoints −1 and 1.
Lemma 4.6
For x↓−1, we have
For x↑1, we have
Proof
This is straightforward from the explicit expressions as 2 F 1-series. □
Remark 4.7
Observe that the function \(|\phi^{\pm}_{\lambda }|^{2} w^{(\alpha ,\beta )}\) is in \(L^{1}(-1,0)\) if and only if ±ℜ(η)>0. Furthermore, \(|\psi^{\pm}_{\lambda }|^{2} w^{(\alpha ,\beta )}\) is in \(L^{1}(0,1)\) if and only if ±ℜ(δ)>0.
4.2 Spectral Analysis
We determine the spectrum and the spectral decomposition of T.
For functions f,g that are differentiable at a point x∈(−1,1), we define
where
and W(f,g) denotes the Wronskian
For \(-1\leq a<b\leq 1\) we denote by \(\mathcal{D}(a,b)\) the subspace of \(L^{2}((a,b),w^{(\alpha ,\beta )}(x)\,dx)\) consisting of functions f such that:
-
f is continuously differentiable on (a,b),
-
f′ is absolutely continuous on (a,b),
-
\(Tf\in L^{2}((a,b),w^{(\alpha ,\beta )}(x)\,dx)\).
Note that \(\mathcal{D}(a,b)\) is dense in \(L^{2}((a,b),w^{(\alpha ,\beta )}(x)\,dx)\).
Lemma 4.8
Let \(-1\leq a<b\leq 1\) and \(f,g \in\mathcal{D}(a,b)\). Then
Proof
We write the differential operator T as
Then it follows that
Using integration by parts, this is equal to
which gives the result. □
Let \(\mathcal{D}_{0} \subset\mathcal{H}\) consist of the functions in \(\mathcal{D}(-1,1)\) with support on a compact interval in (−1,1).
Proposition 4.9
The densely defined operator \((T,\mathcal{D}_{0})\) is symmetric.
Proof
Clearly, we have \(\lim_{x \downarrow-1}[f,\bar{g}](x) = \lim_{x\uparrow 1}[f,\bar{g}](x)=0\) for \(f,g \in\mathcal{D}_{0}\). Then the result follows from Lemma 4.8. □
The function x↦[f,g](x), x∈(−1,1), is constant if f and g are solutions of the eigenvalue equation Ty=λy. In the following lemma, we determine the value of this constant for the eigenfunctions \(\psi_{\lambda }^{\pm}\) and \(\phi_{\lambda }^{\pm}\).
Lemma 4.10
For \(\lambda \in \mathbb{C}\setminus(-\infty,-(\alpha +1)^{2})\),
where \(D=2^{\alpha +\beta +3}C=\frac{4\varGamma (\alpha +\beta +2)}{\varGamma (\alpha +1,\beta +1) }\) (see also (2.11)).
Note that \([\psi_{\lambda }^{-},\psi_{\lambda }^{+}]\) and \([\psi_{\lambda }^{-},\phi_{\lambda }^{-}]\) can be obtained from Lemma 4.10 using (α,β,x)↦(β,α,−x) and (δ,η)↦(−δ,−η), respectively.
Proof
We have
Using
we find
Now from the connection formula (4.1), we obtain
□
Let us mention that from the explicit formula (2.6) for c(−η;δ) and Lemma 4.10, it follows that \([\psi^{+}_{\lambda },\phi^{+}_{\lambda }]=0\) if and only if \(\lambda \in \mathbb{C}\setminus(-\infty,-(\alpha +1)^{2})\) is a solution of
for some \(n\in \mathbb{N}\), or equivalently,
Proposition 4.11
The symmetric operator \((T,\mathcal{D}_{0})\) has a unique self-adjoint extension.
We denote the self-adjoint extension again by T.
Proof
By [4, Theorem XIII.2.10], the adjoint of \((T,\mathcal{D}_{0})\) is \((T,\mathcal{D}(-1,1))\), so the deficiency spaces of \((T,\mathcal{D}_{0})\) consist of the solutions of the differential equations Tf=±if that are in \(\mathcal{H}\), [4, Corollary XIII.2.11]. Let \(f\in\mathcal{H}\) be a solution of Tf=if. Then f must be a linear combination of \(\phi_{i}^{+}\) and \(\phi_{i}^{-}\), since these are linearly independent solutions of this eigenvalue equation by Lemma 4.10. Note that ℜ(η(i))>0, so by Remark 4.7, \(\phi_{i}^{-}\) is not L 2 near −1, which implies that f is a multiple of \(\phi_{i}^{+}\). In the same way, it follows that f is a multiple of \(\psi^{+}_{i}\). But \(\phi_{i}^{+}\) and \(\psi_{i}^{+}\) are linearly independent by Lemma 4.10; hence f=0. In the same way, it follows that \(f \in\mathcal{H}\) satisfying Tf=−if is the zero function. So T has deficiency indices (0,0), which implies it has a unique self-adjoint extension. □
Assume \(\lambda \in \mathbb{C}\setminus \mathbb{R}\). In this case, ℜ(η),ℜ(δ)>0, so \(\phi_{\lambda }^{+} \in L^{2} ((-1,0),w^{(\alpha ,\beta )}(x)\,dx )\) and \(\psi_{\lambda }^{+} \in L^{2} ((0,1),w^{(\alpha ,\beta )}(x)\,dx )\). We define the Green kernel by
Then \(K(\cdot,y) \in\mathcal{H}\) for any y∈(−1,1). The Green kernel is useful for describing the resolvent operator \(R_{\lambda }=(T-\lambda )^{-1}\).
Lemma 4.12
For \(\lambda \in \mathbb{C}\setminus \mathbb{R}\), the resolvent R λ is given by
Proof
First note that if \(f \in\mathcal{D}_{0}\),
Now from Lemma 4.8 we obtain
since \((T-\lambda )\phi_{\lambda }^{+}=0\) and \((T-\lambda )\psi_{\lambda }^{+}=0\). Writing out the terms between brackets, we obtain
□
Now we can determine the spectral measure E of T by
see [4, Theorem XII.2.10]. We write
where \(\varDelta= \{ (x,y) \in \mathbb{R}^{2} \mid-1<x<1,\ x<y<1 \}\).
Let \(\lambda \in \mathbb{R}\). To compute the spectral measure, we have to consider the limit
Note that
and
We see that we have to distinguish several cases.
4.3 The Continuous Spectrum
We define an integral transform \(\mathcal{F}_{c}^{(2)}\) mapping \(f \in \mathcal{D}_{0}\) to a \(\mathbb{C}^{2}\)-valued function on \(\varOmega _{2}=(-\infty,-(\beta +1)^{2})\) by
where the functions \(\varphi_{\lambda }^{\pm}\) are defined by (2.13).
Proposition 4.13
Let \(a,b\in\varOmega _{2}\) with a<b. Then
where (recall from (2.9))
and \(v_{12}(\lambda ) = \overline{v_{21}(\lambda )}\). Here \(\delta _{\lambda }\) and \(\eta_{\lambda }\) are defined by (2.3).
Proof
First observe that \(\lim_{\varepsilon \downarrow0} \delta (\lambda +i\varepsilon )=\delta _{\lambda }\) and \(\lim_{\varepsilon \downarrow0} \delta (\lambda -i\varepsilon ) = \overline{\delta _{\lambda }}=-\delta _{\lambda }\). Similarly, \(\lim_{\varepsilon \downarrow0} \eta(\lambda +i\varepsilon )=\eta_{\lambda }\) and \(\lim _{\varepsilon \downarrow0} \eta(\lambda -i\varepsilon ) = \overline{\eta_{\lambda }}=-\eta_{\lambda }\). This gives us
Now the limit in (4.7) is equal to
Using the connection formula (4.1), this becomes
which is manifestly symmetric in x and y. Using \(\overline{\varphi _{\lambda }^{+}(x)}=\varphi_{\lambda }^{-}(x)\), we see that
Now we can symmetrize the double integral in (4.6) again, and then the result follows from (4.5). □
Next we define an integral transform \(\mathcal{F}_{c}^{(1)}\) mapping \(\mathcal{D}_{0}\) to complex-valued functions on \(\varOmega _{1}=(-(\beta +1)^{2},-(\alpha +1)^{2})\) by
where \(\varphi_{\lambda }(x)\) is defined by (2.12).
Proposition 4.14
Let \(a,b\in\varOmega _{1}\) with a<b. Then
where (recall from (2.7))
Proof
In this case,
Consequently, using Euler’s transformation for \({}_{2}F_{1}\)-functions,
and
so that the limit (4.7) is equal to
where we have used Lemma 4.4(i). Using the connection formula (4.2), we obtain
The result follows from (4.5) and (4.6) after symmetrizing the double integral. □
Since the spectrum is a closed set, the points \(-(\alpha +1)^{2}\) and \(-(\beta +1)^{2}\) must belong to the spectrum.
Proposition 4.15
The points \(-(\alpha +1)^{2}\) and \(-(\beta +1)^{2}\) belong to the continuous spectrum of T.
Proof
This follows from the fact that none of the eigenfunctions is in \(\mathcal {H}\) for these values of λ, see Remark 4.7. □
4.4 The Discrete Spectrum
Recall from (2.5) the finite set
where \(\lambda _{n}\) is defined by (2.4). For κ≥1, i.e., if \(\varOmega _{d}\) is nonempty, we define the integral transform \(\mathcal{F}_{d}\) on \(\mathcal{H}\) by
where \(\varphi_{\lambda _{n}}\) is defined by (2.14). Note that \(\varphi_{\lambda _{n}}=\phi_{\lambda _{n}}^{+}\).
Proposition 4.16
Let \(-(\alpha +1)^{2}<a<b\). If \(\varOmega _{d}\cap(a,b)\) consists of exactly one number \(\lambda _{n}\), then
where (recall from (2.10))
Furthermore, if \(\varOmega _{d}\cap(a,b)=\emptyset\), then
Proof
Assume \(\varOmega _{d}\cap(a,b)=\{\lambda _{n}\}\). By (4.5) and (4.6), we have
where \(\mathcal{C}\) is a small clockwise-oriented rectifiable closed curve encircling \(\lambda _{n}\) exactly once. The integral over the curve \(\mathcal{C}\) is equal to
By (4.3), we have c(−η(λ);δ(λ))=0 if and only if \(\lambda =\lambda _{n}\), \(n \in \mathbb{N}\), where \(\lambda _{n}\) is defined by (2.4). So in this case, (4.1) becomes \(\psi_{\lambda _{n}}^{+} = c(\eta(\lambda _{n});\delta (\lambda _{n}))\phi_{\lambda _{n}}^{+}\), from which we see that the integrand is symmetric in x and y. Symmetrizing the double integral gives the result.
Corollary 4.17
Suppose \(\varOmega _{d}\) is nonempty. Then the following orthogonality relations hold:
Proof
Let \(\lambda _{n},\lambda _{m}\in\varOmega _{d}\), and set \(f = \varphi_{\lambda _{m}}\) and \(g = \varphi_{\lambda _{n}}\). From Proposition 4.16, it follows that \(\langle\varphi_{\lambda _{m}}, \varphi_{\lambda _{n}} \rangle=0\) if \(\lambda _{n}\neq\lambda _{m}\). Furthermore, if \(\lambda _{n}=\lambda _{m}\), then
from which the result follows. □
We have now completely determined the spectrum of T.
Theorem 4.18
The self-adjoint closure of the densely defined operator \((T,\mathcal{D}_{0})\) has continuous spectrum \((-\infty,-(\alpha+1)^{2}]\) and (possibly empty) discrete spectrum \(\varOmega_{d}\). The sets \(\varOmega_{2}\) and \(\varOmega_{1}\) inside the continuous spectrum have multiplicity two and one, respectively.
4.5 The Integral Transform
We define an integral transform \(\mathcal{F}\) on \(\mathcal{D}_{0}\subset \mathcal{H}\) by
For \(f \in\mathcal{D}_{0}\), this coincides with the integral transform defined in (2.15).
Proposition 4.19
\(\mathcal{F}\) extends uniquely to an isometry \(\mathcal{F}: \mathcal{H} \to L^{2}(\mathcal{V})\).
Proof
For \(f,g\in\mathcal{D}_{0}\), we have
by Propositions 4.13, 4.14, and 4.16, so \(\mathcal{F}:\mathcal{D}_{0} \to L^{2}(\mathcal{V})\) is an isometry. By continuity of \(\mathcal{F}\) and density of \(\mathcal{D}_{0}\) in \(\mathcal {H}\), it extends uniquely to an isometry \(\mathcal{H} \to L^{2}(\mathcal{V})\). □
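In other words, the following Parseval-type identity holds; this is simply a restatement of the isometry property in the notation fixed above:

```latex
\langle f, g \rangle_{\mathcal{H}}
  = \langle \mathcal{F} f, \mathcal{F} g \rangle_{L^{2}(\mathcal{V})},
  \qquad f, g \in \mathcal{H}.
```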
Our next goal is to show that \(\mathcal{F}:\mathcal{H}\to L^{2}(\mathcal{V})\) is surjective and to determine its inverse. For convenience, we assume that T has no discrete spectrum.
For 0<a<1, we define
for all functions f,g for which the integral converges. Note that for \(f,g \in\mathcal{H}\), the limit a↑1 gives the inner product \(\langle f,\bar{g}\rangle\). Suppose now that \(f_{\lambda}\) is a solution of the eigenvalue equation \(Tf_{\lambda} = \lambda f_{\lambda}\). Then by Lemma 4.8,
We will use this expression with \(f_{\lambda }=\varphi_{\lambda }^{\pm}\), and we want to let a↑1. We need to consider several cases.
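For orientation: identities of the kind provided by Lemma 4.8 have the standard Sturm–Liouville form sketched below, where \([\,\cdot\,,\,\cdot\,]\) denotes a Wronskian-type bracket. The precise signs, weight, and bracket depend on the normalizations fixed in Sect. 2, so this is only a sketch, not the exact statement of the lemma:

```latex
(\lambda - \lambda')\,\langle f_{\lambda}, f_{\lambda'} \rangle_{a}
  = \bigl[ f_{\lambda}, f_{\lambda'} \bigr](a)
  - \bigl[ f_{\lambda}, f_{\lambda'} \bigr](-a).
```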
4.5.1 Case 1: \(\lambda,\lambda' \in \varOmega_{2}\)
From Lemma 4.6, we find, for x↓−1 (cf. the proof of Lemma 4.10),
The behavior at x=−1 of \([\varphi_{\lambda }^{-},\varphi_{\lambda '}^{+}](x)\) and \([\varphi_{\lambda }^{-},\varphi_{\lambda '}^{-}](x)\) follows from \(\varphi_{\lambda }^{+}(x) = \overline{\varphi_{\lambda }^{-}(x)}\). For x↑1, we use the expansion from (4.2) (recall that \(\varphi_{\lambda }^{\pm}= \lim_{\varepsilon \downarrow0} \phi _{\lambda +i\varepsilon }^{\pm}\)) and Lemma 4.6 to find
and
We will need the following behavior of the c-functions.
Lemma 4.20
The c-function defined by (2.6) satisfies
Proof
This follows from the definition (2.6) of the c-function and well-known asymptotic properties of the Γ-function, see, e.g., [2, Sect. 1.4]. □
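The asymptotic property of the Γ-function that is needed here is the classical estimate (see [2, Sect. 1.4]): for fixed \(x \in \mathbb{R}\),

```latex
|\Gamma(x+iy)| \sim \sqrt{2\pi}\, |y|^{x-\frac{1}{2}} e^{-\pi |y|/2},
  \qquad |y| \to \infty.
```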
Proposition 4.21
Let \(f \in C(\varOmega_{2})\) satisfy
for some ε>0, and let \(\lambda' \in \varOmega_{2}\). Then
Proof
Note that, for \(\lambda \neq \lambda'\),
and similar expressions are valid for \(\delta_{\lambda}\). Now use \(\delta_{\lambda} = i|\delta_{\lambda}|\) and \(\eta_{\lambda} = i|\eta_{\lambda}|\), and write \(N=-\frac{1}{2}\ln\bigl(\frac{1-a}{2}\bigr)\). Then
where
The terms with \(\xi_{\pm}^{+}\) vanish by the Riemann–Lebesgue lemma, whose applicability follows from Lemma 4.20 and the assumptions on f.
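Here the Riemann–Lebesgue lemma is used in its classical form:

```latex
\lim_{N \to \infty} \int_{\mathbb{R}} h(x)\, e^{iNx}\, dx = 0
  \qquad \text{for } h \in L^{1}(\mathbb{R}).
```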
Claim
Proof of Claim
Using (4.9) and \(\cos \theta _{1}-\cos \theta _{2} = -2\sin(\frac{\theta _{1}+\theta _{2}}{2}) \sin(\frac{\theta _{1}-\theta _{2}}{2})\), we obtain
We multiply the right-hand side of the above identity by f(λ) and integrate over λ. Since the function \(\lambda \mapsto\frac{\xi_{-}^{-}(\lambda )(\delta _{\lambda }+\delta _{\lambda '})-(\eta _{\lambda }+\eta_{\lambda '})}{\lambda -\lambda '}\) has a removable singularity at \(\lambda = \lambda'\) by Lemma 4.4(ii), it follows from the Riemann–Lebesgue lemma that the first term vanishes as N→∞. For the second term, we may use
for λ in a neighborhood of λ′ and for some B>0. Then we see that we can apply the Riemann–Lebesgue lemma again, which proves the claim. □
To finish the proof of the proposition, we use
if \(g \in L^{1}(A,B)\) is continuous. Then
provided \(\xi_{+}^{-}\,f\) and f are continuous functions in \(L^{1}(\varOmega_{2})\), which is indeed the case. Here we used the substitutions \(x=|\delta_{\lambda}|\) and \(x=|\eta_{\lambda}|\) before applying (4.10); note that \(\frac{dx}{d\lambda }=-\frac{1}{2x}\) in both cases. Finally, applying Lemma 4.4(ii) with \((x,y)=(\delta_{\lambda},\eta_{\lambda})\), the last expression becomes
which finishes the proof. □
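The limit (4.10) invoked above is a Dirichlet-kernel statement. In its classical normalization (which may differ from the normalization in (4.10) by a constant factor) it reads, for \(g \in L^{1}(0,B)\) continuous at 0:

```latex
\lim_{N \to \infty} \int_{0}^{B} g(x)\, \frac{\sin(Nx)}{x}\, dx
  = \frac{\pi}{2}\, g(0^{+}).
```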
The following result is proved in the same way as Proposition 4.21.
Proposition 4.22
Let \(f \in C(\varOmega_{2})\) satisfy the same conditions as in Proposition 4.21, and let \(\lambda' \in \varOmega_{2}\). Then
By combining Propositions 4.21 and 4.22, we obtain the following result.
Proposition 4.23
Let \(f_{1}\) and \(f_{2}\) satisfy the conditions from Proposition 4.21. Then
where
Proof
Let \(f_{1}\) and \(f_{2}\) satisfy the conditions of Proposition 4.21. Then from this proposition and an application of Fubini’s theorem, we obtain
From Propositions 4.21 and 4.22, we find three similar identities, leading to
which is the desired result. □
We need the inverse of the matrix A(λ) from Proposition 4.23.
Lemma 4.24
For \(\lambda \in \varOmega_{2}\), \(A(\lambda)^{-1} = V(\lambda)\), with \(V(\lambda)\) defined by (2.8).
Proof
We have
by Lemma 4.4(ii). Now it is straightforward to compute the inverse of A. The result then follows from the definition of V(λ) and Lemma 4.4(i). □
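The inversion referred to is the usual cofactor formula for 2×2 matrices (with generic entries a, b, c, d, not the quantities denoted by these letters elsewhere in the paper):

```latex
A = \begin{pmatrix} a & b \\ c & d \end{pmatrix},
  \qquad
A^{-1} = \frac{1}{ad - bc}
  \begin{pmatrix} d & -b \\ -c & a \end{pmatrix},
  \qquad ad - bc \neq 0.
```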
Let \(C_{0}(\varOmega _{2};\mathbb{C}^{2})\) denote the set of continuous \(\mathbb{C}^{2}\)-valued functions \(g= \big( \begin{array}{c} \scriptstyle g_{1}\\[-3pt] \scriptstyle g_{2} \end{array} \big)\) on Ω 2 satisfying
for some ε>0. For \(g \in C_{0}(\varOmega _{2};\mathbb{C}^{2})\), we define the function \(\mathcal{G}_{c}^{(2)}g\) by
Proposition 4.25
Let \(g \in C_{0}(\varOmega _{2};\mathbb{C}^{2})\) and \(\lambda \in \varOmega_{2}\). Then \((\mathcal {F}_{c}^{(2)} \mathcal{G}_{c}^{(2)} g)(\lambda ) = g(\lambda )\).
Proof
Let \(g \in C_{0}(\varOmega _{2};\mathbb{C}^{2})\), and define the \(\mathbb{C}^{2}\)-valued function f by \(f(\lambda )=\big( \begin{array}{c} \scriptstyle f_{1}(\lambda )\\[-3pt] \scriptstyle f_{2}(\lambda ) \end{array} \big) = \frac{1}{-i\delta _{\lambda }} A(\lambda )^{-1}g(\lambda )\). Since
by Lemma 4.20, the functions f 1 and f 2 satisfy the conditions from Proposition 4.21. Now Proposition 4.23 shows that
From Lemma 4.24, we see that the term inside square brackets is exactly \((\mathcal{G}_{c}^{(2)}g)(x)\). □
4.5.2 Case 2: \(\lambda,\lambda' \in \varOmega_{1}\)
In this case,
and for x↑1, we have
We have the following behavior of the c-functions.
Lemma 4.26
The c-function defined by (2.6) satisfies
In the same way as in Proposition 4.21, this leads to the following result.
Proposition 4.27
Let f be a continuous function satisfying
for some ε>0, and let \(\lambda' \in \varOmega_{1}\). Then
where (recall from (2.7)) \(v(\lambda') = \bigl(c(\delta_{\lambda'};\eta(\lambda'))\,c(-\delta_{\lambda'};\eta(\lambda'))\bigr)^{-1}\).
Note that
by Lemma 4.26. Let \(C_{0}(\varOmega_{1})\) denote the set of continuous functions g on \(\varOmega_{1}\) satisfying
for some ε>0. We define an integral transform \(\mathcal {G}_{c}^{(1)}\) on \(C_{0}(\varOmega_{1})\) by
Now similarly to Proposition 4.25, it follows from Proposition 4.27 that \(\mathcal {F}_{c}^{(1)}\) is a left-inverse of \(\mathcal{G}_{c}^{(1)}\).
Proposition 4.28
For \(g \in C_{0}(\varOmega_{1})\) and \(\lambda \in \varOmega_{1}\), we have \((\mathcal {F}_{c}^{(1)} \mathcal{G}_{c}^{(1)} g)(\lambda ) = g(\lambda )\).
4.6 The Integral Transform \(\mathcal{G}\)
We define \(\mathcal{G}\) on \(C_{0}(\varOmega _{1}) \cup C_{0}(\varOmega _{2};\mathbb{C}^{2})\) by \(\mathcal{G} = \mathcal{G}_{c}^{(1)} \oplus\mathcal{G}_{c}^{(2)}\). We will show that \(\mathcal{F}\) is a left-inverse of \(\mathcal{G}\). We need the following result.
Proposition 4.29
- (i):
-
Let \(\lambda \in \varOmega_{1}\) and \(g \in C_{0}(\varOmega_{1})\). Then \((\mathcal {F}_{c}^{(2)} \mathcal{G}_{c}^{(1)} g)(\lambda ) = \big( \begin{array}{c} \scriptstyle 0 \\[-3pt] \scriptstyle0 \end{array} \big)\).
- (ii):
-
Let \(\lambda \in \varOmega_{2}\) and \(g \in C_{0}(\varOmega _{2};\mathbb{C}^{2})\). Then \((\mathcal {F}_{c}^{(1)} \mathcal{G}_{c}^{(2)} g)(\lambda ) = 0\).
Proof
Let \(\lambda \in \varOmega_{2}\) and \(\lambda' \in \varOmega_{1}\). Then
and for x↑1,
Similarly to the proof of Proposition 4.21, it follows by application of the Riemann–Lebesgue lemma that
for suitable functions f. As in Proposition 4.23, we obtain from this \((\mathcal{F}_{c}^{(2)} \mathcal{G}_{c}^{(1)} g)(\lambda ) = \big( \begin{array}{c} \scriptstyle 0 \\[-3pt] \scriptstyle0 \end{array} \big)\).
In the same way, it follows from
for suitable functions f, that \((\mathcal{F}_{c}^{(1)} \mathcal {G}_{c}^{(2)} g)(\lambda ) = 0\). □
Combining Propositions 4.25, 4.28, and 4.29 shows that \((\mathcal{F}\circ\mathcal{G})g=g\) for \(g \in C_{0}(\varOmega _{1}) \cup C_{0}(\varOmega _{2};\mathbb{C}^{2})\).
Proposition 4.30
The integral transform \(\mathcal{G}\) extends uniquely to an operator \(\mathcal{G}:L^{2}(\mathcal{V}) \to\mathcal{H}\) such that \(\mathcal{G} = \mathcal {F}^{-1}\).
Proof
Let \(g \in C_{0}(\varOmega _{1}) \cup C_{0}(\varOmega _{2};\mathbb{C}^{2})\). Then
by Proposition 4.19. Since \(C_{0}(\varOmega _{1}) \cup C_{0}(\varOmega _{2};\mathbb{C}^{2})\) is dense in \(L^{2}(\mathcal{V})\), \(\mathcal{G}\) extends by continuity uniquely to an operator \(\mathcal {G}:L^{2}(\mathcal{V}) \to\mathcal{H}\), and \(\mathcal{F} \circ\mathcal{G}\) extends to the identity operator on \(L^{2}(\mathcal{V})\); hence \(\mathcal{G} = \mathcal{F}^{-1}\). □
Remark 4.31
In case the discrete spectrum Ω d is nonempty, the inverse of \(\mathcal{F}\) is the extension of the operator \(\mathcal{G} = \mathcal {G}^{(1)}_{c} \oplus\mathcal{G}^{(2)}_{c} \oplus\mathcal{G}_{d}\), with
for any function \(g:\varOmega _{d} \to \mathbb{C}\). The proof in this case is the same as in the case where the discrete spectrum is empty.
References
Andrews, G.E., Askey, R.: Classical orthogonal polynomials. In: Polynômes orthogonaux et applications. Lecture Notes in Math., vol. 1171, pp. 36–62. Springer, Berlin (1985)
Andrews, G.E., Askey, R., Roy, R.: Special Functions. Encycl. Math. Appl., vol. 71. Cambridge University Press, Cambridge (1999)
Askey, R., Wilson, J.: Some basic hypergeometric orthogonal polynomials that generalize Jacobi polynomials. Mem. Am. Math. Soc. 54, 319 (1985)
Dunford, N., Schwartz, J.T.: Linear Operators. Part II: Spectral Theory. Self Adjoint Operators in Hilbert Space. Interscience/Wiley, New York/London (1963)
Durán, A.J., Van Assche, W.: Orthogonal matrix polynomials and higher-order recurrence relations. Linear Algebra Appl. 219, 261–280 (1995)
Groenevelt, W.: The Wilson function transform. Int. Math. Res. Not. 52, 2779–2817 (2003)
Groenevelt, W.: The vector-valued big q-Jacobi transform. Constr. Approx. 29(1), 85–127 (2009)
Groenevelt, W., Ismail, M.E.H., Koelink, E.: Spectral decompositions and matrix-valued orthogonal polynomials. Adv. Math. 244, 91–105 (2013)
Groenevelt, W., Koelink, E., Rosengren, H.: Continuous Hahn functions as Clebsch–Gordan coefficients. In: Theory and Applications of Special Functions. Dev. Math., vol. 13, pp. 221–284. Springer, New York (2005)
Grünbaum, F.A.: The bispectral problem: an overview. In: Special Functions 2000: Current Perspective and Future Directions, Tempe, AZ. NATO Sci. Ser. II Math. Phys. Chem., vol. 30, pp. 129–140. Kluwer Academic, Dordrecht (2001)
Ismail, M.E.H.: Classical and Quantum Orthogonal Polynomials in One Variable. Cambridge University Press, Cambridge (2009)
Ismail, M.E.H., Koelink, E.: Spectral properties of operators using tridiagonalisation. Anal. Appl. 10(3), 327–343 (2012)
Ismail, M.E.H., Koelink, E.: Spectral analysis of certain Schrödinger operators. SIGMA 8, 061 (2012), 19 pages
Koelink, E.: Spectral theory and special functions. In: Laredo Lectures on Orthogonal Polynomials and Special Functions. Adv. Theory Spec. Funct. Orthogonal Polynomials, pp. 45–84. Nova Sci., Hauppauge (2004)
Koornwinder, T.H.: Jacobi functions and analysis on noncompact semisimple Lie groups. In: Special Functions: Group Theoretical Aspects and Applications, Math. Appl., pp. 1–85. Reidel, Dordrecht (1984)
Koornwinder, T.H.: Special orthogonal polynomial systems mapped onto each other by the Fourier–Jacobi transform. In: Orthogonal Polynomials and Applications, Bar-le-Duc, 1984. Lecture Notes in Math., vol. 1171, pp. 174–183. Springer, Berlin (1985)
Neretin, Yu.A.: Some continuous analogues of the expansion in Jacobi polynomials, and vector-valued orthogonal bases. Funkc. Anal. Prilozh. 39(2), 31–46, 94 (2005) (in Russian). Translation in Funct. Anal. Appl. 39(2), 106–119 (2005)
Ruijsenaars, S.N.M.: A generalized hypergeometric function satisfying four analytic difference equations of Askey–Wilson type. Commun. Math. Phys. 206(3), 639–690 (1999)
Wilson, J.A.: Some hypergeometric orthogonal polynomials. SIAM J. Math. Anal. 11(4), 690–701 (1980)
Communicated by Tom H. Koornwinder.
Groenevelt, W., Koelink, E. A Hypergeometric Function Transform and Matrix-Valued Orthogonal Polynomials. Constr Approx 38, 277–309 (2013). https://doi.org/10.1007/s00365-013-9207-1
Keywords
- Hypergeometric function transform
- Matrix-valued orthogonal polynomials
- Second-order differential operator
- Five-term operator
- Spectral analysis