Abstract
We give some structural formulas for the family of matrix-valued orthogonal polynomials of size \(2\times 2\) introduced by C. Calderón et al. in an earlier work, which are common eigenfunctions of a differential operator of hypergeometric type. Specifically, we give a Rodrigues formula that allows us to write this family of polynomials explicitly in terms of the classical Jacobi polynomials, and write, for the sequence of orthonormal polynomials, the three-term recurrence relation and the Christoffel–Darboux identity. We obtain a Pearson equation, which enables us to prove that the sequence of derivatives of the orthogonal polynomials is also orthogonal, and to compute a Rodrigues formula for these polynomials as well as a matrix-valued differential operator having these polynomials as eigenfunctions. We also describe the second-order differential operators of the algebra associated with the weight matrix.
1 Introduction
In the last few years, the search for examples of matrix-valued orthogonal polynomials that are common eigenfunctions of a second-order differential operator, that is to say, satisfying a bispectral property in the sense of [13], has received a lot of attention after the seminal work of A. Durán in [15].
The theory of matrix-valued orthogonal polynomials was started by Krein in 1949 [37, 38] (see also [1, 2]), in connection with spectral analysis and moment problems. Nevertheless, the first examples of orthogonal matrix polynomials satisfying this extra property and not reducible to the scalar case appeared more recently in [19, 25, 27,28,29]. The collection of examples has been growing lately (see for instance [3, 4, 16, 17, 21, 22, 26, 34,35,36, 40,41,42]). Moreover, the problem of giving a general classification of these families of matrix-valued orthogonal polynomials as solutions of the so-called Matrix Bochner Problem has also been addressed recently in [7, 8] for the special case of \(2\times 2\) hypergeometric matrix differential operators.
As in the case of classical orthogonal polynomials, the families of matrix-valued orthogonal polynomials satisfy many formal properties such as structural formulas (see for instance [3, 18, 20, 24, 34]), which have been very useful to compute explicitly the orthogonal polynomials associated with several of these families. Having these explicit formulas is essential when one looks for applications of these matrix-valued bispectral polynomials, such as in the problem of time and band limiting over a non-commutative ring and matrix-valued commuting operators, see [10,11,12, 30,31,32].
Recently, in [4], a new family of matrix-valued orthogonal polynomials of size \(2\times 2\) was introduced, which are common eigenfunctions of a differential operator of hypergeometric type (in the sense defined by Juan A. Tirao in [44]):
In particular, the polynomials \((P^{\left( \alpha ,\beta ,v\right) }_n)_{n\ge 0}\) introduced in [4], orthogonal with respect to the weight matrix \(W^{(\alpha ,\beta ,v)}\) given in (2.4) and (2.5), are common eigenfunctions of a hypergeometric operator with matrix eigenvalues \(\Lambda _n\), which are diagonal matrices with no repetition in their entries. This fact could be especially useful if one intends to use this family of polynomials in the context of time and band limiting, where the commutativity of the matrix-valued eigenvalues \((\Lambda _n)_n\) could play an important role.
In this paper, we give some structural formulas for the family of matrix-valued orthogonal polynomials introduced in [4]. In particular, in Sect. 3, we give a Rodrigues formula (see Theorem 3.1), which allows us to write this family of polynomials explicitly in terms of the classical Jacobi polynomials (see Corollary 3.3).
In Sect. 4, this Rodrigues formula allows us to compute the norms of the sequence of monic orthogonal polynomials and therefore, we can find the coefficients of the three-term recurrence relation and the Christoffel–Darboux identity for the sequence of orthonormal polynomials.
In Sect. 5, we obtain a Pearson equation (see Theorem 5.4), which allows us to prove that the sequence of derivatives of k-th order, \(k\ge 1\), of the orthogonal polynomials is also orthogonal with respect to the weight matrix given explicitly in Proposition 5.3.
In Sect. 6, following the ideas in [34, Section 5.1], we use the Pearson equation to give explicit lowering and raising operators for the sequence of derivatives. Thus, we deduce a Rodrigues formula for these polynomials and find a matrix-valued differential operator that has these matrix-valued polynomials as common eigenfunctions.
Finally, in Sect. 7, we describe the algebra of second-order differential operators associated with the weight matrix \(W^{(\alpha ,\beta ,v)}\) given in (2.4) and (2.5). Indeed, for a given weight matrix W, the analysis of the algebra D(W) of all differential operators that have a sequence of matrix-valued orthogonal polynomials with respect to W as eigenfunctions has received much attention in the literature in the last fifteen years [6, 8, 9, 33, 42, 45, 47]. While for classical orthogonal polynomials, the structure of this algebra is very well-known (see [39]), in the matrix setting, where this algebra is non-commutative, the situation is highly non-trivial.
2 Preliminaries
In this section, we give some background on matrix-valued orthogonal polynomials (see [23] for further details). A weight matrix W is a complex \(N\times N\) matrix-valued integrable function on the interval (a, b) such that W is positive definite almost everywhere and has finite moments of all orders, i.e., \( \int _a^b t^{n}\mathrm{d}W(t)\in {\mathbb {C}}^{N \times N}, \ n \in {\mathbb {N}}\). The weight matrix W induces a Hermitian sesquilinear form,
for any pair of \(N\times N\) matrix-valued functions P(t) and Q(t), where \(Q^{*}(t)\) denotes the conjugate transpose of Q(t).
A sequence \((P_n)_{n\ge 0}\) of orthogonal polynomials with respect to a weight matrix W is a sequence of matrix-valued polynomials satisfying that \(P_n(t)\), \(n\ge 0\), is a matrix polynomial of degree n with non-singular leading coefficient, and \(\left\langle P_{n},P_{m}\right\rangle _{W}=\Delta _n\delta _{n,m}\), where \(\Delta _n\), \(n\ge 0\), is a positive definite matrix. When \(\Delta _n=I\), where I denotes the identity matrix, we say that the polynomials \((P_n)_{n\ge 0}\) are orthonormal. In particular, when the leading coefficient of \(P_n(t)\), \(n\ge 0\), is the identity matrix, we say that the polynomials \((P_n)_{n \ge 0}\) are monic.
Given a weight matrix W, there exists a unique sequence of monic orthogonal polynomials \(\left( P_{n}\right) _{n\ge 0}\) in \( {\mathbb {C}} ^{N\times N}[t]\), and any other sequence of orthogonal polynomials \(\left( Q_{n}\right) _{n\ge 0}\) can be written as \(Q_{n}(t)=K_{n}P_{n}(t)\) for some non-singular matrix \(K_{n}.\)
Any sequence of monic orthogonal matrix-valued polynomials \(\left( P_{n}\right) _{n\ge 0}\) satisfies a three-term recurrence relation
where \(P_{-1}(t)=0\), \(P_{0}(t)=I\). The \(N \times N \) matrix coefficients \(A_{n}\) and \(B_{n}\) enjoy certain properties; in particular, \(A_{n}\) is non-singular for any n.
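The three-term recurrence is easy to verify in exact arithmetic in the scalar case. The following sketch (a scalar sanity check only, not the matrix-valued construction of this paper) builds the monic orthogonal polynomials for the Legendre weight \(w(t)=1\) on \((-1,1)\) by Gram–Schmidt and checks the recurrence \(tP_n(t)=P_{n+1}(t)+B_nP_n(t)+A_nP_{n-1}(t)\), with \(A_n\) nonzero for every n; the helper names are illustrative.

```python
from fractions import Fraction

def moment(j):
    # exact moment  ∫_{-1}^{1} t^j dt  for the Legendre weight w(t) = 1
    return Fraction(0) if j % 2 else Fraction(2, j + 1)

def inner(p, q):
    # <p, q>_w for polynomials given as coefficient lists
    return sum(a * b * moment(i + j)
               for i, a in enumerate(p) for j, b in enumerate(q))

def monic_ops(N):
    # Gram-Schmidt on 1, t, t^2, ...: the unique monic orthogonal family
    ps = []
    for n in range(N + 1):
        p = [Fraction(0)] * n + [Fraction(1)]          # t^n
        for q in ps:
            c = inner(p, q) / inner(q, q)
            p = [a - c * (q[i] if i < len(q) else 0) for i, a in enumerate(p)]
        ps.append(p)
    return ps

ps = monic_ops(5)
for n in range(1, 5):
    tPn = [Fraction(0)] + ps[n]                        # coefficients of t * P_n
    Bn = inner(tPn, ps[n]) / inner(ps[n], ps[n])
    An = inner(tPn, ps[n - 1]) / inner(ps[n - 1], ps[n - 1])
    # the residual of  t P_n - P_{n+1} - B_n P_n - A_n P_{n-1}  must vanish identically
    res = list(tPn)
    for c, poly in ((Fraction(1), ps[n + 1]), (Bn, ps[n]), (An, ps[n - 1])):
        for i, a in enumerate(poly):
            res[i] -= c * a
    assert all(a == 0 for a in res)
    assert An == Fraction(n * n, 4 * n * n - 1) != 0   # scalar analogue of A_n non-singular
```

In the matrix setting the same structure holds, with \(A_n\) and \(B_n\) replaced by \(N\times N\) matrix coefficients and "nonzero" strengthened to "non-singular".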
Two weights W and \({\widetilde{W}}\) are said to be equivalent if there exists a non-singular matrix M, which does not depend on t, such that
A weight matrix W reduces to a smaller size if there exists a non-singular matrix M such that
where \(W_{1}\) and \(W_{2}\) are weights of smaller size. A weight matrix W is said to be irreducible if it does not reduce to a smaller size (see [19, 46]).
Let D be a right-hand side ordinary differential operator with matrix-valued polynomial coefficients,
The operator D acts on a polynomial function \(P\left( t\right) \) as \( PD=\sum _{i=0}^{s}\partial ^{i}PF_{i}\left( t\right) .\)
We say that the differential operator D is symmetric with respect to W if
The differential operator \(D=\displaystyle \frac{\mathrm{d}^{2}}{\mathrm{d}t^{2}}F_{2}(t)+\frac{\mathrm{d}}{\mathrm{d}t} F_{1}(t)+F_{0}\) is symmetric with respect to W if and only if [19, Theorem 3.1]
and
We will need the following result to find the Rodrigues’ formula for the sequence of orthogonal polynomials with respect to a weight matrix W.
Theorem 2.1
[18, Lemma 1.1] Let \(F_{2}\), \(F_{1}\) and \(F_{0}\) be matrix polynomials of degrees not larger than 2, 1, and 0, respectively. Let W, \(R_{n}\) be \(N\times N\) matrix functions twice and n times differentiable, respectively, in an open set \(\Omega \) of the real line. Assume that W(t) is non-singular for \(t\in \Omega \) and that it satisfies the identity and the differential equations in (2.2). Define the functions \(P_{n}\), \(n\ge 1\), by
If for a matrix \(\Lambda _{n}\), the function \(R_{n}\) satisfies
then the function \(P_{n}\) satisfies
2.1 The Family of Matrix-Valued Orthogonal Polynomials
In [4], the authors introduce a Jacobi-type weight matrix \( W^{\left( \alpha ,\beta ,v\right) }\left( t\right) \) and a differential operator \(D^{\left( \alpha ,\beta ,v\right) }\) such that \(D^{\left( \alpha ,\beta ,v\right) }\) is symmetric with respect to the weight matrix \( W^{\left( \alpha ,\beta ,v\right) }\left( t\right) \).
Let \(\alpha \), \(\beta \), \(v\in {\mathbb {R}}\), \(\alpha ,\beta >-1\) and \(|\alpha -\beta |<|v|<\alpha +\beta +2\). We consider the weight matrix function
with
where, for the sake of clarity in the rest of the paper, we use the notation:
\(W^{\left( \alpha ,\beta ,v\right) }\) is an irreducible weight matrix and the hypergeometric-type differential operator given by
where
and
is symmetric with respect to the weight matrix \(W^{\left( \alpha ,\beta ,v\right) }\).
In the same paper, the authors also give the corresponding monic orthogonal polynomials in terms of the hypergeometric function \(_{2}H_{1}\left( {C,U,V};t\right) \) defined by J. A. Tirao in [44] and their three-term recurrence relation.
Proposition 2.2
[4, Theorem 4.3] Let \(\left( P_{n}^{\left( \alpha ,\beta ,v\right) }\right) _{n\ge 0}\) be the sequence of matrix-valued monic orthogonal polynomials associated with the weight function \(W^{\left( \alpha ,\beta ,v\right) }(t)\). Then, \(P_{n}^{\left( \alpha ,\beta ,v\right) }\) is an eigenfunction of the differential operator \( D^{\left( \alpha ,\beta ,v\right) }\) with diagonal eigenvalue
Moreover, by [4, Section 4.2], the matrix-valued monic orthogonal polynomials \(\left( P_{n}^{\left( \alpha ,\beta ,v\right) }\right) _{n\ge 0}\) are given by
where
and \(\left[ C,U,V\right] _{k}\) is defined inductively as \(\left[ C,U,V \right] _{0}=I\) and
Proposition 2.3
[4, Theorem 3.12] The monic orthogonal polynomials \(\left( P_{n}^{\left( \alpha ,\beta ,v\right) }\right) _{n\ge 0}\) satisfy the three-term recurrence relation
where
with
and the entries of \(B_n=B^{\left( \alpha ,\beta ,v\right) }_n\), \(n\ge 0\), are
Using the symmetry condition (2.1) and the three-term recurrence relation (2.13), one can easily see that the coefficients \(A_{n}^{\left( \alpha ,\beta ,v\right) }\) and \(B_{n}^{\left( \alpha ,\beta ,v\right) }\) satisfy the identities:
3 Rodrigues Formula
In this section, we will provide a Rodrigues formula for the sequence of monic orthogonal polynomials \(\left( P_{n}^{\left( \alpha ,\beta ,v\right) }\right) _{n\ge 0}\) with respect to the weight matrix \(W=W^{\left( \alpha ,\beta ,v\right) }\) in (2.4). Moreover, the Rodrigues formula will allow us to find an explicit expression for the polynomials in terms of Jacobi polynomials.
Theorem 3.1
Consider the weight matrix \(W(t)=W^{\left( \alpha ,\beta ,v\right) }(t)\) given by the expression in (2.4) and (2.5). Consider the matrix-valued functions \(\left( P_{n}\right) _{n\ge 0}\) and \(\left( R_{n}\right) _{n\ge 0}\) defined by
with
where \((c_n)_n\) and \((d_n)_n\) are arbitrary sequences of complex numbers. Then, \(P_n(t)\) is a polynomial of degree n with non-singular leading coefficient equal to
where \((a)_n=a(a+1)\ldots (a+n-1)\) denotes the usual Pochhammer symbol. Moreover, if we put
then \(\left( P_{n}\right) _{n\ge 0}\) is a sequence of monic orthogonal polynomials with respect to W and \(P_{n}=P_{n}^{\left( \alpha ,\beta ,v\right) }.\)
Proof
Let W be the weight matrix given in (2.4), and let \(F_{2}\), \(F_{1}\), \(F_{0}\) and \(\Lambda _{n}\) be the polynomial coefficients defined in (2.8)–(2.10).
Following straightforward computations, we can prove that the matrix-valued function \(R_{n}(t)\) satisfies the equation
Theorem 2.1 guarantees that the function \(P_{n}\left( t\right) =R_{n}^{(n)}\left( t\right) \left( W\left( t\right) \right) ^{-1} \) is an eigenfunction of \( D^{\left( \alpha ,\beta ,v\right) }\) with eigenvalue \(\Lambda _{n}\) given in (2.10). Then, \( P_{n}^{\left( \alpha ,\beta ,v\right) }\left( t\right) \) and \(P_{n}\left( t\right) \) satisfy the same differential equation.
We will prove that \(P_{n}\) is a polynomial of degree n with non-singular leading coefficient. We will use the following Rodrigues formula for the classical Jacobi polynomial \(p_{n}^{(\alpha ,\beta )}(t)\) [43, Chapter IV]
where
Thus, we obtain
We can rewrite \(\left( W\left( t\right) \right) ^{-1}\) as
with
Observe that \(R_{n,0}J_{0}=0.\) Thus, \(P_{n}\left( t\right) \) becomes
Hence, \(P_{n}\left( t\right) \) is a polynomial of degree n if and only if \(t=0\) and \(t=1\) are zeros of the following polynomial
and \(t=1\) has multiplicity two, i.e., \(Q\left( 0\right) =Q\left( 1\right) =Q^{\prime }\left( 1\right) =0\).
Taking into account that \(p_{n}^{(\alpha ,\beta )}(1)=\dfrac{\Gamma \left( n+\alpha +1\right) }{n!\Gamma \left( \alpha +1\right) }\) and \(p_{n}^{(\alpha ,\beta )}(-1)=(-1)^{n}\dfrac{\Gamma \left( n+\beta +1\right) }{n!\Gamma \left( \beta +1\right) }\), we have
Now, differentiating \(Q\left( t\right) \) with respect to t and considering the identity
we obtain
This shows that \(Q\left( t\right) \) is divisible by \(t\left( t-1\right) ^{2}\); therefore, \(P_{n}\left( t\right) \) is a polynomial of degree n, since \(\deg \left( Q(t)\right) =n+3\).
Observe that the leading coefficient of \(P_{n}\left( t\right) \) is determined by the leading coefficient of \(n!\,p_{n}^{(\alpha +2,\beta )}(1-2t)R_{n,2}J_{2}t^{4}\). Considering (3.4), we have
The previous matrix coefficient is non-singular since \(|\alpha -\beta |<|v|<\alpha +\beta +2.\)
Moreover, if (3.3) holds true, then \(P_{n}\left( t\right) \) is a monic polynomial and equal to \( P_{n}^{\left( \alpha ,\beta ,v\right) }\left( t \right) \). \(\square \)
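The endpoint evaluations used in the proof are classical and easy to confirm numerically. The sketch below (an independent check, assuming the standard normalization of the Jacobi polynomials \(p_n^{(\alpha ,\beta )}\) on \([-1,1]\) and their classical three-term recurrence) compares \(p_{n}^{(\alpha ,\beta )}(1)\) and \(p_{n}^{(\alpha ,\beta )}(-1)\) with the Gamma-quotient formulas, which can equivalently be written with Pochhammer symbols as \((\alpha +1)_n/n!\) and \((-1)^n(\beta +1)_n/n!\):

```python
import math

def jacobi_vals(x, N, a, b):
    # p_0, ..., p_N at x via the classical Jacobi three-term recurrence
    vals = [1.0]
    if N >= 1:
        vals.append((a + 1) + (a + b + 2) * (x - 1) / 2)
    for n in range(2, N + 1):
        c1 = 2 * n * (n + a + b) * (2 * n + a + b - 2)
        c2 = (2 * n + a + b - 1) * ((2 * n + a + b) * (2 * n + a + b - 2) * x
                                    + a * a - b * b)
        c3 = 2 * (n + a - 1) * (n + b - 1) * (2 * n + a + b)
        vals.append((c2 * vals[-1] - c3 * vals[-2]) / c1)
    return vals

def poch(c, n):
    # Pochhammer symbol (c)_n = c (c+1) ... (c+n-1), with (c)_0 = 1
    out = 1.0
    for k in range(n):
        out *= c + k
    return out

a, b, N = 2.5, 0.5, 6
at1 = jacobi_vals(1.0, N, a, b)
atm1 = jacobi_vals(-1.0, N, a, b)
for n in range(N + 1):
    fn = math.factorial(n)
    # p_n(1) = Gamma(n+a+1)/(n! Gamma(a+1)) = (a+1)_n / n!
    assert math.isclose(at1[n], math.gamma(n + a + 1) / (fn * math.gamma(a + 1)),
                        rel_tol=1e-10)
    assert math.isclose(at1[n], poch(a + 1, n) / fn, rel_tol=1e-10)
    # p_n(-1) = (-1)^n Gamma(n+b+1)/(n! Gamma(b+1))
    assert math.isclose(atm1[n],
                        (-1) ** n * math.gamma(n + b + 1) / (fn * math.gamma(b + 1)),
                        rel_tol=1e-10)
```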
Corollary 3.2
Consider the weight matrix \(W^{(\alpha ,\beta ,v)}(t)\) given in (2.4) and (2.5). Then, the monic orthogonal polynomials \(P_{n}^{\left( \alpha ,\beta ,v\right) }(t)\) satisfy the Rodrigues formula
We can see in the proof of Theorem 3.1 that Rodrigues’ formula allows us to find an explicit expression for the polynomials in terms of the classical Jacobi polynomials.
Corollary 3.3
Consider the matrix-valued function \({\widetilde{W}}^{(\alpha ,\beta ,v)}(t)\) given in (2.5) and let \(R_{n,i}^{(\alpha ,\beta ,v)}\), \(i=0,1,2\), be as in Theorem 3.1. Define the coefficients \(c_n\) and \(d_n\) as in (3.3). Then, the sequence of monic orthogonal polynomials \(\left( P_{n}\right) _{n\ge 0},\) defined by (3.1) can be written as
Moreover,
with
Proof
The expression in (3.5) follows from the proof above. To obtain (3.6), we use the following property of the classical Jacobi polynomials \(p_{n}^{(\alpha ,\beta )}(t)\) [43, Section 4.5]:
in (3.5). \(\square \)
4 Orthonormal Polynomials
In this section, we give an explicit expression for the norm of the matrix-valued polynomials \( \left( P_{n}^{\left( \alpha ,\beta ,v\right) }\right) _{n\ge 0} \). In addition, for the sequence of orthonormal polynomials, we show the three-term recurrence relation and the Christoffel–Darboux formula, introduced for a general sequence of matrix-valued orthogonal polynomials in [14] (see also [23]).
Proposition 4.1
The norm of the monic orthogonal polynomials \(P_{n}^{\left( \alpha ,\beta ,v\right) }\left( t\right) \), \(n\ge 0\), is determined by
Therefore, the sequence of polynomials
is orthonormal with respect to W.
Proof
Let \(P_{n}^{\left( \alpha ,\beta ,v\right) }\left( t\right) =\sum _{k=0}^{n} {\mathcal {P}}_{n}^{k} t^{k};\) using Rodrigues’ formula, we have
Integrating by parts n times, we have,
where \(B\left( x,y\right) =\int _{0}^{1}t^{x-1}\left( 1-t\right) ^{y-1} \mathrm{d}t\) is the Beta function. Using the following property,
we obtain,
Using the expressions in (3.2), after some straightforward computations, we complete the proof. \(\square \)
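The proof rests on the classical identity \(B(x,y)=\Gamma (x)\Gamma (y)/\Gamma (x+y)\) for the Beta function. Here is a quick numerical confirmation (a generic check, independent of the matrix setting), comparing a composite Simpson approximation of the defining integral with the Gamma quotient:

```python
import math

def beta_quad(x, y, panels=2000):
    # composite Simpson approximation of  ∫_0^1 t^(x-1) (1-t)^(y-1) dt
    f = lambda t: t ** (x - 1) * (1 - t) ** (y - 1)
    h = 1.0 / panels
    s = f(0.0) + f(1.0)
    for i in range(1, panels):
        s += (4 if i % 2 else 2) * f(i * h)
    return s * h / 3

for x, y in ((2.5, 3.5), (2.0, 2.0), (3.0, 4.0)):
    exact = math.gamma(x) * math.gamma(y) / math.gamma(x + y)
    assert math.isclose(beta_quad(x, y), exact, rel_tol=1e-6)
```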
The sequence of orthonormal polynomials satisfies the following properties.
Proposition 4.2
The orthonormal polynomials \(\left( {\widetilde{P}}_{n}^{\left( \alpha ,\beta ,v\right) }\right) _{n\ge 0}\) satisfy the three-term recurrence relation
with
where \(B^{\left( \alpha ,\beta ,v\right) }_{n}\) is the coefficient of the three-term recurrence relation (2.13) for the monic orthogonal polynomials \(\left( P_{n}^{\left( \alpha ,\beta ,v\right) }\right) _{n\ge 0}\). Clearly, \({\widetilde{B}}_{n}^{\left( \alpha ,\beta ,v\right) }\) is a Hermitian matrix.
Proof
By replacing the identity \({\widetilde{P}}_{n}^{\left( \alpha ,\beta ,v\right) }\left( t\right) =\left\| P_{n}^{\left( \alpha ,\beta ,v\right) } \right\| ^{-1}P_{n}^{\left( \alpha ,\beta ,v\right) }(t)\) in the three-term recurrence relation (2.13) and using identity (2.17), we obtain (4.2); by (2.18), one verifies that \(\left( {\widetilde{B}}^{\left( \alpha ,\beta ,v\right) }_{n}\right) ^{*}={\widetilde{B}}^{\left( \alpha ,\beta ,v\right) }_{n}\). \(\square \)
We also have the following Christoffel–Darboux formula for the sequence of orthonormal polynomials \(\left( {\widetilde{P}}_{n}^{\left( \alpha ,\beta ,v\right) }\right) _{n\ge 0}\):
Hence, the sequence of monic polynomials \(\left( P_{n}^{\left( \alpha ,\beta ,v\right) }\right) _{n\ge 0}\) satisfies
where the explicit expression of \(\left\| P_{n}^{\left( \alpha ,\beta ,v\right) } \right\| ^{-2}\) follows from (4.1).
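The Christoffel–Darboux identity is also easy to test in the scalar case. The sketch below (a scalar sanity check with orthonormal Legendre polynomials on \((-1,1)\); the normalization \(\sqrt{(2n+1)/2}\) and the recurrence coefficient \(a_n=n/\sqrt{4n^2-1}\) are the classical ones) verifies that the kernel \(\sum _{k=0}^{n}\widetilde{p}_k(x)\widetilde{p}_k(y)\) equals \(a_{n+1}\bigl(\widetilde{p}_{n+1}(x)\widetilde{p}_n(y)-\widetilde{p}_n(x)\widetilde{p}_{n+1}(y)\bigr)/(x-y)\):

```python
import math

def legendre_vals(x, N):
    # Legendre P_0, ..., P_N at x via  n P_n = (2n-1) x P_{n-1} - (n-1) P_{n-2}
    vals = [1.0, x]
    for n in range(2, N + 1):
        vals.append(((2 * n - 1) * x * vals[-1] - (n - 1) * vals[-2]) / n)
    return vals[:N + 1]

def orthonormal(x, N):
    # orthonormal with respect to dt on (-1,1): multiply P_n by sqrt((2n+1)/2)
    return [math.sqrt((2 * n + 1) / 2) * v
            for n, v in enumerate(legendre_vals(x, N))]

x, y, N = 0.3, -0.7, 6
px, py = orthonormal(x, N + 1), orthonormal(y, N + 1)
kernel = sum(px[k] * py[k] for k in range(N + 1))
aN1 = (N + 1) / math.sqrt(4 * (N + 1) ** 2 - 1)   # recurrence coefficient a_{N+1}
cd = aN1 * (px[N + 1] * py[N] - px[N] * py[N + 1]) / (x - y)
assert math.isclose(kernel, cd, rel_tol=1e-9)
```

In the matrix-valued setting, the same identity holds with the scalar products replaced by matrix products and the coefficient \(a_{n+1}\) by the corresponding matrix recurrence coefficient, as in (4.3).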
5 The Derivatives of the Orthogonal Matrix-Valued Polynomials
In this section, we prove that polynomials in the sequence of derivatives of the orthogonal matrix polynomials \(\left( P_{n}^{\left( \alpha ,\beta ,v\right) }\right) _{n \ge 0}\) are also orthogonal by obtaining a Pearson equation for the weight matrix \(W^{\left( \alpha ,\beta ,v\right) }(t).\)
Let \(\displaystyle \frac{\mathrm{d}^{k}}{\mathrm{d}t^{k}}P_{n}^{\left( \alpha ,\beta ,v\right) }\left( t\right) \) be the derivative of order k of the monic polynomial \(P_{n}^{\left( \alpha ,\beta ,v\right) }(t) \), for \(n\ge k\). Then,
are monic polynomials of degree \(n-k\) for all \( n \ge k.\)
The polynomial \(P_{n}^{\left( \alpha ,\beta ,v\right) }\left( t\right) \) is an eigenfunction of the operator \(D^{\left( \alpha ,\beta ,v\right) }\) given above in (2.7)–(2.9).
Differentiating k times, we have that \(P_{n}^{\left( \alpha ,\beta ,v,k\right) }\left( t\right) \) is an eigenfunction of the hypergeometric differential operator
with
where C, U and V are the matrix entries of the operator \(D^{\left( \alpha ,\beta ,v\right) }\) given in (2.9). One has that
where \(\Lambda _{n}^{(k)}=\Lambda _{n}+kU+k(k-1)I\), with \(\Lambda _{n}\) given in (2.10). One has, in particular, the standard expression for the eigenvalue shown in [4, Proposition 3.3], \(\Lambda _{n}^{(k)}=-(n-k)(n-k-1)I-(n-k)U^{(k)}-V\). More explicitly,
Remark 5.1
One notices that \(D^{\left( \alpha ,\beta ,v,k\right) }=D^{\left( \alpha +k,\beta +k,v\right) }.\) Thus, the derivatives are still common eigenfunctions of a hypergeometric operator with diagonal matrix eigenvalues \(\Lambda ^{(k)}_n\), with no repetition in their entries.
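The scalar counterpart of this parameter shift is the classical identity \(\frac{d}{dx}p_{n}^{(\alpha ,\beta )}(x)=\tfrac{1}{2}(n+\alpha +\beta +1)\,p_{n-1}^{(\alpha +1,\beta +1)}(x)\): differentiation raises both parameters by one. The sketch below verifies this in exact arithmetic for integer parameters (a scalar sanity check only; the matrix case is the content of this section), building the Jacobi polynomials from their classical finite-sum representation:

```python
from fractions import Fraction
from math import comb

def polymul(p, q):
    out = [Fraction(0)] * max(len(p) + len(q) - 1, 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def polypow(p, k):
    out = [Fraction(1)]
    for _ in range(k):
        out = polymul(out, p)
    return out

def deriv(p):
    return [i * c for i, c in enumerate(p)][1:]

def jacobi(n, a, b):
    # P_n^{(a,b)}(x) = sum_k C(n+a, n-k) C(n+b, k) ((x-1)/2)^k ((x+1)/2)^(n-k),
    # for integer a, b >= 0, as an exact coefficient list in x
    coeffs = [Fraction(0)] * (n + 1)
    for k in range(n + 1):
        c = Fraction(comb(n + a, n - k) * comb(n + b, k), 2 ** n)
        term = polymul(polypow([-1, 1], k), polypow([1, 1], n - k))
        for i, t in enumerate(term):
            coeffs[i] += c * t
    return coeffs

# d/dx P_n^{(a,b)} = (n+a+b+1)/2 * P_{n-1}^{(a+1,b+1)}
for a in (0, 1, 2):
    for b in (0, 3):
        for n in range(1, 6):
            lhs = deriv(jacobi(n, a, b))
            rhs = [Fraction(n + a + b + 1, 2) * c for c in jacobi(n - 1, a + 1, b + 1)]
            assert lhs == rhs
```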
Proposition 5.2
As in (2.11), we have the following explicit expression for the sequence of polynomials \( \left( P_{n}^{\left( \alpha ,\beta ,v,k\right) }\right) _{n\ge k}\) in terms of hypergeometric functions
where \(C^{(k)}\), \(U^{(k)}\) and V are the entries of the differential operator in (5.2) and \(\lambda _{n}^{(k)}\) and \(\mu _n^{(k)}\) are the diagonal entries of the matrix eigenvalue \(\Lambda _n^{(k)}\) given in (5.3).
We include the proof for completeness.
Proof
Indeed, the polynomials \(\left( P_{n}^{\left( \alpha ,\beta ,v,k\right) }\right) _{n\ge k}\) are common eigenfunctions of the matrix hypergeometric-type operator (5.2) with diagonal eigenvalue \(\Lambda _{n}^{(k)}\).
The fact that the eigenvalue is diagonal implies that the matrix equation can be written as two vectorial hypergeometric equations as in [44, Theorem 5], and the solutions of these equations are the columns of \(P_{n}^{\left( \alpha ,\beta ,v,k\right) }\). Since the eigenvalues \(3+\alpha +k\) and \(1+\alpha +k\) of the matrix \(C^{(k)}\) are not nonpositive integers for any \(k\ge 1\), these solutions are hypergeometric vector functions.
Moreover, the vectorial functions are polynomials of degree \(n-k\), since the factor \(\left( (n-k)(n-k-1)I+(n-k)U^{(k)}+V+\lambda _{n}^{(k)}I\right) =-\Lambda _n^{(k)}+\lambda _{n}^{(k)}I\) appearing in the expression of \(\left[ C^{(k)},U^{(k)},V+\lambda _{n}^{(k)}I\right] _{n-k+1}\) (see (2.12)) makes its first column equal to zero; analogously for the second column of \(\left[ C^{(k)},U^{(k)},V+\mu _{n}^{(k)}I\right] _{n-k+1}\).
The matrices \(\left[ C^{(k)},U^{(k)},V+\mu _{n}^{(k)}I\right] _{n-k}\) and \(\left[ C^{(k)},U^{(k)},V+\lambda _{n}^{(k)}I\right] _{n-k}\) are non-singular, since \(\lambda _{q}^{(k)}\ne \mu _{\ell }^{(k)}\), \(\lambda _{q}^{(k)}\ne \lambda _{\ell }^{(k)}\) and \(\mu _{q}^{(k)}\ne \mu _{\ell }^{(k)}\) for all \(q\ne \ell \). \(\square \)
Proposition 5.3
Let \(\alpha ,\beta >-(k+1)\) and \(|\alpha -\beta |<|v|<\alpha +\beta +2\left( k+1\right) \). We write
with
Then, \(W^{(k)}\) is an irreducible weight matrix and the differential hypergeometric operator \(D^{\left( k\right) }\) in (5.2 ) is symmetric with respect to the weight matrix \(W^{(k)}\). Moreover, it holds that \(W^{\left( k\right) }(t)=W^{\left( \alpha +k,\beta +k,v\right) }(t)\).
Proof
Taking into account Remark 5.1 and the fact that \(W^{\left( k\right) }(t)=W^{\left( \alpha +k,\beta +k,v\right) }(t)\), from Proposition 4.1 in [4], one has that \(D^{\left( \alpha +k,\beta +k,v\right) }\) is symmetric with respect to \(W^{\left( \alpha +k,\beta +k,v\right) }\) and \(W^{\left( \alpha +k,\beta +k,v\right) }\) is an irreducible weight matrix if and only if \(\alpha +k\) and \(\beta +k\) satisfy \( \alpha +k>-1\), \(\beta +k>-1\) and \(|\left( \alpha +k\right) -\left( \beta +k\right) |<|v|<\left( \alpha +k\right) +\left( \beta +k\right) +2.\) \(\square \)
We will use the following Pearson equation to prove that the sequence of polynomials \( \left( P_{n}^{\left( \alpha ,\beta ,v,k\right) }\right) _{n\ge k} \) is orthogonal with respect to \( W^{(k)}\).
Theorem 5.4
The weight matrix \(W^{(k)}\) satisfies the following Pearson equation:
where
Proof
Substituting the expressions of \(\Phi ^{(k)}\left( t\right) \) and \(\Psi ^{(k)}\left( t\right) \) into (5.6) and differentiating, we obtain
The derivative of \(W^{(k)}\left( t\right) \) is
Hence, the left-hand side of (5.12) is the product of a polynomial of degree five and \(t^{\alpha +k-1}\left( 1-t\right) ^{\beta +k-1}\). Therefore, equating the entries of this polynomial to zero, taking into account (5.10) and the equality \(W_{0}^{(k)} {\mathscr {A}}_{0}^{k}=0\), it only remains to verify the identities below, which follow by straightforward computations.
\(\square \)
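In the scalar case, the Pearson equation for the Jacobi-type weight \(w_k(t)=t^{\alpha +k}(1-t)^{\beta +k}\) on \((0,1)\) reads \(\bigl(\varphi (t)w_k(t)\bigr)'=w_k(t)\psi _k(t)\) with \(\varphi (t)=t(1-t)\) and \(\psi _k(t)=(\alpha +k+1)-(\alpha +\beta +2k+2)t\). The sketch below (a scalar analogue only, for integer exponents, with illustrative helper names) verifies this identity exactly as an equality of polynomial coefficient lists:

```python
def polymul(p, q):
    out = [0] * max(len(p) + len(q) - 1, 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def polypow(p, k):
    out = [1]
    for _ in range(k):
        out = polymul(out, p)
    return out

def deriv(p):
    return [i * c for i, c in enumerate(p)][1:]

def check_pearson(a, b):
    # w(t) = t^a (1-t)^b as a coefficient list; phi(t) = t(1-t);
    # psi(t) = (a+1) - (a+b+2) t ; claim: (phi * w)' = w * psi
    w = polymul([0] * a + [1], polypow([1, -1], b))
    lhs = deriv(polymul([0, 1, -1], w))
    rhs = polymul(w, [a + 1, -(a + b + 2)])
    return lhs == rhs

assert all(check_pearson(a, b) for a in range(5) for b in range(5))
```

The matrix Pearson equation (5.6) plays exactly this role for \(W^{(k)}\), with \(\Phi ^{(k)}\) and \(\Psi ^{(k)}\) replacing \(\varphi \) and \(\psi _k\).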
Remark 5.5
Let us consider the matrix-valued functions \(W^{\left( \alpha ,\beta ,v,k\right) }\left( t\right) =W^{(k)}(t)\), \(\Phi ^{(k)}(t)\) and \(\Psi ^{(k)}(t)\), \(k\in {\mathbb {N}}\), defined in (5.5) and Theorem 5.4, respectively. Then, by straightforward computations, one can verify the following identities:
Taking into account that \(\deg \left( \Phi ^{(k)}(t)\right) =2\) and \(\deg \left( \Psi ^{(k)}(t)\right) =1\), we obtain from [5, Corollary 3.10] the following:
Corollary 5.6
The sequence of polynomials \(\left( P_{n}^{\left( \alpha ,\beta ,v,k\right) }\right) _{n\ge k}\) is orthogonal with respect to the weight matrix \(W^{(k)} =W^{\left( \alpha ,\beta ,v,k-1\right) }\left( t\right) \Phi ^{(k-1)}\left( t\right) .\)
The following results are obtained similarly to Theorem 3.1 and Corollary 3.3.
Proposition 5.7
Let \(W^{(k)}(t)\) be defined as in (5.5). A Rodrigues formula for the sequence of polynomials \(\left( P_{n}^{\left( \alpha ,\beta ,v,k\right) }\right) _{n\ge k}\) is
Corollary 5.8
Let the matrix-valued function \(W^{(k)}(t)\) and the matrices \(R^{\left( \alpha ,\beta ,v\right) }_{n-k,2}\), \(R^{\left( \alpha ,\beta ,v\right) }_{n-k,1}\), and \(R^{\left( \alpha ,\beta ,v\right) }_{n-k,0}\) be defined as in (5.5) and (3.2). From the Rodrigues formula, we get the explicit expressions for the sequence of polynomials \(\left( P_{n}^{\left( \alpha ,\beta ,v,k\right) }\right) _{n\ge k}\) in terms of the classical Jacobi polynomials \(p_{n}^{(\alpha ,\beta )}(t)\),
and
with \({\mathscr {C}}_{n-k,i}^{\left( \alpha +k,\beta +k,v\right) }\), \(i=0,1,2\), given by (3.6).
Proposition 5.9
The orthogonal monic polynomials \(\left( P_{n}^{\left( \alpha ,\beta ,v,k\right) }\right) _{n\ge k}\) satisfy the three-term recurrence relation
with
The explicit expressions of \(B_{n}^{\left( \alpha ,\beta ,v\right) }\) and \(A_{n}^{\left( \alpha ,\beta ,v\right) }\) are given in (2.14)-(2.16).
Considering that \(W^{\left( k\right) }(t)=W^{\left( \alpha +k,\beta +k,v\right) }(t)\) (see Proposition 5.3), the previous recurrence follows directly from (2.3). Nevertheless, we include the following proof for completeness.
Proof
If we write \(P_{n}^{\left( \alpha ,\beta ,v,k\right) } (t)=\sum _{s=0} ^{n-k}{\mathcal {P}}_{n-k}^{s}t^{s}\), from (5.4), we have the following explicit expressions,
If we consider the coefficient of order \(n-k\) and \(n-k-1\) in the three-term recurrence relation, we have,
respectively. Comparing with the expressions of \(B_{n-k}^{\left( \alpha +k ,\beta +k ,v\right) }\) and \(A_{n-k}^{\left( \alpha +k ,\beta +k ,v\right) }\) obtained by the appropriate substitutions in (2.14)–(2.16), the proposition follows. \(\square \)
6 Shift Operators
In this section, we use the Pearson equation (5.6) to give explicit lowering and raising operators for the monic n-degree polynomials \(P_{n+k}^{\left( \alpha ,\beta ,v,k\right) }\left( t\right) \), \(n\ge 0\), defined in (5.1). Moreover, from the existence of the shift operators, we deduce a Rodrigues formula for the sequence of derivatives \(\left( P_{n+k}^{\left( \alpha ,\beta ,v,k\right) }\right) _{n\ge 0}\), and we find a matrix-valued differential operator for which these matrix-valued polynomials are eigenfunctions. In what follows, we consider the matrix-valued functions \(W^{(k)}\left( t\right) \), \(\Phi ^{(k)}\left( t\right) \) and \(\Psi ^{(k)}\left( t\right) \), \(k\in {\mathbb {N}}\), as defined in Theorem 5.4.
For any pair of matrix-valued functions P and Q, we denote
Proposition 6.1
Let \(\eta ^{(k)}\) be the first-order matrix-valued right differential operator
Then, \(\dfrac{\mathrm{d}}{\mathrm{d}t}:L^{2}\left( W^{(k)}\right) \rightarrow L^{2}\left( W^{(k+1)}\right) \) and \(\eta ^{(k)}:L^{2}\left( W^{(k+1)}\right) \rightarrow L^{2}\left( W^{(k)}\right) \) satisfy
Proof
From \(\left\langle \dfrac{\mathrm{d}P}{\mathrm{d}t},Q\right\rangle _{k+1}=\displaystyle \int _{0}^{1}\dfrac{\mathrm{d}P(t)}{\mathrm{d}t} W^{(k+1)}(t)Q^{*}(t)\mathrm{d}t,\) integrating by parts and taking into account equalities (5.13) and (5.14) in Remark 5.5, we get,
\(\square \)
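The integration-by-parts argument behind Proposition 6.1 has a transparent scalar analogue. For \(w_{a,b}(t)=t^{a}(1-t)^{b}\) on \((0,1)\), since \(w_{a+1,b+1}=\varphi \,w_{a,b}\) with \(\varphi (t)=t(1-t)\) vanishing at both endpoints, one gets \(\langle P',Q\rangle _{a+1,b+1}=\langle P,\eta Q\rangle _{a,b}\) with \(\eta Q=-(\varphi Q'+\psi Q)\), \(\psi (t)=(a+1)-(a+b+2)t\). The sketch below checks this exactly for polynomial samples (scalar analogue only; `eta` is an illustrative stand-in for the matrix operator \(\eta ^{(k)}\)):

```python
from fractions import Fraction

def polymul(p, q):
    out = [0] * max(len(p) + len(q) - 1, 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def polypow(p, k):
    out = [1]
    for _ in range(k):
        out = polymul(out, p)
    return out

def deriv(p):
    return [i * c for i, c in enumerate(p)][1:]

def integ01(p):
    # exact ∫_0^1 p(t) dt for an integer coefficient list
    return sum(Fraction(c, j + 1) for j, c in enumerate(p))

def inner(p, q, a, b):
    # <p, q> with respect to w(t) = t^a (1-t)^b on (0, 1), computed exactly
    w = polymul([0] * a + [1], polypow([1, -1], b))
    return integ01(polymul(polymul(p, q), w))

def eta(q, a, b):
    # scalar analogue of the raising operator: q -> -(phi q' + psi q)
    phi_dq = polymul([0, 1, -1], deriv(q))
    psi_q = polymul([a + 1, -(a + b + 2)], q)
    m = max(len(phi_dq), len(psi_q))
    phi_dq += [0] * (m - len(phi_dq))
    psi_q += [0] * (m - len(psi_q))
    return [-(x + y) for x, y in zip(phi_dq, psi_q)]

# adjoint relation between d/dt and eta, across the shifted weights
P, Q = [0, 2, 0, 1], [1, -3, 5]      # sample polynomials 2t + t^3 and 1 - 3t + 5t^2
for a in range(3):
    for b in range(3):
        assert inner(deriv(P), Q, a + 1, b + 1) == inner(P, eta(Q, a, b), a, b)
```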
Lemma 6.2
The following identity holds true
for a given \(k\ge 0\), where
Proof
It holds that \(I\eta ^{(k+n-1)}\cdots \eta ^{(k+1)}\eta ^{(k)}\) is a polynomial of degree n. From the definition of the monic sequence of derivatives in (5.1), one has
Thus, Proposition 6.1 implies that \(P_{n+k}^{(\alpha ,\beta ,v,k+1)}\eta ^{(k)}\) is a multiple of \(P_{n+k}^{(\alpha ,\beta ,v,k)}\).
Therefore, applying the raising operators \(\eta ^{(k+n-1)}\cdots \eta ^{(k+1)}\eta ^{(k)}\) to \(P_{n+k}^{(\alpha ,\beta ,v,k+n)}=I\), we get a multiple of \(P_{n+k}^{\left( \alpha ,\beta ,v,k\right) }.\) For the leading coefficient \({C}_{n}^{k}\) of the polynomial \(I\eta ^{(k+n-1)}\cdots \eta ^{(k+1)} \eta ^{(k)}\), one obtains the expression
The diagonal matrices \({\mathscr {A}}_2^k\) and \({\mathscr {B}}_1^k\) are defined in (5.7) and (5.10). Then, by replacing \({\mathscr {B}}_{1}^{k}=\left( \alpha +\beta +4+2k\right) {\mathscr {A}}_{2}^{k}\) in the identity above, we have
Hence, the proof follows.
Note that \({\mathcal {C}}_{n}^{k}\) is non-singular since \(|\alpha -\beta |<|v|<\alpha +\beta +2\left( k+1\right) .\) \(\square \)
From the proposition and the lemma above, we obtain another expression for the Rodrigues formula.
Proposition 6.3
The polynomials \(\left( P_{n+k}^{\left( \alpha ,\beta ,v,k\right) }\right) _{n\ge 0}\) satisfy the following Rodrigues formula:
where the matrices \({\mathcal {C}}_n^k\) are given by the expression in (6.2).
Proof
Let Q be a matrix-valued function and \(\eta ^{(k)}\) the raising operator in (6.1), then
Using the identities (5.13) and (5.14), we obtain
Iterating, it gives
Now, taking \(Q\left( t\right) =I\,\) and using Lemma 6.2 we have
\(\square \)
Corollary 6.4
Let \(W^{(k)}\left( t\right) \) be the weight matrix (5.5). Then, the differential operator
is symmetric with respect to \(W^{(k)}\left( t\right) \) for all \(k\in {\mathbb {N}} _{0}.\) Moreover, the polynomials \(\left( P_{n+k}^{\left( \alpha ,\beta ,v,k\right) } \right) _{n\ge 0}\) are eigenfunctions of the operator \(E^{(k)}\) with eigenvalue
where \({\mathscr {A}}^{k}_{2}\) is given by (5.7).
Proof
From Proposition 6.1 and the factorization \(E^{(k)}=\dfrac{\mathrm{d}}{\mathrm{d}t}\circ \eta ^{(k)}\), it follows directly that \(E^{(k)}\) is symmetric with respect to \( W^{(k)}.\)
The eigenvalue is obtained by looking at the leading coefficients of \(\Phi ^{(k)}(t)\) and \(\Psi ^{(k)}(t)\) in (5.6). Thus, we obtain \(\Lambda _{n}\left( E^{(k)}\right) =n\left( n-1\right) {\mathscr {A}}^{k}_{2}+n{\mathscr {B}}^{k}_{1}=n(n+\alpha +\beta +3+2k){\mathscr {A}}^{k}_{2}\). \(\square \)
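This leading-coefficient computation of the eigenvalue mirrors the scalar Jacobi differential equation \((1-x^{2})y''+\bigl(\beta -\alpha -(\alpha +\beta +2)x\bigr)y'=-n(n+\alpha +\beta +1)y\) for \(y=p_{n}^{(\alpha ,\beta )}\), where the eigenvalue likewise comes from the coefficients of the second- and first-order terms. The sketch below verifies it exactly for integer parameters (a scalar analogue only, not the matrix operator \(E^{(k)}\)):

```python
from fractions import Fraction
from math import comb

def polymul(p, q):
    out = [Fraction(0)] * max(len(p) + len(q) - 1, 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def polypow(p, k):
    out = [Fraction(1)]
    for _ in range(k):
        out = polymul(out, p)
    return out

def deriv(p):
    return [i * c for i, c in enumerate(p)][1:]

def jacobi(n, a, b):
    # P_n^{(a,b)}(x) via the classical sum, exact for integer a, b >= 0
    coeffs = [Fraction(0)] * (n + 1)
    for k in range(n + 1):
        c = Fraction(comb(n + a, n - k) * comb(n + b, k), 2 ** n)
        term = polymul(polypow([-1, 1], k), polypow([1, 1], n - k))
        for i, t in enumerate(term):
            coeffs[i] += c * t
    return coeffs

def padded(p, m):
    return list(p) + [Fraction(0)] * (m - len(p))

for a in (0, 1, 3):
    for b in (0, 2):
        for n in range(6):
            y = jacobi(n, a, b)
            lhs1 = polymul([1, 0, -1], deriv(deriv(y)))      # (1 - x^2) y''
            lhs2 = polymul([b - a, -(a + b + 2)], deriv(y))  # (b - a - (a+b+2)x) y'
            m = n + 1
            lhs = [u + v for u, v in zip(padded(lhs1, m), padded(lhs2, m))]
            rhs = [-n * (n + a + b + 1) * c for c in y]
            assert padded(lhs, m) == rhs
```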
Remark 6.5
The operators \(E^{(k)}\) and \(D^{\left( k\right) }\) in (5.2) commute. This follows from the fact that the corresponding eigenvalues \(\Lambda _{n}\left( E^{(k)}\right) \) and \(\Lambda ^{(k)} _{n+k}\) in (5.3) commute, and that the linear map assigning to each differential operator in the algebra \(D(W^{(k)})\) its sequence of eigenvalues is an isomorphism (see [33, Propositions 2.6 and 2.8]).
Remark 6.6
The Darboux transform \({\widetilde{E}}^{(k)}= \eta ^{(k)} \circ \dfrac{\mathrm{d}}{\mathrm{d}t}\) of the operator \(E^{(k)}\) is not symmetric with respect to \(W^{(k)}\); however, it is symmetric with respect to \(W^{(k+1)}\). Indeed,
In fact, if we substitute the coefficient of the second derivative in the first symmetry condition in (2.2), we obtain
which does not hold. Taking the leading coefficient \(W^{(k)}_2\) of \({{\widetilde{W}}}^{(\alpha ,\beta ,v,k)}\) in (5.5), one has in particular
The second statement follows from Proposition 6.1.
7 The Algebra \(D\left( W\right) \)
In this section, we will discuss some properties of the structure of the algebra of matrix differential operators having as eigenfunctions a sequence of polynomials \(\left( P_{n}\right) _{n\ge 0}\), orthogonal with respect to the weight matrix \(W=W^{\left( \alpha ,\beta ,v\right) }\), i.e.,
The definition of D(W) does not depend on the particular sequence of orthogonal polynomials (see [33, Corollary 2.5]).
Theorem 7.1
Consider the weight matrix function \(W=W^{(\alpha ,\beta ,v)}(t)\). Then, the differential operators of order at most two in D(W) are of the form
where
Proof
Let \(\left( P_{n}^{\left( \alpha ,\beta ,v\right) }\right) _{n\ge 0}\) be the monic sequence of orthogonal polynomials with respect to \(W^{\left( \alpha ,\beta ,v\right) }.\) The polynomial \(P_{n}^{\left( \alpha ,\beta ,v\right) }\) is an eigenfunction of the operator D in (7.1) if
with \(\Lambda _{n}=n\left( n-1\right) {\mathcal {A}}_{2}+n{\mathcal {B}}_{1}+{\mathcal {C}}_{0}.\) This equation holds if and only if
where \({\mathcal {P}}_{n}^{k}\) denotes the \(k\)-th coefficient of \(P_{n}, \ k=0,1,\ldots ,n.\)
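In the scalar case, the pattern \(\Lambda _{n}=n(n-1){\mathcal {A}}_{2}+n{\mathcal {B}}_{1}+{\mathcal {C}}_{0}\) reduces to the classical quadratic eigenvalue of a hypergeometric operator. A scalar sympy sanity check with the classical Jacobi equation on \([-1,1]\) (an analogue only, not the matrix operator D of (7.1)):

```python
import sympy as sp

x, a, b = sp.symbols('x a b')

# Classical Jacobi operator: (1 - x^2) y'' + (b - a - (a + b + 2) x) y'.
# Its eigenvalue on P_n^(a,b) is -n(n + a + b + 1) = n(n-1)(-1) + n(-(a+b+2)),
# matching the quadratic-in-n pattern Lambda_n = n(n-1) A_2 + n B_1 + C_0.
for n in range(5):
    p = sp.jacobi(n, a, b, x)
    lhs = (1 - x**2)*sp.diff(p, x, 2) + (b - a - (a + b + 2)*x)*sp.diff(p, x)
    assert sp.simplify(lhs + n*(n + a + b + 1)*p) == 0
print("Jacobi eigenvalue pattern verified for n = 0..4")
```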
To prove the theorem, we need to solve equation (7.2) for \(k=n-1\) and \(k=n-2\) to find relations between the parameters of the matrix-valued coefficients \( {\mathcal {A}}_{2},{\mathcal {A}}_{1},{\mathcal {A}}_{0},{\mathcal {B}}_{1},{\mathcal {B}}_{0}\) and \({\mathcal {C}}_{0}\).
We obtain the explicit expressions of \({\mathcal {P}}_{n}^{n-1}\) and \({\mathcal {P}}_{n}^{n-2}\) by substituting \(k=0\) in the equalities (5.15) and (5.16), respectively.
From equation (7.2) for \(k=n-1\), we get
Multiplying equation (7.3) by
one obtains a matrix polynomial in n of degree four, each of whose coefficients must vanish. From the coefficient of \(n^{4}\), we get the expression for \({\mathcal {A}}_{1}\) given above, and from the coefficient of \(n^{3}\), we get \({\mathcal {B}}_{0}\) in terms of \({\mathcal {A}}_{2}\) and \({\mathcal {B}}_{1}\). Looking at the entries \(\left( 1,1\right) ,\left( 1,2\right) \) and \(\left( 2,2\right) \) of the coefficient of \(n^{2}\), and using that \(\kappa _{v,-\beta }\) and \(\kappa _{-v,-\beta }\) are nonzero, we get \(\left( {\mathcal {C}}_{0}\right) _{12},\) \(\left( {\mathcal {C}}_{0}\right) _{11}\) and \(\left( {\mathcal {B}}_{1}\right) _{12}\), respectively, in terms of \({\mathcal {A}}_{2}\) and the other entries of \({\mathcal {C}}_{0}\) and \({\mathcal {B}}_{1}\). Finally, looking at the coefficient of n, we get the values of \(\left( {\mathcal {B}}_{1}\right) _{11},\) \( \left( {\mathcal {B}}_{1}\right) _{21},\) \(\left( {\mathcal {B}}_{1}\right) _{22}\) and \(\left( {\mathcal {C}}_0\right) _{21}\); consequently, we obtain the values of \({\mathcal {B}}_{1}\), \({\mathcal {B}}_{0}\) and \({\mathcal {C}}_{0}\) written above.
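The coefficient-matching step can be illustrated on a toy identity: a polynomial in n vanishes for every n if and only if each of its coefficients vanishes. A minimal sympy sketch with hypothetical unknowns u, v (stand-ins, not the actual entries of \({\mathcal {A}}_{2}\), \({\mathcal {B}}_{1}\), etc.):

```python
import sympy as sp

n, u, v = sp.symbols('n u v')

# Toy analogue of (7.3): a polynomial identity in n that must hold for all n.
expr = (u - 1)*n**2 + (v - u)*n + (v - 1)

# Extract the coefficients of the polynomial in n and equate each to zero.
coeffs = sp.Poly(expr, n).all_coeffs()
sol = sp.solve(coeffs, [u, v], dict=True)[0]

assert sol[u] == 1 and sol[v] == 1
print("coefficient matching gives u = v = 1")
```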
Analogously, from equation (7.2) for \(k=n-2\), we obtain
Multiplying equation (7.4) by
one obtains a matrix polynomial in n of degree eight, each of whose coefficients must vanish. We get the expression of \( {\mathcal {A}}_{0}\) from the coefficient of \(n^{8}\).
Thus, if we replace the expressions of \({\mathcal {A}}_{0},\ {\mathcal {A}}_{1},\ {\mathcal {B}}_{1},\ {\mathcal {B}}_{0}\) and the entries \(\left( 1,1\right) ,\) \(\left( 1,2\right) \) and \(\left( 2,1\right) \) of \({\mathcal {C}}_0\) in (7.3) and (7.4), both equations hold true.
Let \({\mathcal {D}}_{2}\) be the complex vector space of differential operators in D(W) of order at most two. We have already proved that \(\dim {\mathcal {D}}_{2}\le 5\).
If D is symmetric, then \(D\in D\left( W\right) \). Using the symmetry equations in (2.2), one verifies that the operator D in (7.1) is symmetric with respect to W if and only if \(a,d,e\in {\mathbb {R}} \) and the condition
holds true. Indeed, writing \(W(t)=W^{\left( \alpha ,\beta ,v\right) }=W_2t^2+W_1t+W_0\), from the first equation of symmetry in (2.2), we have that \(W_{2}{\mathcal {A}}_{2}^{*}-{\mathcal {A}}_{2}W_{2}=0\), i.e.,
where \(\mathrm{Im}(z)\) denotes the imaginary part of a complex number z. Since \(\kappa _{v,\beta }+2>0\) by the restrictions on the parameters \(\alpha \), \(\beta \) and v in the definition of \(W^{(\alpha ,\beta ,v)}\) in (2.4), equation (7.6) forces \(a,d\in {\mathbb {R}} \) and condition (7.5).
In addition, from the third symmetry equation in (2.2), we have that \(e\in {\mathbb {R}}\). Thus, there exist at least five linearly independent symmetric operators of order at most two in D(W), and therefore \(\dim {\mathcal {D}}_{2}=5\). \(\square \)
By taking as the only nonzero parameters \(a=1\) and \(d=1\), respectively, in the expression of the operator in (7.1), we write the operators:
Analogously, by choosing as nonzero parameters \(c =1\), \(b=-\dfrac{\kappa _{v,-\beta }(\kappa _{-v,\beta }+2)}{\kappa _{-v,-\beta }(\kappa _{v,\beta }+2)}\), and \(c =i\), \(b=i\dfrac{\kappa _{v,-\beta }(\kappa _{-v,\beta }+2)}{\kappa _{-v,-\beta }(\kappa _{v,\beta }+2)}\), respectively, we define the operators:
One has the following:
Corollary 7.2
The set of symmetric operators \(\left\{ D_{1},D_{2},D_{3},D_{4},I\right\} \) is a basis of the space of differential operators of order at most two in D(W). Moreover, the corresponding eigenvalues for the differential operators \( D_{1},D_{2},D_{3}\) and \(D_{4}\) are
Corollary 7.3
The differential operators appearing in (2.7) and (6.3) are \( D^{\left( \alpha ,\beta ,v\right) }=-D_{1}-D_{2}\) and \(E^{(0)}=-\dfrac{\kappa _{v,\beta }+4}{\kappa _{v,\beta }+2}D_{1}-\dfrac{\kappa _{-v,\beta }+4}{\kappa _{-v,\beta }+2}D_{2}\) respectively.
Corollary 7.4
There are no operators of order one in the algebra D(W).
Proof
Suppose that there exists a differential operator of order one in \(D(W)\); then it can be written as \(D= aD_{1}+bD_{2}+cD_{3}+d(iD_{4})+eI\), with \(a,b,c,d,e \in {\mathbb {R}}\). Equating to zero the matrix-valued coefficient of \(\dfrac{\mathrm{d}^{2}}{\mathrm{d}t^{2}}\), one obtains:
Therefore \(a=b=c=d=0.\) \(\square \)
Corollary 7.5
The algebra \(D\left( W\right) \) is not commutative.
Proof
Using the isomorphism between the algebra of differential operators and the algebra of matrix-valued functions of n generated by the eigenvalues associated with these operators, we have that \(D_{1}D_{3}\ne D_{3}D_{1}\), since \(\Lambda _{n}\left( D_{1}\right) \Lambda _{n}\left( D_{3}\right) \ne \Lambda _{n}\left( D_{3}\right) \Lambda _{n}\left( D_{1}\right) \). \(\square \)
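The mechanism behind this proof is elementary matrix algebra: two matrix-valued sequences need not commute, and the isomorphism transports that failure back to the operators. A hypothetical sympy illustration (the matrices below are stand-ins, not the actual eigenvalues \(\Lambda _{n}(D_{1})\), \(\Lambda _{n}(D_{3})\)):

```python
import sympy as sp

n = sp.symbols('n')

# Hypothetical 2x2 eigenvalue-like matrices depending on n (stand-ins only).
A = sp.Matrix([[n, 1], [0, 0]])
B = sp.Matrix([[0, 0], [1, n]])

# Their commutator is nonzero, so the algebra they generate is non-commutative;
# by the eigenvalue isomorphism the same conclusion transfers to D(W).
assert sp.simplify(A*B - B*A) != sp.zeros(2, 2)
print("A*B != B*A")
```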
Remark 7.6
In [42], the authors study the algebra \(D\left( W^{\left( p,q\right) }\right) \), where \(W^{\left( p,q\right) }\) is, for \(p\ne \dfrac{q}{2}\), the irreducible weight matrix
Let us denote by \(D_{1}^{\left( p,q\right) },D_{2}^{\left( p,q\right) },D_{3}^{\left( p,q\right) }\) and \(D_{4}^{\left( p,q\right) }\) the differential operators appearing in [42]. Then, taking \(\alpha =\beta =\dfrac{q}{2}-1\) in (2.4) and writing \(v=2p-q\),
we have the following relations with the operators \(D_i,\ i=1,2,3,4\), defined above:
References
Atkinson, F.V.: Discrete and Continuous Boundary Problems. Academic Press, New York (1964)
Berezanskii, J.M.: Expansions in eigenfunctions of selfadjoint operators, Transl. Math. Monographs, AMS 17 (1968)
Borrego, J., Castro, M., Durán, A.J.: Orthogonal matrix polynomials satisfying differential equations with recurrence coefficients having non-scalar limits. Integral Transforms Spec. Funct. 23, 685–700 (2012)
Calderón, C., González, Y., Pacharoni, I., Simondi, S., Zurrián, I.: \(2\times 2\) hypergeometric operators with diagonal eigenvalues. J. Approx. Theory, 248, 105299, 17pp (2019)
Cantero, M.J., Moral, L., Velázquez, L.: Matrix orthogonal polynomials whose derivatives are also orthogonal. J. Approx. Theory 146(2), 174–211 (2007)
Casper, W.R.: Elementary examples of solutions to Bochner’s problem for matrix differential operators. J. Approx. Theory 229, 36–71 (2018)
Casper, W.R.: The symmetric \(2\times 2\) hypergeometric matrix differential operators. arXiv:1907.12703 (2019)
Casper, W.R., Yakimov, M.: The matrix Bochner problem. Am. J. Math. (2021) To appear. arXiv:1803.04405
Castro, M.M., Grünbaum, F.A.: The algebra of differential operators associated to a family of matrix-valued orthogonal polynomials: five instructive examples. Int. Math. Res. Not. 7, 1–33 (2006)
Castro, M., Grünbaum, F.A.: The Darboux process and time-and-band limiting for matrix orthogonal polynomials. Linear Algebra Appl. 487, 328–341 (2015)
Castro, M., Grünbaum, F.A.: Time-and-band limiting for matrix orthogonal polynomials of Jacobi type. Random Matrices: Theory Appl., 06(04):1740001, 12pp (2017)
Castro, M., Grünbaum, F.A., Pacharoni, I., Zurrián, I.: A further look at time-and-band limiting for matrix orthogonal polynomials. In Nashed, M., Li, X. (Eds.), Frontiers in Orthogonal Polynomials and \(q\)-Series. 139–153, Contemp. Math. Appl. Monogr. Expo. Lect. Notes, 1, World Sci Publ., Hackensack, NJ, (2018)
Duistermaat, J.J., Grünbaum, F.A.: Differential equations in the spectral parameter. Commun. Math. Phys. 103(2), 177–240 (1986)
Durán, A.J.: Markov’s theorem for orthogonal matrix polynomials. Can. J. Math. 48, 1180–1195 (1996)
Durán, A.J.: Matrix inner product having a matrix symmetric second-order differential operator. Rocky Mt. J. Math. 27(2), 585–600 (1997)
Durán, A.J.: Generating orthogonal matrix polynomials satisfying second order differential equations from a trio of triangular matrices. J. Approx. Theory 161, 88–113 (2009)
Durán, A.J.: A method to find weight matrices having symmetric second-order differential operators with matrix leading coefficients. Constr. Approx. 29, 181–205 (2009)
Durán, A.J.: Rodrigues’ formulas for orthogonal matrix polynomials satisfying second-order differential equations. Int. Math. Res. Not 5, 824–855 (2010)
Durán, A.J., Grünbaum, F.A.: Orthogonal matrix polynomials satisfying second-order differential equations. Int. Math. Res. Not. 10, 461–484 (2004)
Durán, A.J., Grünbaum, F.A.: Structural formulas for orthogonal matrix polynomials satisfying second order differential equations, I. Constr. Approx. 22, 255–271 (2005)
Durán, A.J., Grünbaum, F.A.: Matrix orthogonal polynomials satisfying second order differential equations: coping without help from group representation theory. J. Approx. Theory 148, 35–48 (2007)
Durán, A.J., de la Iglesia, M.D.: Some examples of orthogonal matrix polynomials satisfying odd order differential equations. J. Approx. Theory 150, 153–174 (2008)
Durán, A.J., López, P.: Orthogonal matrix polynomials. Laredo Lectures on Orthogonal Polynomials and Special Functions, 13–44, Adv. Theory Spec. Funct. Orthogonal Polynomials, Nova Science Publishers Hauppauge, NY (2004)
Durán, A.J., López, P.: Structural formulas for orthogonal matrix polynomials satisfying second order differential equations, II. Constr. Approx. 26(1), 29–47 (2007)
Grünbaum, F.A.: Matrix-valued Jacobi polynomials. Bull. Sci. Math. 127(3), 207–214 (2003)
Grünbaum, F.A., de la Iglesia, M.D.: Matrix-valued orthogonal polynomials arising from group representation theory and a family of quasi-birth-and death processes. SIAM J. Matrix Anal. Appl. 30, 741–761 (2008)
Grünbaum, F.A., Pacharoni, I., Tirao, J.: A matrix-valued solution to Bochner’s problem. J. Phys. A 34(48), 10647–10656 (2001)
Grünbaum, F.A., Pacharoni, I., Tirao, J.: Matrix-valued spherical functions associated to the complex projective plane. J. Funct. Anal. 188(2), 350–441 (2002)
Grünbaum, F.A., Pacharoni, I., Tirao, J.: Matrix-valued orthogonal polynomials of the Jacobi type. Indag. Math. (N.S.) 14(3–4), 353–366 (2003)
Grünbaum, F.A., Pacharoni, I., Zurrián, I.: Time and band limiting for matrix-valued functions, an example. SIGMA Symmetry Integr. Geom. Methods Appl. 11(044), 1–14 (2015)
Grünbaum, F.A., Pacharoni, I., Zurrián, I.: Time and band limiting for matrix-valued functions. Inverse Probl. 33(2), 1–26 (2017)
Grünbaum, F.A., Pacharoni, I., Zurrián, I.: Bispectrality and time-band-limiting: matrix-valued polynomials. Int. Math. Res. Not. 13, 4016–4036 (2020)
Grünbaum, F.A., Tirao, J.: The algebra of differential operators associated to a weight matrix. Integr Equ. Oper. Theory 58(4), 449–475 (2007)
Koelink, E., de los Ríos, A., Román, P.: Matrix-valued Gegenbauer-type polynomials. Constr. Approx., 46(3):459–487 (2017)
Koelink, E., van Pruijssen, M., Román, P.: Matrix-valued orthogonal polynomials related to (SU(2) \(\times \) SU(2), \(diag\)). Int. Math. Res. Not. 2012(24), 5673–5730 (2012)
Koelink, E., van Pruijssen, M., Román, P.: Matrix-valued orthogonal polynomials related to (SU(2) \(\times \) SU(2), SU(2)), II. PRIMS 49(2), 271–312 (2013)
Krein, M.G.: Infinite j-matrices and a matrix moment problem. Dokl. Akad. Nauk. SSSR 69(2), 125–128 (1949)
Krein, M.G.: Fundamental aspects of the representation theory of hermitian operators with deficiency index \((m, m)\). AMS Transl. Ser. 2(97), 75–143 (1971)
Miranian, L.: On classical orthogonal polynomials and differential operators. J. Phys. A Math. Gen. 38, 6379–6383 (2005)
Pacharoni, I., Román, P.: A sequence of matrix-valued orthogonal polynomials associated to spherical functions. Constr. Approx. 28(2), 127–147 (2008)
Pacharoni, I., Tirao, J.: Matrix-valued orthogonal polynomials arising from the complex projective space. Constr. Approx. 25(2), 177–192 (2006)
Pacharoni, I., Zurrián, I.: Matrix Gegenbauer polynomials: the \(2\times 2\) fundamental cases. Constr. Approx. 43(2), 253–271 (2016)
Szegö, G.: Orthogonal Polynomials. Coll. Publ. XXIII, American Mathematical Society, Providence, RI (1975)
Tirao, J.: The matrix-valued hypergeometric equation. Proc. Natl. Acad. Sci. USA 100(14), 8138–8141 (2003)
Tirao, J.: The algebra of differential operators associated to a weight matrix: a first example, Polcino Milies, César (ed.), Groups, algebras and applications. XVIII Latin American algebra colloquium, São Pedro, Brazil, August 3–8, 2009. Proceedings. Providence, RI: American Mathematical Society (AMS). Contemporary Mathematics 537, 291–324 (2011)
Tirao, J., Zurrián, I.: Reducibility of Matrix Weights. Ramanujan J. 45(2), 349–374 (2018)
Zurrián, I.: The algebra of differential operators for a Gegenbauer weight matrix. Int. Math. Res. Not. 8, 2402–2430 (2016)
Acknowledgements
The authors would like to thank Pablo Román for useful suggestions on earlier versions of the paper. The authors also thank the anonymous referees for their careful reading and remarks.
Funding
Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature.
Communicated by Rosihan M. Ali.
The research of the first author was partially supported by CONICET and SECTYP-UNCUYO Argentina Grant 06\M125. The research of the second author was partially supported by PGC2018-096504-B-C31 (FEDER(EU)/Ministerio de Ciencia e Innovación-Agencia Estatal de Investigación), FQM-262 and Feder-US-1254600 (FEDER(EU)/Junta de Andalucía).
Calderón, C., Castro, M.M. Structural Formulas for Matrix-Valued Orthogonal Polynomials Related to \(2\times 2\) Hypergeometric Operators. Bull. Malays. Math. Sci. Soc. 45, 697–726 (2022). https://doi.org/10.1007/s40840-021-01211-x