1 Introduction

The theory of matrix valued orthogonal polynomials (MVOPs) was initiated by Krein in the 1940s, and it has since been used in various areas of mathematics and mathematical physics. These areas include spectral theory, scattering theory, tiling problems, integrable systems, and stochastic processes. For further details and insights on these subjects, we refer to [1, 5, 7, 9, 11, 13] and the references therein.

Significant progress has been made in the past two decades toward understanding how the differential and algebraic properties of classical scalar orthogonal polynomials are extended to the matrix valued setting. A fundamental role has been played by the connection between harmonic analysis of matrix valued functions on compact symmetric pairs and MVOPs. In [8], Durán poses the problem of determining families of MVOPs which are eigenfunctions of a suitable second-order differential operator. In the scalar case, the answer to this problem is a classical result due to Bochner [2]. The only families with this property are those of Hermite, Laguerre, and Jacobi. The matrix valued setting turns out to be much more involved. The first explicit examples appeared in connection with spherical functions of the compact symmetric pair \((\textrm{SU}(3),\textrm{U}(2))\), see [12]. Following [21], a direct approach was taken in [19, 20] for the case of \((\textrm{SU}(2) \times \textrm{SU}(2), \textrm{diag})\), leading to a general set-up in the context of multiplicity free pairs [15]. In this context, certain properties of the orthogonal polynomials, such as orthogonality, recurrence relations, and differential equations, are understood in terms of the representation theory of the corresponding symmetric spaces. Recently, Casper and Yakimov solved the matrix Bochner problem [3]. This is the classification of all \(N \times N\) weight matrices W(x) whose associated MVOPs are eigenfunctions of a second-order differential operator.

For \(N \in {\mathbb {N}}\), let \(M_N({\mathbb {C}})\) be the set of complex matrices of size \(N \times N\) and let \(M_N({\mathbb {C}})[x]\) be the set of one-variable polynomials with coefficients in \(M_{N}({\mathbb {C}})\). We consider a matrix valued weight function \(W:(a,b) \rightarrow M_N({\mathbb {C}})\), where \(a,b\) could be \(\pm \infty \), such that W(x) is positive definite for all \(x\in (a,b)\) and W has finite moments of all orders. In such a case, W induces a matrix valued inner product

$$\begin{aligned} \langle P,Q \rangle =\int _a^b P(x)W(x)Q(x)^*\text {d}x \in M_N({\mathbb {C}}), \end{aligned}$$
(1.1)

such that for all \(P,Q\in M_N({\mathbb {C}})[x]\) and \(T\in M_N({\mathbb {C}})\), the following properties are satisfied

$$\begin{aligned} \langle TP,Q\rangle =T\langle P,Q\rangle , \qquad \langle P,Q\rangle ^*=\langle Q,P\rangle ,\qquad \langle P, P\rangle \ge 0. \end{aligned}$$

Moreover, \(\langle P,P \rangle = 0\) if and only if \(P=0\). Using standard arguments, it can be shown that there exists a unique sequence \((P(x,n))_n\) of monic MVOPs with respect to W in the following sense:

$$\begin{aligned} \langle P(x,n), P(x,m)\rangle = {\mathcal {H}}(n) \delta _{n,m} \end{aligned}$$
(1.2)

where the squared norm \({\mathcal {H}}(n)\) is a positive definite matrix. We will write

$$\begin{aligned} P(x,n) = x^n + X(n) x^{n-1} + Y(n)x^{n-2} + \mathrm {l.o.t.} \end{aligned}$$
(1.3)

The monic MVOPs satisfy a three-term recurrence relation:

$$\begin{aligned} xP(x,n) = P(x,n+1) + B(n) P(x,n) + C(n) P(x,n-1) \end{aligned}$$
(1.4)

where \(B(n), C(n) \in M_N({\mathbb {C}})\) and \(n \ge 1\). The coefficients of the recurrence relation are given by

$$\begin{aligned} B(n)=X(n)-X(n+1), \quad C(n)= {\mathcal {H}}(n) {\mathcal {H}}(n-1)^{-1}, \end{aligned}$$
(1.5)

where X(n) is given in (1.3) and \({\mathcal {H}}(n)\) in (1.2). Moreover, for \(n \ge 2\), we have that

$$\begin{aligned} Y(n)=Y(n+1)+B(n)X(n)+C(n). \end{aligned}$$
(1.6)
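As a sanity check of (1.2)–(1.5) in the scalar case \(N=1\), the following sketch (illustrative only; it assumes the classical Laguerre weight \(e^{-x}\) on \((0,\infty )\) and uses sympy) builds the monic orthogonal polynomials by Gram–Schmidt and verifies that \(B(n)=X(n)-X(n+1)=2n+1\) and \(C(n)={\mathcal {H}}(n){\mathcal {H}}(n-1)^{-1}=n^2\), the classical monic Laguerre recurrence coefficients.

```python
# Scalar (N = 1) sanity check of (1.5) for the classical Laguerre
# weight w(x) = exp(-x) on (0, oo).  X(n) is the x^{n-1} coefficient
# of the monic orthogonal polynomial P(x, n), as in (1.3).
import sympy as sp

x = sp.symbols('x', positive=True)
w = sp.exp(-x)

def ip(p, q):
    # scalar version of the inner product (1.1)
    return sp.integrate(sp.expand(p * q) * w, (x, 0, sp.oo))

# monic orthogonal polynomials by Gram-Schmidt on 1, x, x^2, ...
P, H = [], []
for n in range(6):
    p = x**n - sum(ip(x**n, P[j]) / H[j] * P[j] for j in range(n))
    p = sp.expand(p)
    P.append(p)
    H.append(sp.simplify(ip(p, p)))        # squared norms H(n) of (1.2)

X = [P[n].coeff(x, n - 1) for n in range(6)]
for n in range(1, 5):
    B = X[n] - X[n + 1]                    # (1.5)
    C = sp.simplify(H[n] / H[n - 1])       # (1.5)
    assert B == 2 * n + 1 and C == n ** 2  # known monic Laguerre values
```

The same moment-based construction works for matrix weights, with \({\mathcal {H}}(n)^{-1}\) replacing the scalar division.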

In [6], the authors studied difference–differential relations for a class of Hermite type MVOPs associated with the weight \(W(x)=e^{-v(x)}e^{xA}e^{xA^{*}}\), where \(x \in {\mathbb {R}}\), v(x) is a scalar polynomial of even degree, and A is a constant matrix. The approach of [6] is to get information on the MVOPs by investigating two mutually adjoint operators \({\mathcal {D}}\) and \({\mathcal {D}}^\dagger \). If v(x) is a polynomial of degree two, in addition to \({\mathcal {D}}\) and \({\mathcal {D}}^\dagger \), there exists a second-order differential operator D having the MVOPs as eigenfunctions. It turns out that \({\mathcal {D}}, {\mathcal {D}}^\dagger \), and D generate a finite dimensional Lie algebra which is isomorphic to the Lie algebra of the oscillator group. The Casimir operator for this algebra is given explicitly and the commutativity properties of this operator provide information about MVOPs. Here, we study the analogous problem for Laguerre type weights.

We will first consider a matrix valued weight supported on \([0,\infty )\) of the form

$$\begin{aligned} W^{(\nu )}_{\phi }(x) = e^{Ax} T^{(\nu )}_{\phi }(x) e^{A^*x},\qquad T^{(\nu )}_{\phi }(x) = e^{-\phi (x)} \sum _{k=1}^N \delta ^{(\nu )}_k x^{\nu +k} E_{k,k}, \end{aligned}$$
(1.7)

where \(\nu >0\), \(\delta ^{(\nu )}_k\in {\mathbb {R}}\), A is a constant matrix, \(\phi (x)\) is an analytic function, and \(E_{i,j}\) denotes the \(N\times N\) matrix whose \((k,l)\)-th entry is equal to \(\delta _{i,k}\delta _{j,l}\). Next, we introduce the first-order differential operators

$$\begin{aligned} {\mathcal {D}}= \partial _x x + x(A-1), \qquad {\mathcal {D}}^{\dagger }=-\partial _x x -(1+\nu +J)+x\phi '(x)-x. \end{aligned}$$

Our first result shows that \({\mathcal {D}}\) and \({\mathcal {D}}^\dagger \) are mutually adjoint with respect to the matrix valued inner product associated with \(W^{(\nu )}_{\phi }(x)\). We are then concerned with the Lie algebra generated by \({\mathcal {D}}\) and \({\mathcal {D}}^{\dagger }\). The main result of this paper is the characterization of all analytic functions \(\phi (x)\) defined on \((0,\infty )\) such that the Lie algebra generated by \({\mathcal {D}}\) and \({\mathcal {D}}^{\dagger }\) is finite dimensional. We also obtain an explicit description and classification of these Lie algebras, up to isomorphism.

The paper is organized as follows. In Sect. 2, we recall some preliminaries about differential and difference operators and introduce the left and right Fourier algebras related to the sequence of monic MVOPs.

In Sect. 3, we introduce a Laguerre type weight \(W^{(\nu )}_{\phi }\) for a given analytic function \(\phi \) on a neighborhood of the interval \([0,\infty )\). We introduce the operators \({\mathcal {D}}\) and \({\mathcal {D}}^{\dagger }\), and we prove that they are mutually adjoint with respect to \(W^{(\nu )}_{\phi }(x)\). For the MVOPs \((P(x,n))_n\) with respect to \(W^{(\nu )}_{\phi }\), we find difference operators \(M,\, M^{\dagger }\) associated to \({\mathcal {D}},\, {\mathcal {D}}^{\dagger }\), respectively, given by the relations \((M \cdot P)(x,n) = (P\cdot {\mathcal {D}})(x,n)\) and \((M^{\dagger } \cdot P)(x,n) = (P \cdot {\mathcal {D}}^{\dagger })(x,n)\).

In Sect. 4, we study the Lie algebra \({\mathfrak {g}}_{\phi }\) generated by the differential operators \({\mathcal {D}}\) and \({\mathcal {D}}^{\dagger }\). We prove that \({\mathfrak {g}}_{\phi }\) is finite dimensional if and only if \(\phi \) is a polynomial. When \(\phi \) is a polynomial, we obtain that

$$\begin{aligned} {\mathfrak {g}}_{\phi }\cong {\left\{ \begin{array}{ll} {\mathbb {C}}^2 \oplus {\mathfrak {h}} &{} \text {for }N\ge 2 \\ {\mathbb {C}} \oplus {\mathfrak {h}} &{} \text {for }N=1 \end{array}\right. } \end{aligned}$$

where \({\mathfrak {h}}\) is a solvable Lie algebra with nilradical of codimension one. Moreover, we classify this family of Lie algebras up to isomorphism. As a byproduct, we give a partial solution to a problem proposed by Ismail in [16, Problem 24.5.2] in the case that \(\phi (x)\) is a real analytic function defined on \((0,\infty )\).

In Sect. 5, we analyze the case \(\phi (x)=x\) and we give an explicit expression for \({\mathcal {D}}\), \({\mathcal {D}}^{\dagger }\), M, and \(M^{\dagger }\). We also find a symmetric second-order differential operator D which has \((P(x,n))_n\) as eigenfunctions. We describe the Lie algebra \({\mathfrak {a}}\) generated by \({\mathcal {D}}\), \({\mathcal {D}}^{\dagger }\), and D. We prove that \({\mathfrak {a}}={\mathcal {Z}}_{{\mathfrak {a}}} \oplus [{\mathfrak {a}},{\mathfrak {a}}]\) where the dimension of the center \({\mathcal {Z}}_{{\mathfrak {a}}}\) is two and \([{\mathfrak {a}},{\mathfrak {a}}]\) is isomorphic to \(\mathfrak {sl}(2,{\mathbb {C}})\). Finally, we obtain non-abelian relations between \({\mathcal {H}}(n)\), B(n), and C(n).

1.1 Application: solution to a problem by Ismail

In [4], the authors investigated ladder operators for exponential type weights of the form \(e^{-v(x)}\), where v(x) is a suitable differentiable function. One of the results in [4] states that the Lie algebra generated by the ladder operators is finite dimensional whenever v is a polynomial. However, the converse of this result is still open. The ideas in [4] are extended to scalar Laguerre type weights in [16, Section 3.7]; in this case, both problems are open.

Our classification of all finite dimensional Lie algebras related to the Laguerre type MVOPs is closely related to [16, Problem 24.5.2]. In order to state this problem, we first consider a scalar weight \(w_1(x)=x^{\alpha }e^{-\phi (x)}\) with \(\phi (x)\) a twice continuously differentiable function on \((0,\infty )\). Let \((p_n(x))_n\) be a sequence of orthonormal polynomials satisfying a recurrence relation

$$\begin{aligned} x p_n(x)=a_{n+1}p_{n+1}(x)+ \alpha _n p_n(x) + a_{n} p_{n-1}(x), \qquad n>0, \end{aligned}$$
(1.8)

where \(p_0(x)=1\), \(p_1(x)=\frac{(x-\alpha _0)}{a_1}\). We introduce the coefficients \(\{\beta _n(x)\}_{n \in {\mathbb {N}}_0}\) by

$$\begin{aligned} \frac{\beta _n(x)}{a_n} = \frac{w_1(y)p_n(y) p_{n-1}(y)}{x-y} \Big |_{0}^{\infty } + \int _{0}^{\infty } \frac{\phi '(x)-\phi '(y)}{x-y}\, p_n(y) p_{n-1}(y) w_1(y) \,\text {d}y. \end{aligned}$$
(1.9)

Now we have a pair of n-dependent first-order differential operators

$$\begin{aligned} xL_{1,n}=\partial _x x +x \beta _{n}(x), \qquad xL_{2,n}=- \partial _x x + x \beta _{n}(x)+x\phi '(x). \end{aligned}$$
(1.10)

For a fixed n, the operators \(xL_{1,n}\), \( xL_{2,n}\) are mutually adjoint with respect to the scalar Laguerre type weight \(w_1(x)=x^{\alpha } e^{-\phi (x)}\), \(\alpha > -1\), \(x>0\). We are now ready to state [16, Problem 24.5.2]: “The Lie algebra generated by \(xL_{1,n}\) and \(xL_{2,n}\) is finite dimensional if and only if \(\phi \) is a polynomial.”

In this paper, we provide an answer to a similar problem in a more general context of MVOPs. More precisely, we consider a matrix valued weight \(W_{\phi }^{(\nu )}\) as in (1.7) and two matrix valued differential operators

$$\begin{aligned} {\mathcal {D}}_{1,n} = \partial _x x + x(A-\beta _n(x)), \quad {\mathcal {D}}_{2,n} = -\partial _x x - (1+\nu +J)+x\phi '(x)-x\beta _n(x), \end{aligned}$$

which are mutually adjoint with respect to \(W_{\phi }^{(\nu )}(x)\). In Sect. 4.3, we prove that if \(x(\phi '(x) +2\beta _n(x))\) is a real analytic non-polynomial function, then the Lie algebra generated by \({\mathcal {D}}_{1,n}\) and \({\mathcal {D}}_{2,n}\) is infinite dimensional. Conversely, if \(\phi \) is a polynomial and \(x\beta _n(x)\) is a matrix valued polynomial function, then the Lie algebra generated by \({\mathcal {D}}_{1,n}\) and \({\mathcal {D}}_{2,n}\) is finite dimensional. By specializing \(N=1\) with \(\beta _n\) as in (1.9) and \(\phi \) a real analytic function, we give a solution to [16, Problem 24.5.2] when the corresponding operators live in the right Fourier algebra. More precisely, if the differential operators \(xL_{1,n}\) and \(xL_{2,n}\) are in \({\mathcal {F}}_R(P)\), then \(x\beta _n(x)\) is a polynomial. In general, we obtain that if \(\phi \) is a polynomial, then the Lie algebra \({\mathfrak {g}}_n\) generated by \(xL_{1,n}\) and \(xL_{2,n}\) is finite dimensional. Conversely, if the operators \(xL_{1,n}\) and \(xL_{2,n}\) are in \({\mathcal {F}}_R(P)\) and the Lie algebra \({\mathfrak {g}}_n\) is finite dimensional, then the function \(\phi \) is a polynomial.

2 Preliminaries

This section presents the left and right Fourier algebras associated with the sequence of monic MVOPs, as developed by Casper and Yakimov in [3]. The results discussed in this section have been previously covered in a more comprehensive context in [3].

Let \(Q(x,n)\) be a function \(Q:{\mathbb {C}}\times {\mathbb {N}}_0 \rightarrow M_N({\mathbb {C}})\) such that, for each fixed \(n\in {\mathbb {N}}_0\), \(Q(x,n)\) is a rational function of x. A differential operator of the form

$$\begin{aligned} {\mathcal {D}}=\sum _{j=0}^n \partial _x^j F_j(x), \qquad \partial _x^j:= \tfrac{\text {d}^j}{\text {d}x^j}, \end{aligned}$$
(2.1)

where \(F_j:{\mathbb {C}}\rightarrow M_N({\mathbb {C}})\) is a rational function of x, acts on Q from the right by

$$\begin{aligned} (Q\cdot {\mathcal {D}})(x,n) = \sum _{j=0}^n (\partial _x^jQ)(x,n)\, F_j(x). \end{aligned}$$

The algebra of all differential operators of the form (2.1) will be denoted by \({\mathcal {M}}_N\). In addition to the right action by differential operators, we also consider a left action on Q by difference operators on the variable n. For \(j\in {\mathbb {Z}}\), let \(\delta ^{j}\) be the discrete operator which acts on a sequence \(A:{\mathbb {N}}_0 \rightarrow M_N({\mathbb {C}})\) by

$$\begin{aligned} (\delta ^j \cdot A)(n)=A(n+j). \end{aligned}$$

Here we assume that the value of a sequence at a negative integer is equal to zero. For given sequences \(A_{-\ell },\ldots ,A_k\), a discrete operator of the form

$$\begin{aligned} M(n)=\sum _{j=-\ell }^k A_j(n) \delta ^j, \end{aligned}$$
(2.2)

acts on Q from the left by

$$\begin{aligned} (M \cdot Q)(x,n)&= \sum _{j=-\ell }^k A_j(n) \, (\delta ^j\cdot Q)(x,n) = \sum _{j=-\ell }^k A_j(n) \, Q(x,n+j). \end{aligned}$$

We shall denote the algebra of difference operators (2.2) by \({\mathcal {N}}_N\).

We will be interested in the sequence of monic orthogonal polynomials \((P(x,n))_n\) related to a weight matrix, as in (1.2). As in [3, Definition 2.20], we define:

Definition 2.1

The left and right Fourier algebras associated with the sequence of monic orthogonal polynomials \((P(x,n))_n\) are given by:

$$\begin{aligned} \begin{aligned} {\mathcal {F}}_L(P)&=\{ M\in {\mathcal {N}}_N :\exists \, {\mathcal {D}}\in {\mathcal {M}}_N,\, M\cdot P = P\cdot {\mathcal {D}}\} \subset {\mathcal {N}}_{N},\\ {\mathcal {F}}_R(P)&=\{ {\mathcal {D}}\in {\mathcal {M}}_N :\exists \, M\in {\mathcal {N}}_N,\, M\cdot P = P\cdot {\mathcal {D}}\}\subset {\mathcal {M}}_{N}. \end{aligned} \end{aligned}$$
(2.3)

The definition of the Fourier algebras directly implies a connection between the elements of \({\mathcal {F}}_L(P)\) and \({\mathcal {F}}_R(P)\). Moreover, the map

$$\begin{aligned} \varphi :{\mathcal {F}}_L(P) \rightarrow {\mathcal {F}}_R(P),\qquad \text { defined by }\quad M\cdot P = P \cdot \varphi (M), \end{aligned}$$

is an algebra isomorphism. In [3], this map is called the generalized Fourier map. Indeed, \(M_{1}M_{2}\cdot P = P \cdot \varphi (M_{1})\varphi (M_{2})\) for all \(M_{1},M_{2}\in {\mathcal {F}}_{L}(P)\), while, by the definition of \(\varphi \), we also have \(M_{1}M_{2}\cdot P = P\cdot \varphi (M_{1}M_{2})\); hence \(\varphi (M_{1}M_{2})=\varphi (M_{1})\varphi (M_{2})\).

Remark 2.2

In this context, the three-term recurrence relation (1.4) can be written as

$$\begin{aligned} xP = P\cdot x = L\cdot P, \qquad \text {where } \quad L=\delta + B(n) + C(n)\delta ^{-1}. \end{aligned}$$

Therefore \(x\in {\mathcal {F}}_R(P)\), \(L\in {\mathcal {F}}_L(P)\) and \(\varphi (L)=x\). For every polynomial \(v \in {\mathbb {C}}[x]\), we have

$$\begin{aligned} P\cdot v(x) =P\cdot v(\varphi (L))= v(L)\cdot P. \end{aligned}$$

One of the crucial results from [3] is the existence of an adjoint operation \(\dagger \) in the Fourier algebras \({\mathcal {F}}_L(P)\) and \({\mathcal {F}}_R(P)\) as described in [3, §3.1]. To define the adjoint operation in \({\mathcal {F}}_L(P)\), we initially observe that the algebra of difference operators \({\mathcal {N}}_N\) has a \(*\)-operation defined as follows:

$$\begin{aligned} \left( \sum _{j=-\ell }^k A_j(n) \, \delta ^j \right) ^*= \sum _{j=-\ell }^k A_j(n-j)^*\, \delta ^{-j}, \end{aligned}$$
(2.4)

where \(A_j(n-j)^*\) is the conjugate transpose of \(A_j(n-j)\). Now, the adjoint of \(M\in {\mathcal {N}}_N\) is given by

$$\begin{aligned} M^\dagger = {\mathcal {H}}(n) M^*{\mathcal {H}}(n)^{-1}, \end{aligned}$$
(2.5)

where \({\mathcal {H}}(n)\) is the squared norm which we view as a difference operator of order zero. The following holds:

$$\begin{aligned} \langle (M\cdot P)(x,n),P(x,m)\rangle = \langle P(x,n),(M^\dagger \cdot P)(x,m)\rangle . \end{aligned}$$

In [3, Corollary 3.8], the authors show that every differential operator \({\mathcal {D}}\in {\mathcal {F}}_R(P)\) has a unique adjoint \({\mathcal {D}}^\dagger \in {\mathcal {F}}_R(P)\) with the property

$$\begin{aligned} \langle P\cdot {\mathcal {D}}, Q \rangle = \langle P,Q\cdot {\mathcal {D}}^\dagger \rangle , \end{aligned}$$

for all \(P,Q\in M_N({\mathbb {C}})[x]\). Moreover, \(\varphi (M^\dagger ) = \varphi (M)^\dagger \) for all \(M\in {\mathcal {F}}_L(P)\).
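To make the constructions above concrete, here is a small scalar (\(N=1\)) model of the algebra \({\mathcal {N}}_1\), the \(*\)-operation (2.4), and the adjoint (2.5). The data \(B(n)=2n+1\), \(C(n)=n^2\), \({\mathcal {H}}(n)=(n!)^2\) are the classical monic Laguerre quantities for the weight \(e^{-x}\), an assumption made purely for illustration. Since multiplication by x is symmetric, the recurrence operator L of Remark 2.2 must satisfy \(L^\dagger =L\), which the sketch verifies.

```python
# A scalar model of difference operators (2.2): an operator is stored
# as {j: A_j} with A_j a function of n (the coefficient of delta^j).
from fractions import Fraction
from math import factorial

def star(M):
    # (2.4): (sum_j A_j(n) delta^j)^* = sum_j A_j(n-j)^* delta^{-j}
    # (scalar case, so the conjugate transpose is trivial)
    return {-j: (lambda n, A=A, j=j: A(n - j)) for j, A in M.items()}

def dagger(M, H):
    # (2.5): M^dagger = H(n) M^* H(n)^{-1}
    return {j: (lambda n, A=A, j=j: H(n) * A(n) / H(n + j))
            for j, A in star(M).items()}

H = lambda n: Fraction(factorial(n) ** 2)     # squared norms for exp(-x)
# L = delta + B(n) + C(n) delta^{-1}, as in Remark 2.2 (monic Laguerre data)
L = {1: lambda n: 1, 0: lambda n: 2 * n + 1, -1: lambda n: n ** 2}

Ld = dagger(L, H)
# x is a symmetric operator, so L = phi^{-1}(x) satisfies L^dagger = L
for n in range(1, 6):
    assert all(Ld[j](n) == L[j](n) for j in (-1, 0, 1))
```

The default-argument bindings (`A=A, j=j`) freeze the loop variables in each coefficient, mirroring how each \(A_j\) is a fixed sequence.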

3 Differential operators for semi-classical Laguerre type weights

The goal of this section is two-fold. On the one hand, by assuming the existence of two mutually adjoint first-order differential operators \({\mathcal {D}}\) and \({\mathcal {D}}^\dagger \), we give explicit formulas for the associated difference operators in the left Fourier algebra \({\mathcal {F}}_L(P)\). On the other hand, we introduce an explicit Laguerre type weight of arbitrary size and we show that \({\mathcal {D}}\) and \({\mathcal {D}}^\dagger \) are mutually adjoint with respect to the inner product induced by this weight. In the rest of this paper, we will use the following \(N\times N\) constant matrices:

$$\begin{aligned} J=\sum _{k=1}^{N} k E_{k,k}, \qquad A =\sum _{k=1}^{N-1} a_{k} E_{k+1,k}, \end{aligned}$$
(3.1)

where \(a_k\in {\mathbb {C}}\). Recall that \(E_{i,j}\) denotes the \(N\times N\) matrix whose \((k,l)\)-th entry is equal to \(\delta _{i,k}\delta _{j,l}\). It is straightforward to show that

$$\begin{aligned} [J,A]=A \quad \text {and} \quad e^{xA}J e^{-xA}=J-Ax. \end{aligned}$$
(3.2)
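The identities (3.2) can be verified symbolically; the following sympy sketch (an illustration for \(N=3\) with symbolic entries \(a_1,a_2\), not a proof for general N) exploits that A is nilpotent, so the exponential series terminates.

```python
# Symbolic check of (3.2) for N = 3.
import sympy as sp

x, a1, a2 = sp.symbols('x a1 a2')
N = 3
J = sp.diag(*range(1, N + 1))              # J = sum_k k E_{k,k}
A = sp.zeros(N)
A[1, 0], A[2, 1] = a1, a2                  # A = a_1 E_{2,1} + a_2 E_{3,2}

assert J * A - A * J == A                  # [J, A] = A

# A^3 = 0, so exp(xA) is the terminating series below
E = sp.eye(N) + x * A + (x * A) ** 2 / 2
Einv = sp.eye(N) - x * A + (x * A) ** 2 / 2
assert sp.expand(E * Einv) == sp.eye(N)
assert sp.expand(E * J * Einv) == J - A * x   # e^{xA} J e^{-xA} = J - Ax
```

The second identity also follows from \(e^{xA}Je^{-xA}=J+x[A,J]+\tfrac{x^2}{2}[A,[A,J]]+\cdots \) together with \([A,J]=-A\).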

3.1 Mutually adjoint differential operators

In this subsection, we let W be a matrix valued weight with a matrix valued inner product as in (1.1). Let \(\phi \) be an analytic function such that \(\int _a^b x^n\phi (x)W(x)\text {d}x<\infty \) for all \(n\in {\mathbb {N}}_0\). We consider the first-order differential operators

$$\begin{aligned} {\mathcal {D}} = \partial _x x + x(A-1), \quad {\mathcal {D}}^\dagger = -\partial _x x - (1+\nu +J)+x\phi '(x)-x. \end{aligned}$$
(3.3)

We will be interested in weight matrices such that \({\mathcal {D}}\) and \({\mathcal {D}}^\dagger \) are mutually adjoint, i.e., \(\langle P\cdot {\mathcal {D}}, Q\rangle =\langle P, Q\cdot {\mathcal {D}}^{\dagger }\rangle \) for all \(P,Q \in M_{N}({\mathbb {C}})[x]\).

We note that

$$\begin{aligned} {\mathcal {D}}^\dagger = -{\mathcal {D}} + {\mathcal {C}} -(1+\nu ) + x\phi '(x)-2x \end{aligned}$$
(3.4)

where \({\mathcal {C}}=Ax-J\).

Lemma 3.1

Assume that \({\mathcal {D}}\) and \({\mathcal {D}}^\dagger \) are mutually adjoint operators with respect to the matrix valued inner product associated to W. Then \({\mathcal {C}}=Ax-J\) is a symmetric operator and \({\mathcal {C}}\in {\mathcal {F}}_R(P)\). Moreover, if \((P(x,n))_n\) is the sequence of monic MVOPs for W and \(M_{{\mathcal {C}}} = \varphi ^{-1}({\mathcal {C}})\), then

$$\begin{aligned} P\cdot {\mathcal {C}}=M_{{\mathcal {C}}}\cdot P, \qquad \text {where} \quad M_{{\mathcal {C}}}=\sum _{j=-1}^{1} U_j(n)\delta ^j. \end{aligned}$$

The coefficients of \(M_{\mathcal {C}}\) are:

$$\begin{aligned} U_1(n)&=A,\qquad U_0(n)=X(n)A-AX(n+1)-J,\\ U_{-1}(n)&=Y(n)A-AY(n+1) + [J,X(n)] + (AX(n+1)-X(n)A)X(n), \end{aligned}$$

where X(n) and Y(n) are given in (1.3).

Proof

By (1.1), the operator of multiplication by \(-(1+\nu ) + x\phi '(x)-2x\) is symmetric with respect to the matrix valued inner product associated to W. On the other hand, since \({\mathcal {D}}\) and \({\mathcal {D}}^\dagger \) are mutually adjoint, we have that \({\mathcal {D}}+{\mathcal {D}}^\dagger \) is symmetric. Hence

$$\begin{aligned} {\mathcal {C}} = {\mathcal {D}}+{\mathcal {D}}^\dagger +(1+\nu ) - x\phi '(x)+2x, \end{aligned}$$

is symmetric. Moreover, by [3, Theorem 3.7], we get \({\mathcal {C}}\in {\mathcal {F}}_R(P)\).

If we write

$$\begin{aligned} M_{{\mathcal {C}}}=\sum _{j=-\ell _1}^{\ell _2} U_{j}(n)\delta ^j, \qquad \ell _1,\ell _2\in {\mathbb {N}}_0, \end{aligned}$$

then by taking into account that \({\mathcal {C}}\) increases the degree of any polynomial by 1, we immediately obtain \(U_{j}(n)=0\) for \(j>1\). This implies that \(\ell _2=1\). On the other hand, since \({\mathcal {C}}\) is a symmetric operator with respect to W, we have that \(M_{{\mathcal {C}}}=M^{\dagger }_{{\mathcal {C}}}\) and so \(U_{j}(n)=0\) for \(j<-1\). Finally, the formulas for \(U_{-1}(n),U_{0}(n)\), and \(U_{1}(n)\) are obtained by direct computation from the relation \(P\cdot {\mathcal {C}}=M_{{\mathcal {C}}}\cdot P\). \(\square \)

Theorem 3.2

Suppose that the operators \({\mathcal {D}}\) and \({\mathcal {D}}^\dagger \) in (3.3) are mutually adjoint and let \(M=\varphi ^{-1}({\mathcal {D}})\) and \(M^\dagger = \varphi ^{-1}({\mathcal {D}}^\dagger )\). If \(\phi \) is a polynomial of degree k then

$$\begin{aligned} P\cdot {\mathcal {D}}=M\cdot P, \qquad M= \sum _{j=-k}^{1} A_{j}(n)\delta ^j \end{aligned}$$
(3.5)

with

$$\begin{aligned} A_1(n)&=A-1,\quad A_0(n)=n+X(n)A - A X(n+1)-B(n),\\ A_{-1}(n)&=(n-1)X(n)+Y(n)(A-1)-(A-1)Y(n+1)-A_{0}(n)X(n),\\ A_j(n)&=(v(L))_j(n), \quad -k\le j< -1, \end{aligned}$$

where B(n) is given by (1.5) and \(v(x)=-(1+\nu ) + x\phi '(x)-2x\).

Proof

Since \({\mathcal {D}}\) increases the degree of P(xn) by one, we have that

$$\begin{aligned} M:=\varphi ^{-1}({\mathcal {D}}) = \sum _{t=-\ell }^1 A_t(n)\delta ^t, \end{aligned}$$

for certain coefficients \(A_j(n)\). The formulas for \(A_{j}(n)\) with \(j=-1,0,1\) are obtained from (3.5) and the definition of \({\mathcal {D}}\) by comparing leading coefficients. By orthogonality, we have that

$$\begin{aligned} \langle M\cdot P,\delta ^j \cdot P \rangle =\sum _{t=-\ell }^1 A_{t}(n)\langle P(x,n+t),P(x,n+j) \rangle =A_{j}(n){\mathcal {H}}(n+j). \end{aligned}$$

By (3.4) \({\mathcal {D}}^{\dagger }=-{\mathcal {D}}+{\mathcal {C}}+v(x)\), and so for \(j<-1\), we have that

$$\begin{aligned} A_{j}(n)&=\langle P\cdot {\mathcal {D}},\delta ^j \cdot P \rangle {\mathcal {H}}(n+j)^{-1}=\langle P, \delta ^j \cdot P \cdot {\mathcal {D}}^{\dagger }\rangle {\mathcal {H}}(n+j)^{-1}\\ &=\langle P, \delta ^j \cdot P \cdot v(x)\rangle {\mathcal {H}}(n+j)^{-1}=\langle P\cdot v(x), \delta ^j \cdot P\rangle {\mathcal {H}}(n+j)^{-1}\\ &=\langle v(L)\cdot P, \delta ^{j}\cdot P \rangle {\mathcal {H}}(n+j)^{-1}. \end{aligned}$$

In the third equality, we have used that \(\langle P, \delta ^j \cdot P\cdot {\mathcal {D}}\rangle \) and \(\langle P, \delta ^j \cdot P\cdot {\mathcal {C}}\rangle \) vanish for \(j<-1\), and in the fourth equality, the fact that any scalar polynomial is symmetric. Since v is a polynomial of degree k, we have

$$\begin{aligned} v(L)= \sum _{j=-k}^{k} (v(L))_{j} \delta ^{j}, \end{aligned}$$

and so

$$\begin{aligned} \langle v(L)\cdot P, \delta ^{j}\cdot P \rangle =\sum _{t=-k}^k (v(L))_{t}(n)\langle P(x,n+t),P(x,n+j) \rangle =(v(L))_{j}(n){\mathcal {H}}(n+j). \end{aligned}$$

Then, we have that

$$\begin{aligned} A_{j}(n)= (v(L))_j(n) \quad \text {for }j<-1. \end{aligned}$$

We complete the proof by noting that \((v(L))_{j}(n)=0\) for \(j<-k\). \(\square \)

As a direct consequence, we obtain the following corollary.

Corollary 3.3

If \(\phi \) is a monic polynomial of degree one, then \(M=\varphi ^{-1}({\mathcal {D}})\) satisfies

$$\begin{aligned} M= A_{0}(n)+(A-1)\delta \end{aligned}$$

with \(A_{0}(n)\) as in Theorem 3.2. Moreover, in this case, we have that

$$\begin{aligned} (n-1)X(n)+Y(n)(A-1)-(A-1)Y(n+1) -\Big ( n+X(n)A - A X(n+1)-B(n)\Big )X(n) = 0. \end{aligned}$$
(3.6)

Proof

Let \(v(x)=-(1+\nu ) + x\phi '(x)-2x\). Since \(\deg (v) =\deg (\phi )< 2\), by Theorem 3.2 we obtain that \(A_{j}(n)=0\) for all \(j<-1\). On the other hand, notice that since \(\phi \) is monic, then \({\mathcal {D}}^\dagger = -\partial _x x - (1+\nu +J)\) and so \({\mathcal {D}}^\dagger \) does not increase degrees. This implies that \(A_{1}^{\dagger }=0\) and therefore

$$\begin{aligned} A_{-1}(n)=(n-1)X(n)+Y(n)(A-1)-(A-1)Y(n+1)-A_{0}(n)X(n) = 0. \end{aligned}$$

The formulas for \(A_{1}(n)\) and \(A_{0}(n)\) follow from Theorem 3.2. \(\square \)
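In the scalar case \(N=1\) (so \(A=0\)), the weight (1.7) with \(\phi (x)=x\) reduces to \(x^{\alpha }e^{-x}\) with \(\alpha =\nu +1\), and identity (3.6) can be checked against the classical monic Laguerre polynomials \(p_n=(-1)^n n!\, L_n^{(\alpha )}\). The following sympy sketch is illustrative only and keeps \(\alpha \) symbolic:

```python
# Scalar (N = 1, A = 0) sanity check of identity (3.6) using classical
# monic Laguerre polynomials for the weight x^alpha exp(-x).
import sympy as sp

x, a = sp.symbols('x alpha')

def monic(n):
    # monic Laguerre polynomial p_n = (-1)^n n! L_n^{(alpha)}
    return sp.expand((-1) ** n * sp.factorial(n) * sp.assoc_laguerre(n, a, x))

def X(n):  # coefficient of x^{n-1}, as in (1.3)
    return monic(n).coeff(x, n - 1)

def Y(n):  # coefficient of x^{n-2}, as in (1.3)
    return monic(n).coeff(x, n - 2)

for n in range(2, 7):
    B = sp.expand(X(n) - X(n + 1))   # (1.5); equals 2n + 1 + alpha here
    A0 = n - B                       # A_0(n) of Theorem 3.2 with A = 0
    lhs = (n - 1) * X(n) - Y(n) + Y(n + 1) - A0 * X(n)
    assert sp.simplify(lhs) == 0     # identity (3.6) with A = 0
```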

Remark 3.4

Since \({\mathcal {C}}\) is symmetric, we have \({\mathcal {C}}={\mathcal {C}}^\dagger \) and hence \(M_{{\mathcal {C}}}=M_{{\mathcal {C}}}^{\dagger }\). Thus, from Eq. (2.5), we have that

$$\begin{aligned} U_1(n) ={\mathcal {H}}(n) U_{-1}(n+1)^{*}{\mathcal {H}}(n+1)^{-1}, \qquad U_0(n) ={\mathcal {H}}(n)U_0(n)^{*}{\mathcal {H}}(n)^{-1}, \end{aligned}$$

and so we obtain that

$$\begin{aligned} A&={\mathcal {H}}(n)\Big (Y(n+1)A-AY(n+2)+[J,X(n+1)]\\ &\quad +(AX(n+2)-X(n+1)A)X(n+1)\Big )^{*} {\mathcal {H}}(n+1)^{-1}, \end{aligned}$$

and

$$\begin{aligned} X(n)A-AX(n+1)-J={\mathcal {H}}(n) \Big (X(n)A-AX(n+1)-J\Big )^{*} {\mathcal {H}}(n)^{-1}. \end{aligned}$$

3.2 The Laguerre type weight

Let \(\delta ^{(\nu )}_1,\ldots ,\delta ^{(\nu )}_N\) be non-zero real numbers, and let \(\phi \) be an analytic function on a neighborhood of the interval \([0,\infty )\). The matrix valued Laguerre type weight is given by

$$\begin{aligned} W^{(\nu )}_{\phi }(x) = e^{Ax} T^{(\nu )}_{\phi }(x) e^{A^*x},\qquad T^{(\nu )}_{\phi }(x) = e^{-\phi (x)} \sum _{k=1}^N \delta ^{(\nu )}_k x^{\nu +k} E_{k,k}, \end{aligned}$$
(3.7)

with support on the interval \([0,\infty )\). This is an extension of the Laguerre weight given in [18]. In the rest of the paper, we assume that W(x)P(x) has vanishing limits at the endpoints of the support for any matrix polynomial P.

Remark 3.5

Since \(\delta _{k}^{(\nu )}\ne 0\) for all k and \(e^{Ax}\) is invertible, the weight matrix \(W_{\phi }^{(\nu )}(x)\) is invertible for all \(x\in (0,\infty )\).

Proposition 3.6

Let \(A,J\in M_{N}({\mathbb {C}})\) be as in (3.1) and let \(W^{(\nu )}_{\phi }(x)\) be as in (3.7). Then, the first-order differential operators

$$\begin{aligned} {\mathcal {D}} = \partial _x x + x(A-1), \quad {\mathcal {D}}^\dagger = -\partial _x x - (1+\nu +J)+x\phi '(x)-x, \end{aligned}$$
(3.8)

are mutually adjoint with respect to \(W^{(\nu )}_{\phi }(x)\).

Proof

Let \(P,Q \in M_N({\mathbb {C}})[x]\). In order to simplify the notation, in the rest of the proof we write \(W(x):=W^{(\nu )}_{\phi }(x)\) and \(T(x):=T^{(\nu )}_{\phi }(x)\). Thus, we have that

$$\begin{aligned} \langle P \cdot {\mathcal {D}}, Q \rangle = \int _{0}^{\infty } \left( xP'(x)+xP(x)(A-1) \right) W(x) Q^{*}(x) \text {d}x. \end{aligned}$$
(3.9)

Notice that, since W(x)P(x) has vanishing limits at the endpoints \(x=0\) and \(x=\infty \), integration by parts implies that

$$\begin{aligned} \int _{0}^{\infty } xP'(x)W(x)Q^{*}(x)\text {d}x = -\int _{0}^{\infty } P(x) \left( xW(x)Q^{*}(x) \right) ' \text {d}x. \end{aligned}$$
(3.10)

By replacing (3.10) in (3.9) and using the linearity of the matrix valued inner product, we write \(\langle P \cdot {\mathcal {D}}, Q \rangle \) as the sum of four integrals:

$$\begin{aligned} \langle P \cdot {\mathcal {D}}, Q \rangle =&-\int _{0}^{\infty } P(x) W(x)Q^{*}(x) \text {d}x- \int _{0}^{\infty } x P(x)W'(x) Q^{*}(x) \text {d}x \\&-\int _{0}^{\infty } x P(x)W(x)(Q^{*}(x))' \text {d}x + \int _{0}^{\infty } xP(x)(A-1)W(x) Q^{*}(x) \text {d}x. \end{aligned}$$
(3.11)

We note that the sum of the first and third integrals of (3.11) can be written, up to sign, in terms of the first-order differential operator \((1+\partial _{x} x)\) in the following way:

$$\begin{aligned} \int _{0}^{\infty } P(x)W(x)(Q^{*}(x) +x(Q^{*}(x))') \text {d}x=\langle P(x), Q \cdot (1+\partial _{x} x) \rangle . \end{aligned}$$

Therefore, we are left with the second and fourth integrals of (3.11). In order to deal with these integrals, we first write

$$\begin{aligned} \Delta = -\int _{0}^{\infty }x P(x) W'(x) Q^{*}(x) \text {d}x + \int _{0}^{\infty } xP(x)(A-1)W(x) Q^{*}(x) \text {d}x. \end{aligned}$$
(3.12)

By (3.7), we have that

$$\begin{aligned} W^{-1}(x) W'(x)&=e^{-A^{*} x} \left( {T}^{-1}(x)\,A \,T(x)+T ^{-1}(x)T'(x)+ A^{*}\right) e^{A^{*} x},\\ W^{-1}(x) x (A-1)W(x)&= e^{-A^{*}x} \left( x T^{-1}(x) \,A \,T(x) - x\right) e^{A^{*}x}. \end{aligned}$$

Thus, we obtain

$$\begin{aligned} \Delta&= - \int _{0}^{\infty } P(x) W(x) xW^{-1}(x) W'(x) Q^{*}(x) \text {d}x \\&\quad + \int _{0}^{\infty } P(x)W(x) \left( W^{-1}(x) x (A-1)W(x) \right) Q^{*} (x) \text {d}x, \\&= - \int _{0}^{\infty } P(x) W(x) e^{-A^{*} x} \\&\quad \left( xT^{-1}(x) \,A \, T(x)+x T^{-1}(x)T'(x)+ x A^{*}\right) e^{A^{*} x} Q^{*}(x) \text {d}x\\&\quad + \int _{0}^{\infty } P(x)W(x) e^{-A^{*}x} \left( x T^{-1}(x) \,A \,T(x) - x\right) e^{A^{*}x} Q^{*}(x)\text {d}x\\&= -\int _{0}^{\infty } P(x) W(x) e^{-A^{*} x} \left( x T^{-1}(x)T'(x)+ x A^{*}+x\right) e^{A^{*} x} Q^{*}(x) \text {d}x. \end{aligned}$$

By taking into account that \(xT'(x)=T(x) (-x\phi '(x)+\nu +J)\), we obtain that

$$\begin{aligned} \Delta = -\int _{0}^{\infty } P(x) W(x) e^{-A^{*} x} (-x\phi '(x)+\nu +J+xA^{*}+x) e^{A^{*}x} Q^{*}(x) \text {d}x. \end{aligned}$$

Notice that the conjugated factor in the above integrand satisfies

$$\begin{aligned} e^{-A^{*} x} ( -x\phi '(x)+\nu +J+xA^{*}+x) e^{A^{*}x} = -x\phi '(x)+ \nu + e^{-A^{*} x} J e^{A^{*}x}+xA^{*}+x. \end{aligned}$$

On the other hand, the equation \(e^{xA}J e^{-xA}=J-Ax\) implies that \(e^{-A^{*} x} J e^{A^{*}x}=J-A^{*}x\). Hence, we obtain that

$$\begin{aligned} \langle P \cdot {\mathcal {D}}, Q \rangle&= - \int _{0}^{\infty } P(x)W(x)\left( Q \cdot (1+\partial _{x} x) \right) ^{*}(x) \text {d}x\\&\quad - \int _{0}^{\infty } P(x) W(x) \left( x -x\phi '(x)+\nu +J-A^{*}x+xA^{*}\right) Q^{*}(x) \text {d}x\\&= \int _{0}^{\infty } P(x)W(x)\left( Q \cdot \left( -\partial _{x} x-x+x\phi '(x)-(1+\nu +J)\right) \right) ^{*}(x) \text {d}x\\&= \langle P,Q\cdot {\mathcal {D}}^{\dagger }\rangle . \end{aligned}$$

Therefore, the operators \({\mathcal {D}}\) and \({\mathcal {D}}^\dagger \) are mutually adjoint, as asserted. \(\square \)
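Proposition 3.6 can also be tested symbolically for a concrete choice of parameters. The sketch below is an illustration, not the general proof: it takes \(N=2\), \(\nu =1\), \(\phi (x)=x\), \(a_1=1\), \(\delta ^{(\nu )}_1=\delta ^{(\nu )}_2=1\), and checks \(\langle P\cdot {\mathcal {D}},Q\rangle =\langle P,Q\cdot {\mathcal {D}}^{\dagger }\rangle \) for one pair of matrix polynomials.

```python
# Symbolic check of Proposition 3.6 for N = 2, nu = 1, phi(x) = x.
import sympy as sp

x = sp.symbols('x', positive=True)
N, nu = 2, 1
J = sp.diag(1, 2)
A = sp.Matrix([[0, 0], [1, 0]])                # a_1 = 1
expA = sp.eye(N) + x * A                       # A^2 = 0
T = sp.exp(-x) * sp.diag(x ** (nu + 1), x ** (nu + 2))
W = expA * T * expA.T                          # weight (3.7); A is real

def ip(P, Q):
    # inner product (1.1); entries here are real, so Q^* = Q^T
    M = P * W * Q.T
    return M.applyfunc(lambda e: sp.integrate(sp.expand(e), (x, 0, sp.oo)))

def D(P):       # right action of D = dx x + x(A - 1)
    return P.diff(x) * x + P * (x * (A - sp.eye(N)))

def Ddag(Q):    # right action of D^dagger; x phi'(x) - x = 0 for phi = x
    return -Q.diff(x) * x - Q * (sp.eye(N) * (1 + nu) + J)

P = sp.Matrix([[x, 1], [0, x ** 2]])
Q = sp.Matrix([[1, x], [x, 0]])
assert sp.simplify(ip(D(P), Q) - ip(P, Ddag(Q))) == sp.zeros(N)
```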

The following corollary is a consequence of the previous results.

Corollary 3.7

Let \({\mathcal {D}}\) and \({\mathcal {D}}^{\dagger }\) be as in (3.8). Then, \(\phi \) is a polynomial if and only if \({\mathcal {D}}, {\mathcal {D}}^{\dagger } \in {\mathcal {F}}_R(P)\).

Proof

By Theorem 3.2 and Proposition 3.6, if \(\phi \) is a polynomial then \({\mathcal {D}}, {\mathcal {D}}^{\dagger } \in {\mathcal {F}}_R(P)\), as asserted.

Conversely, if \( {\mathcal {D}}^\dagger = -\partial _x x - (1+\nu +J)+x\phi '(x)-x \in {\mathcal {F}}_R(P)\), then \(x\phi '(x)\) is a polynomial. Indeed, by taking into account that \(P(x,0)=I\), we have

$$\begin{aligned} (P \cdot {\mathcal {D}}^{\dagger })(x,0) = - (1+\nu +J)+x\phi '(x)-x = (M^{\dagger } \cdot P)(x,0). \end{aligned}$$

Now, since \(M^{\dagger }\) is as in (2.2), we obtain that \((M^{\dagger }\cdot P)(x,0) \in M_N({\mathbb {C}})[x]\) and so \(x\phi '(x)\) is a polynomial. Since \(\phi (x)\) is a real analytic function, we can write \(\phi (x)=\sum _{r=0}^{\infty } a_r x^{r}\), and then \(x\phi '(x)=\sum _{r=1}^{\infty } r a_r x^{r}\). If \(x\phi '(x)\) is a polynomial, then there exists \(S\in {\mathbb {N}}\) such that \(r a_r = 0\) for all \(r \ge S\). Hence \(a_r = 0\) for all \(r \ge S\), and \(\phi (x)\) is a polynomial, as desired. \(\square \)

4 Lie algebras associated to orthogonal polynomials

The goal of this section is to study the dimension of the Lie algebra generated by \({\mathcal {D}}\) and \({\mathcal {D}}^{\dagger }\) and to classify, up to isomorphisms, all the finite dimensional cases. As a byproduct, we solve a problem proposed by Ismail in [16, Problem 24.5.2], as described in the introduction.

If \({\mathfrak {g}}\) is a finite dimensional Lie algebra, and if \({\mathfrak {g}}^{j}\) and \({\mathfrak {g}}_j\) denote

$$\begin{aligned} {\mathfrak {g}}^0={\mathfrak {g}}_0={\mathfrak {g}}, \quad {\mathfrak {g}}^{j+1}=[{\mathfrak {g}}^j,{\mathfrak {g}}^{j}] \quad \text {and } \quad {\mathfrak {g}}_{j+1}=[{\mathfrak {g}},{\mathfrak {g}}_{j}], \end{aligned}$$

then \({\mathfrak {g}}\) is called solvable (resp. nilpotent) if \({\mathfrak {g}}^j=0\) (resp. \({\mathfrak {g}}_j=0\)) for some j. Clearly, any nilpotent Lie algebra is solvable. The radical (resp. nilradical) of \({\mathfrak {g}}\) is its maximal solvable (resp. nilpotent) ideal. We will denote by \(\textrm{Rad}({\mathfrak {g}})\) and \(\textrm{Nil}({\mathfrak {g}})\) the radical and nilradical of \({\mathfrak {g}}\), respectively.

4.1 Lie algebra generated by \({\mathcal {D}}\) and \({\mathcal {D}}^{\dagger }\)

In the sequel, given \(\phi \) an analytic real function over \({\mathbb {R}}\), we denote by

$$\begin{aligned} {\mathfrak {g}}_{\phi }=\langle 1,{\mathcal {D}}, {\mathcal {D}}^{\dagger }, x, x\phi '(x), x^{2} \phi ^{(2)}(x),\ldots \rangle \end{aligned}$$
(4.1)

with the usual bracket. The first result in this section determines whether \({\mathfrak {g}}_{\phi }\) is finite dimensional and, in that case, gives its dimension. The following lemma gives the relations for the generators of \({\mathfrak {g}}_{\phi }\). The proof, which is a direct computation, is omitted.

Lemma 4.1

Let \(A,J\in M_{N}({\mathbb {C}})\) be as in (3.1) and let \(\phi \) be an analytic real function over \({\mathbb {R}}\). Let us consider the operators \({\mathcal {D}}\) and \({\mathcal {D}}^\dagger \) introduced in (3.3). Then we have that

$$\begin{aligned} {[}{\mathcal {D}},x]= & {} -x,\qquad [{\mathcal {D}}^\dagger ,x]=x,\qquad [{\mathcal {D}},{\mathcal {D}}^\dagger ] = -x^2\phi ^{(2)}(x)+(2-\phi '(x))x,\\ {[}{\mathcal {D}},\phi ^{(j)}(x)x^j]= & {} -(jx^{j}\phi ^{(j)}(x)+x^{j+1}\phi ^{(j+1)}(x)) \\= & {} -[{\mathcal {D}}^{\dagger },\phi ^{(j)}(x)x^{j}] \quad \text {for all } j\ge 1, \end{aligned}$$

where x and \(x^{j}\phi ^{(j)}(x)\) act on matrix valued polynomials by right multiplication.
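In the scalar case \(N=1\) (so \(J=0\) and \(A=0\)), the expression for \({\mathcal {D}}^{\dagger }\) in the proof of Corollary 3.7 together with (4.6) suggests the realizations \({\mathcal {D}}=\partial _x x-1-x\) and \({\mathcal {D}}^{\dagger }=-\partial _x x-(1+\nu )+x\phi '(x)-x\). Under this assumption, the bracket relations of Lemma 4.1 can be verified symbolically; the following sympy sketch checks them for generic \(p\) and \(\phi \) (an illustration only, not the matrix valued computation).

```python
import sympy as sp

x, nu = sp.symbols('x nu')
p = sp.Function('p')(x)        # generic polynomial, acted on from the right
phi = sp.Function('phi')(x)

def D(f):    # f . D  = f'x - f - fx        (scalar sketch of (3.3))
    return sp.diff(f, x)*x - f - f*x

def Dd(f):   # f . D† = -f'x - (1+nu)f + f x phi'(x) - fx
    return -sp.diff(f, x)*x - (1 + nu)*f + f*x*sp.diff(phi, x) - f*x

def bracket(S, T, f):          # right action: f.[S,T] = (f.S).T - (f.T).S
    return sp.expand(T(S(f)) - S(T(f)))

mult = lambda g: (lambda f: f*g)    # right multiplication by g(x)

assert sp.simplify(bracket(D, mult(x), p) + p*x) == 0     # [D, x] = -x
assert sp.simplify(bracket(Dd, mult(x), p) - p*x) == 0    # [D†, x] = x
expected = p*(-x**2*sp.diff(phi, x, 2) + (2 - sp.diff(phi, x))*x)
assert sp.simplify(bracket(D, Dd, p) - expected) == 0     # [D, D†]
```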

We are interested in determining whether the Lie algebra \({\mathfrak {g}}_\phi \) is finite dimensional. To this end, we introduce the following Lie subalgebra

$$\begin{aligned} {\mathfrak {a}}_{\phi }= \langle x^{i}\phi ^{(i)}(x) \rangle _{i\in {\mathbb {N}}}. \end{aligned}$$
(4.2)

The following lemma states that \({\mathfrak {a}}_{\phi }\) is finite dimensional if and only if \(\phi \) is a polynomial.

Lemma 4.2

Let \(\phi \) be an analytic real function over \({\mathbb {R}}\) and let \({\mathfrak {a}}_{\phi }\) be as in (4.2). Then, \({\mathfrak {a}}_{\phi }\) is finite dimensional if and only if \(\phi \) is a polynomial. In such a case, \(\dim {\mathfrak {a}}_{\phi }=\ell \) where \(\ell \) is the number of non-zero coefficients of \(\phi \).

Proof

Clearly, if \(\phi \) is a polynomial, then the dimension of \({\mathfrak {a}}_{\phi }\) is finite, since \(\phi ^{(m)}(x)\) vanishes for all m greater than the degree of \(\phi \).

Conversely, assume now that \(\phi \) is given by the power series

$$\begin{aligned} \phi (x)= \sum _{i=0}^{\infty } a_{i}x^{i}. \end{aligned}$$
(4.3)

This implies that

$$\begin{aligned} x^{j}\phi ^{(j)}(x)= \sum _{i=0}^{\infty } b_{i,j} x^{i}, \quad \text {with} \quad b_{i,j}= {\left\{ \begin{array}{ll} \genfrac(){0.0pt}1{i}{j}j! \,a_i &{} \text { if }j\le i, \\ 0 &{} \text {if }i<j. \end{array}\right. } \end{aligned}$$
(4.4)

In particular, if \(a_i=0\) then \(b_{i,j}=0\) for all \(j \in {\mathbb {N}}\).
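Formula (4.4) is straightforward to test symbolically. The sketch below checks \(b_{i,j}=\genfrac(){0.0pt}1{i}{j}j!\,a_i\) for a sample polynomial \(\phi \); the coefficients are arbitrary choices made for illustration.

```python
import sympy as sp

x = sp.symbols('x')
a = {0: 3, 2: -1, 5: 7}                 # sample non-zero coefficients a_i
phi = sum(c*x**i for i, c in a.items())

for j in range(1, 7):
    lhs = sp.expand(x**j * sp.diff(phi, x, j))
    rhs = sum(sp.binomial(i, j)*sp.factorial(j)*c*x**i
              for i, c in a.items() if j <= i)    # b_{i,j} = C(i,j) j! a_i
    assert sp.simplify(lhs - rhs) == 0
```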

Let \(\{i_t\}_{t\in {\mathbb {N}}}\) be the increasing sequence of indices of the non-zero coefficients in (4.3), that is, \(i_{t}<i_{t+1}\) for all \(t \in {\mathbb {N}}\), and \(a_i \ne 0\) if and only if \(i= i_{t}\) for some \(t \in {\mathbb {N}}\).

Claim: The vector space \(\langle x^{i_1}\phi ^{(i_1)}(x),\ldots , x^{i_{r}}\phi ^{(i_{r})}(x)\rangle \) has dimension r, for each \(r\in {\mathbb {N}}\). Let \(c_{1},\ldots ,c_{r} \in {\mathbb {C}}\) be such that

$$\begin{aligned} c_1x^{i_1}\phi ^{(i_1)}(x)+\cdots + c_{r}x^{i_r}\phi ^{(i_r)}(x)=0. \end{aligned}$$

This induces the following system of equations

$$\begin{aligned} \sum _{j=1}^{h} c_{j} \genfrac(){0.0pt}1{i_h}{i_j} i_{j}! \,a_{i_h}=0, \qquad \text {for }h=1,\ldots ,r. \end{aligned}$$

By taking into account that \(a_{i_1}\ne 0\), the equation for \(h=1\) implies that \(c_1=0\). In the same way, since \(c_1=0\), the equation for \(h=2\) implies that \(c_2 a_{i_2} i_2!=0\) and so \(c_2=0\) since \(a_{i_2}\ne 0\). Inductively, if \(c_1=c_2=\ldots =c_{r-1}=0\), then the equation for \(h=r\) implies that \(c_r a_{i_r} i_r!=0\) and so \(c_r=0\) since \(a_{i_r}\ne 0\). Hence, \(c_{h}=0\) for all \(h\in \{1,\ldots ,r\}\).

By the claim, the space \({\mathfrak {a}}_{\phi }\) contains subspaces of arbitrarily large dimension, and so it is infinite dimensional.

Now assume that \(\phi \) is a polynomial of degree n. In the same notation as above, by (4.4), we have that

$$\begin{aligned} \langle x\phi '(x),x^{2}\phi ^{(2)}(x),\ldots , x^{n}\phi ^{(n)}(x)\rangle \subseteq \langle x^{i_1},\ldots ,x^{i_\ell }\rangle . \end{aligned}$$

The claim and the above inclusion imply that \(\langle x\phi '(x),x^{2}\phi ^{(2)}(x),\ldots , x^{n}\phi ^{(n)}(x)\rangle \) has dimension \(\ell \). \(\square \)

We will need the following notation: given \(\phi \) a polynomial over \({\mathbb {C}}\) with \(\ell \) non-zero coefficients,

$$\begin{aligned} k={\left\{ \begin{array}{ll} \ell +2 &{} \text {if } \phi '(0)=\phi (0)=0,\\ \ell +1 &{} \text {if } \phi '(0)=0, \phi (0)\ne 0,\\ \ell +1 &{} \text {if } \phi (0)=0, \phi '(0)\ne 0,\\ \ell &{} \text {if } \phi (0)\ne 0, \phi '(0)\ne 0. \end{array}\right. } \end{aligned}$$
(4.5)

Proposition 4.3

Let \(\phi \) be an analytic real function over \({\mathbb {R}}\) and let \({\mathfrak {g}}:={\mathfrak {g}}_{\phi }\) be its associated Lie algebra as in (4.1). Then, \(\dim ({\mathfrak {g}})\) is finite if and only if \(\phi \) is a polynomial. In that case, if k is as in (4.5), then

$$\begin{aligned} \dim ({\mathfrak {g}})= {\left\{ \begin{array}{ll} k+2 &{} \text {for }N\ge 2,\\ k+1 &{} \text {for }N=1. \end{array}\right. } \end{aligned}$$

Proof

Clearly, if \(\phi \) is a polynomial, then the dimension of \({\mathfrak {g}}\) is finite, since \(\phi ^{(m)}(x)\) vanishes for all m greater than the degree of \(\phi \).

Conversely, assume now that \(\phi \) is not a polynomial. Notice that \({\mathfrak {a}}_{\phi }\) is a Lie subalgebra of \({\mathfrak {g}}\). By Lemma 4.2, \({\mathfrak {a}}_{\phi }\) is infinite dimensional, and hence the dimension of \({\mathfrak {g}}\) is infinite.

Finally, the dimension of \({\mathfrak {g}}\) follows from Lemma 4.2 together with the fact that, for \(N\ge 2\), the operators \({\mathcal {D}},{\mathcal {D}}^{\dagger }\) are linearly independent of \(1,x,x\phi '(x),x^{2}\phi ^{(2)}(x),\ldots , x^{n}\phi ^{(n)}(x)\), and this vector space has dimension k, with k as in the statement. In the case \(N=1\), notice that

$$\begin{aligned} {\mathcal {D}}+ {\mathcal {D}}^{\dagger }=-(2+\nu )-2x+x\phi '(x) \end{aligned}$$
(4.6)

and so \({\mathcal {D}}^{\dagger }\) is a linear combination of \(1,{\mathcal {D}},x\) and \(x\phi '(x)\); hence the dimension of \({\mathfrak {g}}\) is \(k+1\) in this case, as desired. \(\square \)

Remark 4.4

As a direct consequence of the above results, \({\mathfrak {g}}_{\phi }\) is finite dimensional if and only if \({\mathcal {D}}, {\mathcal {D}}^{\dagger } \in {\mathcal {F}}_R(P)\). Indeed, by Proposition 4.3, \({\mathfrak {g}}_{\phi }\) is finite dimensional if and only if \(\phi \) is a polynomial, and by Corollary 3.7, \(\phi \) is a polynomial if and only if \({\mathcal {D}}, {\mathcal {D}}^{\dagger } \in {\mathcal {F}}_R(P)\).

Remark 4.5

By the proof of Proposition 4.3, if \(\phi (x)=a_0+a_1 x +\ldots + a_{n}x^n\) is a polynomial of degree n with \(\ell \) non-zero coefficients and if \(\{i_1,\ldots ,i_{\ell } \}\subseteq \{0,\ldots ,n\}\) is the set of indices such that \(a_{i_j}\ne 0\), then we have that

$$\begin{aligned} \langle x\phi '(x),x^{2}\phi ^{(2)}(x),\ldots , x^{n}\phi ^{(n)}(x)\rangle =\langle x^{i_1},\ldots ,x^{i_\ell }\rangle . \end{aligned}$$
(4.7)

Example 4.6

Let \(\phi _{1}(x)=x^3\) and \(\phi _{2}(x)=x^3+x^2\). By Proposition 4.3, for \(N\ge 2\) the associated Lie algebras \({\mathfrak {g}}_{\phi _1}\) and \({\mathfrak {g}}_{\phi _2}\) have dimensions 5 and 6, respectively. Hence, the Lie algebras \({\mathfrak {g}}_{\phi _1}\) and \({\mathfrak {g}}_{\phi _2}\) are non-isomorphic.
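The dimension count of Proposition 4.3 can be packaged in a few lines of code. The helper dim_g below is a hypothetical plain-Python sketch of (4.5), applied here to Example 4.6 in the case \(N\ge 2\).

```python
def dim_g(coeffs, N=2):
    """dim(g_phi) per Proposition 4.3; coeffs = (a_0, ..., a_n) of phi."""
    ell = sum(1 for c in coeffs if c != 0)       # number of non-zero a_i
    a0 = coeffs[0] if len(coeffs) > 0 else 0     # phi(0)
    a1 = coeffs[1] if len(coeffs) > 1 else 0     # phi'(0)
    k = ell + (a0 == 0) + (a1 == 0)              # the four cases of (4.5)
    return k + 2 if N >= 2 else k + 1

# Example 4.6: phi_1 = x^3 and phi_2 = x^3 + x^2
assert dim_g([0, 0, 0, 1]) == 5
assert dim_g([0, 0, 1, 1]) == 6
```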

4.2 Structure of \({\mathfrak {g}}_{\phi }\)

In this subsection, we give a classification of all finite dimensional Lie algebras \({\mathfrak {g}}_\phi \).

Lemma 4.7

The element \(z= {\mathcal {D}}+\mathcal {D^{\dagger }} +2x -x\phi '(x)\) is a symmetric differential operator which belongs to the center of the Lie algebra \({\mathfrak {g}}_{\phi }\).

Proof

It follows immediately from the definition of the bracket of \({\mathfrak {g}}_{\phi }\). \(\square \)

Remark 4.8

The central element that we found in Lemma 4.7 is related to the symmetric operator \({\mathcal {C}}\) that was considered in Lemma 3.1. This can be derived directly from (3.4).

In the sequel, given a polynomial \(\phi (x)=a_{0}+a_1 x+ \cdots +a_n x^n \in {\mathbb {R}}[x]\) of degree \(n\ge 2\) with \(\ell \) non-zero coefficients and k as in (4.5), we consider the following notation

$$\begin{aligned} I_{\phi }= & {} \{i \in \{2,\ldots ,n\}: a_i\ne 0\}=\{ j_1,\ldots ,j_{k-2}\}\nonumber \\ {}= & {} {\left\{ \begin{array}{ll} \{i_3,\ldots ,i_{\ell }\} &{} \quad \text {if}\;a_0\ne 0, a_1\ne 0, \\ \{i_2,\ldots ,i_{\ell }\} &{} \quad \text {if}\;a_0=0\hbox { and }a_1\ne 0, \\ \{i_2,\ldots ,i_{\ell }\} &{} \quad \text {if}\;a_0\ne 0\hbox { and }a_1= 0, \\ \{i_1,\ldots ,i_{\ell }\} &{} \quad \text {if}\;a_0=0\hbox { and }a_1=0, \\ \end{array}\right. } \end{aligned}$$
(4.8)

with \(j_t< j_{t+1}\) and \(i_t<i_{t+1}\).

Theorem 4.9

Let \(\phi (x)=a_{0}+a_1 x+ \cdots +a_n x^n \in {\mathbb {R}}[x]\) be a polynomial of degree \(n\ge 2\) with \(\ell \) non-zero coefficients and k as in (4.5). If \({\mathfrak {g}}_{\phi }\) is the associated Lie algebra of \(\phi \) as in (4.1), then we have that

$$\begin{aligned} {\mathfrak {g}}_{\phi }\cong {\left\{ \begin{array}{ll} {\mathbb {C}}^2 \oplus {\mathfrak {h}} &{} \text {for }N\ge 2 \\ {\mathbb {C}} \oplus {\mathfrak {h}} &{} \text {for }N=1 \end{array}\right. } \end{aligned}$$

where \({\mathfrak {h}}\) is a solvable Lie algebra of dimension k, with an abelian nilradical of dimension \(k-1\). More precisely, if \(I_{\phi }\) is as in (4.8) then

$$\begin{aligned} {\mathfrak {h}} \cong \langle E \rangle \ltimes \langle E_1, \ldots E_{k-1}\rangle \end{aligned}$$

where \(\langle E_1, \ldots E_{k-1}\rangle \) is abelian and the rest of the brackets satisfy

$$\begin{aligned} {[}E,E_1]=E_1 \quad \text {and} \quad [E,E_t]= j_{t-1} E_t \quad \text {for }t=2,\ldots ,k-1. \end{aligned}$$
(4.9)

Proof

By Lemma 4.7, the element \(z= {\mathcal {D}}+\mathcal {D^{\dagger }} +2x -x\phi '(x)\) belongs to the center of \({\mathfrak {g}}_{\phi }\). So, for \(N \ge 2\), we obtain an element in the center that does not belong to \(\langle 1\rangle \), and in the case \(N=1\), by (4.6), we obtain that \(z\in \langle 1\rangle \). Thus, if

$$\begin{aligned} {\mathfrak {h}}:=\langle {\mathcal {D}}, x,x\phi '(x),x^{2}\phi ^{(2)}(x),\ldots , x^{n}\phi ^{(n)}(x) \rangle \end{aligned}$$

then we obtain that

$$\begin{aligned} {\mathfrak {g}}_{\phi }\cong {\left\{ \begin{array}{ll} {\mathbb {C}}^2 \oplus {\mathfrak {h}} &{} \text {for }N\ge 2, \\ {\mathbb {C}} \oplus {\mathfrak {h}} &{} \text {for }N=1, \end{array}\right. } \end{aligned}$$

with \({\mathfrak {h}}\) a Lie algebra of dimension k.

Thus, it is enough to show that \({\mathfrak {h}}\) is solvable with nilradical of dimension \(k-1\). Let us consider

$$\begin{aligned} {\mathfrak {k}}=\langle x,x\phi '(x),x^{2}\phi ^{(2)}(x),\ldots , x^{n}\phi ^{(n)}(x)\rangle . \end{aligned}$$

By definition of the bracket and by taking into account that

$$\begin{aligned} {[}x,x^{l} \phi ^{(l)}(x)]=[x^i \phi ^{(i)}(x),x^j \phi ^{(j)}(x)]=0\qquad \text {for all }i\ne j, \end{aligned}$$
(4.10)

we obtain that \([{\mathfrak {h}},{\mathfrak {h}}]\subseteq {\mathfrak {k}}\). Hence, by (4.10), we obtain that \(\Big [[{\mathfrak {h}},{\mathfrak {h}}],[{\mathfrak {h}},{\mathfrak {h}}]\Big ]=0,\) and so \({\mathfrak {h}}\) is solvable. Finally, notice that \({\mathfrak {k}}\) is an abelian ideal of \({\mathfrak {h}}\) of dimension

$$\begin{aligned} \dim ({\mathfrak {k}})=\dim ({\mathfrak {h}})-1=k-1. \end{aligned}$$

This implies that \({\mathfrak {k}}\) is the nilradical of \({\mathfrak {h}}\), as desired.

As in Remark 4.5, we have that

$$\begin{aligned} {\mathfrak {h}}=\langle {\mathcal {D}}, x, x^{j_1},\ldots , x^{j_{k-2}} \rangle . \end{aligned}$$

In this case, \(\langle x, x^{j_1},\ldots , x^{j_{k-2}}\rangle \) is an abelian subalgebra of dimension \(k-1\). It is enough to compute the brackets \([{\mathcal {D}},x^{j_t}]\) and \([{\mathcal {D}},x]\). We obtain that

$$\begin{aligned} {[}{\mathcal {D}},x]=-x \qquad \text {and} \qquad [{\mathcal {D}},x^{j_t}]= -j_{t} x^{j_t}, \end{aligned}$$

so that \(\langle x, x^{j_1},\ldots , x^{j_{k-2}}\rangle \) is an abelian ideal. Finally, the correspondence

$$\begin{aligned} {\mathcal {D}}\longmapsto -E, \quad x\longmapsto E_1, \quad x^{j_i}\longmapsto E_{i+1} \quad \text {for }i=1,\ldots ,k-2, \end{aligned}$$

is a Lie algebra isomorphism between \({\mathfrak {h}}\) and \(\langle E \rangle \ltimes \langle E_1, \ldots E_{k-1}\rangle \) with brackets given as in (4.9). \(\square \)

Next, we study the structure of the solvable Lie algebra \({\mathfrak {h}}_{\phi }\). In general, a Lie algebra \(\mathfrak {h'}\) with an abelian ideal of codimension 1 is called almost abelian. These algebras were studied by V.V. Gorbatsevich in [10], where it is shown that any such algebra admits a decomposition of the form

$$\begin{aligned} {\mathbb {C}} \ltimes _{\psi } {\mathbb {C}}^{k-1}, \end{aligned}$$

where the semidirect product structure is given by a linear mapping \(\psi : {\mathbb {C}} \rightarrow gl_{k-1}({\mathbb {C}})\). More precisely, if \({\mathcal {B}}=\{E_1,\ldots ,E_k\}\) is the canonical basis of \({\mathbb {C}}^{k}\) and \(\mathcal {B'}=\{E_2,\ldots ,E_k\}\), then

$$\begin{aligned} {[}E_1,E_i] \in \langle {\mathcal {B}}'\rangle \text { for }i\ge 2 \qquad \text {and} \qquad [E_{i},E_j]=0 \text { for }i,j\in \{2,\ldots ,k\}. \end{aligned}$$

Hence, we can take the matrix

$$\begin{aligned} \Psi =\Big [ [E_1,E_2]_{{\mathcal {B}}'}, [E_1,E_3]_{{\mathcal {B}}'},\ldots , [E_1,E_k]_{{\mathcal {B}}'} \Big ]\in gl_{k-1}({\mathbb {C}}) \end{aligned}$$

where \([E_1,E_i]_{{\mathcal {B}}'}\) is the coordinate vector of \([E_1, E_i]\) with respect to the \({\mathbb {C}}\)-basis \({\mathcal {B}}'\). Thus, we can define \(\psi : {\mathbb {C}} \rightarrow gl_{k-1}({\mathbb {C}})\) by

$$\begin{aligned} \psi (z)=z\Psi \end{aligned}$$

and so \(\psi \) is linear and \(\psi (1)=\Psi \).

In [10], the author showed that the structure of the Lie algebra is determined by the matrix \(\Psi \). More precisely,

$$\begin{aligned} {\mathbb {C}} \ltimes _{\psi } {\mathbb {C}}^{k-1}\cong {\mathbb {C}} \ltimes _{\psi '} {\mathbb {C}}^{k-1} \Longleftrightarrow \quad \Psi \text { and }\Psi '\text { are conformally similar}. \end{aligned}$$
(4.11)

Recall that \(\Psi \) and \(\Psi '\) are conformally similar if and only if there exists a matrix \(P\in GL_{k-1}({\mathbb {C}})\) and a non-zero complex number \(\lambda \in {\mathbb {C}}\smallsetminus \{0\}\) such that \(\Psi = \lambda P \Psi ' P^{-1}\).

Lemma 4.10

Let k be an integer greater than 1 and let \(1<j_1<\ldots <j_{k-2}\) and \(1<j'_1<\ldots < j'_{k-2}\) be two sequences of positive integers. Let us consider

$$\begin{aligned} \Psi = \textrm{diag}(1, j_1,\ldots ,j_{k-2}) \quad \text {and} \quad \Psi ' = \textrm{diag}(1, j'_1,\ldots ,j'_{k-2}). \end{aligned}$$

Then, \(\Psi \) and \(\Psi '\) are conformally similar if and only if \(\Psi =\Psi '\).

Proof

If \(\Psi =\Psi '\), they are trivially conformally similar.

Now, assume that \(\Psi \) and \(\Psi '\) are conformally similar, so there exist \(\lambda \in {\mathbb {C}}^*\) and \(P\in GL_{k-1}({\mathbb {C}})\) such that

$$\begin{aligned} \Psi ' = \lambda P \Psi P^{-1}. \end{aligned}$$

By similarity, we obtain the following spectral relationship

$$\begin{aligned} \textrm{Spec}(\Psi ')= \lambda \cdot \textrm{Spec}(\Psi ), \end{aligned}$$

i.e., all of the eigenvalues of \(\Psi '\) can be obtained from the eigenvalues of \(\Psi \) by multiplication by \(\lambda \). Since \(\Psi \) and \(\Psi '\) are both diagonal, we have that

$$\begin{aligned} \textrm{Spec}(\Psi )=\{1,j_1,\ldots ,j_{k-2}\}\quad \text { and} \quad \textrm{Spec}(\Psi ')=\{1, j'_1,\ldots ,j'_{k-2}\}. \end{aligned}$$

Since 1 belongs to both spectra and the rest of the eigenvalues of \(\Psi \) and \(\Psi '\) are greater than 1, we obtain that \(\lambda =1\). Hence, we get that

$$\begin{aligned} \textrm{Spec}(\Psi ')= \textrm{Spec}(\Psi ). \end{aligned}$$

Therefore, \(\Psi =\Psi '\). \(\square \)
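Lemma 4.10 can also be confirmed numerically: for diagonal matrices, conformal similarity amounts to the spectra agreeing as multisets up to a common non-zero factor \(\lambda \). The brute-force helper below is a sketch for illustration, not part of the proof; it simply tries every candidate \(\lambda \).

```python
def conformally_similar_diag(spec, spec2):
    """Diagonal matrices with non-zero diagonals spec, spec2 are
    conformally similar iff spec2 = lam * spec as multisets, some lam != 0."""
    for s2 in spec2:
        lam = s2 / spec[0]                  # candidate scaling factor
        if lam != 0 and sorted(lam*s for s in spec) == sorted(spec2):
            return True
    return False

# With leading entry 1 and increasing integer entries, only equality works:
assert conformally_similar_diag([1, 2, 5], [1, 2, 5])
assert not conformally_similar_diag([1, 2, 5], [1, 3, 5])
# Without that normalization, a genuine scaling can occur (lam = 3 here):
assert conformally_similar_diag([1, 2, 4], [3, 6, 12])
```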

We are now in a position to state our main theorem, which determines when the Lie algebras associated with two different polynomials (as in (4.1)) are isomorphic.

Theorem 4.11

Let \(\phi _1(x),\phi _2(x)\) be polynomials over \({\mathbb {R}}\) of degree greater than or equal to 2. Let \({\mathfrak {h}}_{\phi _1}\) and \({\mathfrak {h}}_{\phi _2}\) be their associated solvable Lie algebras as in Theorem 4.9. Then,

$$\begin{aligned} {\mathfrak {h}}_{\phi _1}\cong {\mathfrak {h}}_{\phi _2} \Longleftrightarrow I_{\phi _1} = I_{\phi _2}, \end{aligned}$$

where \(I_{\phi _1}, I_{\phi _2}\) are as in (4.8). Moreover, we have that

$$\begin{aligned} {\mathfrak {g}}_{\phi _1}\cong {\mathfrak {g}}_{\phi _2} \Longleftrightarrow I_{\phi _1} = I_{\phi _2}, \end{aligned}$$

where \({\mathfrak {g}}_{\phi _1}\) and \({\mathfrak {g}}_{\phi _2}\) are the associated Lie algebras of \(\phi _1\) and \(\phi _2\), respectively.

Proof

From Theorem 4.9, we have that \({\mathfrak {g}}_{\phi _1}\cong {\mathfrak {g}}_{\phi _2}\) if and only if \({\mathfrak {h}}_{\phi _1}\cong {\mathfrak {h}}_{\phi _2}\). So, it is sufficient to verify that

$$\begin{aligned} {\mathfrak {h}}_{\phi _1}\cong {\mathfrak {h}}_{\phi _2} \Longleftrightarrow I_{\phi _1} = I_{\phi _2}. \end{aligned}$$

Now by (4.11), it is enough to determine when the associated matrices \(\Psi _{\phi _1}\) and \(\Psi _{\phi _2}\) are conformally similar. By Theorem 4.9 and (4.11), \(\Psi _{\phi _1}\) and \(\Psi _{\phi _2}\) are conformally similar to

$$\begin{aligned} \Psi = \textrm{diag}(1, j_1,\ldots ,j_{k-2}) \quad \text {and} \quad \Psi ' = \textrm{diag}(1, j'_1,\ldots ,j'_{k-2}) \quad \text {respectively,} \end{aligned}$$

where \(\{j_1,\ldots ,j_{k-2}\} =I_{\phi _1}\) and \(\{j'_1,\ldots ,j'_{k-2}\}=I_{\phi _2}\). Finally, by Lemma 4.10, we obtain that \(\Psi \) and \(\Psi '\) are conformally similar if and only if \(\Psi =\Psi '\) which is equivalent to saying that \(I_{\phi _1}=I_{\phi _2}\). Therefore \({\mathfrak {h}}_{\phi _1}\cong {\mathfrak {h}}_{\phi _2}\) if and only if \(I_{\phi _1}= I_{\phi _2}\) as asserted. \(\square \)

As a direct consequence, we obtain the following result.

Corollary 4.12

Let \(\phi _1(x),\phi _2(x)\) be polynomials over \({\mathbb {R}}\) of degree greater than or equal to 2. Then, we have the following cases:

  1. If \(\deg \phi _1=\deg \phi _2=2\), then \({\mathfrak {g}}_{\phi _1}\cong {\mathfrak {g}}_{\phi _2}\).

  2. If \(\deg \phi _1\ne \deg \phi _2\), then \({\mathfrak {g}}_{\phi _1}\not \cong {\mathfrak {g}}_{\phi _2}\).

Remark 4.13

Notice that if we consider \(\phi _m(x)=x^{m}+ax\) with \(m\ge 2\), then \(\dim ({\mathfrak {h}}_{\phi _m})=3\). The structure of solvable Lie algebras of dimension 3 was studied by Patera and Zassenhaus in [22]. On p. 4, the authors define the Lie algebra \(L_{3,6}=\langle a_1,a_2,a_3\rangle \) with brackets

$$\begin{aligned}{}[a_1,a_2]= a_3,\qquad [a_1,a_3]= a_3-\alpha \cdot a_2, \end{aligned}$$

with parameter \(\alpha \) satisfying \(\alpha \ne 0\) and \(1-4\alpha \ne 0\). This parameter \(\alpha \) is in one-to-one correspondence with isomorphism classes of this kind of algebras. Its associated matrix \(\psi _{\alpha }(1)\) is

$$\begin{aligned} \psi _{\alpha }(1)= \begin{pmatrix} 0 &{} -\alpha \\ 1 &{} 1 \end{pmatrix}. \end{aligned}$$

The eigenvalues of \(\psi _{\alpha }(1)\) are

$$\begin{aligned} \lambda _0=\tfrac{1-\sqrt{1-4\alpha }}{2}\quad \text {and} \quad \lambda _1=\tfrac{1+\sqrt{1-4\alpha }}{2}. \end{aligned}$$

On the other hand, the associated matrix \(\Psi _m\) of \({\mathfrak {h}}_{\phi _m}\) is conformally similar to

$$\begin{aligned} \begin{pmatrix} 1 &{} 0\\ 0 &{} m \end{pmatrix}. \end{aligned}$$

Thus, \(\Psi _m\) is conformally similar to \(\psi _{\alpha }(1)\) if and only if

$$\begin{aligned} \lambda _0=r, \qquad \lambda _1=rm \qquad \text {for some complex number }r, \end{aligned}$$

since diagonalizable matrices are similar if and only if their spectra are equal. This system of equations has a solution \(r=\tfrac{1}{m+1}\) and \(\alpha = \tfrac{m}{(m+1)^{2}}\). Therefore \({\mathfrak {h}}_{\phi _m} \cong L_{3,6}^{\alpha }\) with \(\alpha =\tfrac{m}{(m+1)^{2}}\).
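The identification \({\mathfrak {h}}_{\phi _m} \cong L_{3,6}^{\alpha }\) with \(\alpha =\tfrac{m}{(m+1)^{2}}\) can be checked directly: the spectrum of \(\psi _{\alpha }(1)\) should be \(r\cdot \{1,m\}\) with \(r=\tfrac{1}{m+1}\). A short sympy verification for a few values of m:

```python
import sympy as sp

for m in [2, 3, 7]:                          # degrees of phi_m(x) = x^m + ax
    alpha = sp.Rational(m, (m + 1)**2)
    Psi = sp.Matrix([[0, -alpha], [1, 1]])   # psi_alpha(1) for L_{3,6}
    r = sp.Rational(1, m + 1)
    assert set(Psi.eigenvals()) == {r, r*m}  # spectrum = r * {1, m}
```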

4.3 Application: a problem by Ismail

In this subsection, we extend the differential operators \({\mathcal {D}}\) and \({\mathcal {D}}^\dagger \) by means of a real function \(\beta _{n}(x)\). Let us introduce the operators

$$\begin{aligned} {\mathcal {D}}_{1,n}={\mathcal {D}}+ x + x \beta _n(x), \qquad {\mathcal {D}}_{2,n}={\mathcal {D}}^{\dagger }+x+x\beta _n(x). \end{aligned}$$
(4.12)

By taking into account that x and \(\beta _n(x)x\) are symmetric with respect to \(W^{(\nu )}_{\phi }(x)\), and \({\mathcal {D}}, \, {\mathcal {D}}^{\dagger }\) are mutually adjoint with respect to \(W^{(\nu )}_{\phi }(x)\), we obtain that \({\mathcal {D}}_{1,n}\) and \({\mathcal {D}}_{2,n}\) are mutually adjoint. We are mainly interested in differential operators \({\mathcal {D}}_{1,n}\) and \({\mathcal {D}}_{2,n}\) which preserve polynomials. This is related to the requirement that the coefficient \(x \beta _n(x)\) be a polynomial.

Proposition 4.14

Let \(\phi (x)\) and \(x\beta _n(x)\) be analytic real functions supported on \([0,\infty )\). Let \({\mathfrak {g}}_{n}\) be the Lie algebra generated by \({\mathcal {D}}_{1,n}\) and \({\mathcal {D}}_{2,n}\), as in (4.12). Then, the Lie algebra \({\mathfrak {g}}_{n}\) has a Lie subalgebra isomorphic to \({\mathfrak {a}}_{n}:={\mathfrak {a}}_{\alpha _n}\), where \({\mathfrak {a}}_{\alpha _n}\) is given by (4.2) with

$$\begin{aligned} \alpha _{n}(x)=x^{2}\phi ^{(2)}(x)+\phi '(x)x+2\beta _n'(x)x^{2}+2\beta _n(x) x. \end{aligned}$$
(4.13)

The following are equivalent:

  1. \({\mathfrak {g}}_{n}\) is finite dimensional.

  2. \({\mathfrak {a}}_{n}\) is finite dimensional.

  3. \(\alpha _{n}\) is a polynomial.

  4. \(x(\phi '(x)+2\beta _{n}(x))\) is a polynomial.

Moreover, if \({\mathcal {D}}_{1,n},\, {\mathcal {D}}_{2,n}\in {\mathcal {F}}_{R}(P)\), then the Lie algebra \({\mathfrak {g}}_{n}\) is finite dimensional if and only if \(\phi \) is a polynomial.

Proof

Notice that

$$\begin{aligned} {[}{\mathcal {D}}_{1,n}, {\mathcal {D}}_{2,n}]=-x^{2}\phi ^{(2)}(x)-\phi '(x)x-2\beta _{n}'(x)x^{2}-2\beta _n(x) x=-\alpha _{n}(x), \end{aligned}$$

thus \(\alpha _n(x)\in {\mathfrak {g}}_{n}\). On the other hand, \(\alpha '_n(x) x \in {\mathfrak {g}}_n\), since

$$\begin{aligned} p\cdot [{\mathcal {D}}_{1,n}, \alpha _{n}]= & {} p\cdot [\partial _{x} x, \alpha _{n}]=(p(x)\cdot \partial _x x)\alpha _{n}(x)- (p(x)\alpha _{n}(x))\cdot \partial _{x} x\\ {}= & {} -p(x)\alpha _{n}'(x)x, \end{aligned}$$

and hence \([{\mathcal {D}}_{1,n}, \alpha _{n}]=-\alpha '_{n}(x)x\).

Inductively, if we assume that \(\alpha ^{(j)}_n(x)x^{j} \in {\mathfrak {g}}_n\), then

$$\begin{aligned} p\cdot [{\mathcal {D}}_{1,n},\alpha ^{(j)}_n(x)x^{j}]= & {} (p(x) \cdot \partial _x x)\alpha ^{(j)}_n(x)x^{j}- (p(x) \alpha ^{(j)}_n(x)x^{j}) \cdot \partial _x x \\= & {} -p(x) \big (\alpha ^{(j+1)}_n(x)x^{j+1} + j \alpha ^{(j)}_n(x)x^{j}\big ). \end{aligned}$$

Then \([{\mathcal {D}}_{1,n},\alpha ^{(j)}_n(x)x^{j}] = -\big (\alpha ^{(j+1)}_n(x)x^{j+1} + j \alpha ^{(j)}_n(x)x^{j}\big ) \in {\mathfrak {g}}_n\), and since \(j \alpha ^{(j)}_n(x)x^{j} \in {\mathfrak {g}}_n\), it follows that \(\alpha ^{(j+1)}_n(x)x^{j+1} \in {\mathfrak {g}}_n\). Thus \(\alpha _{n}^{(i)}(x)x^{i}\in {\mathfrak {g}}_{n}\) for all \(i\in {\mathbb {N}}_0\), and therefore \({\mathfrak {a}}_{n}\) is a Lie subalgebra of \({\mathfrak {g}}_n\), as asserted.
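The computations above all reduce to the single right-action identity \([\partial _x x, f]=-f'(x)x\), since the multiplication parts of \({\mathcal {D}}_{1,n}\) commute with right multiplication by f. A symbolic check of this identity (a scalar sketch with generic f and p):

```python
import sympy as sp

x = sp.symbols('x')
p = sp.Function('p')(x)
f = sp.Function('f')(x)

E = lambda g: sp.diff(g, x)*x      # right action of  d/dx x : g -> g'x
Mf = lambda g: g*f                 # right multiplication by f(x)

# p . [d/dx x, f] = (p . d/dx x) f - (p f) . d/dx x = -p f' x
lhs = Mf(E(p)) - E(Mf(p))
assert sp.simplify(lhs + p*sp.diff(f, x)*x) == 0
```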

The fact that \({\mathfrak {g}}_{n}\) is finite dimensional if and only if \({\mathfrak {a}}_{n}\) is finite dimensional follows directly from the fact that

$$\begin{aligned} \mathfrak {g_{n}}=\langle {\mathcal {D}}_{1,n}, {\mathcal {D}}_{2,n}, \alpha _{n}^{(i)}(x)x^{i} \rangle _{i\in {\mathbb {N}}_{0}}. \end{aligned}$$

From Lemma 4.2, we get that \({\mathfrak {a}}_{n}\) is finite dimensional if and only if \(\alpha _{n}\) is a polynomial.

Now, we claim that if \(F(x)=x(\phi '(x)+2\beta _{n}(x))\) is a polynomial then \(\alpha _{n}\) is a polynomial as well. Indeed, the assertion follows immediately from the fact that \(xF'(x)=\alpha _{n}(x)\). Conversely, if \(\alpha _{n}(x)\) is a polynomial then \(xF'(x)\) is a polynomial. Since F(x) is real analytic, we have that

$$\begin{aligned} F(x)= \sum _{i=0}^{\infty } c_{i} x^{i} \end{aligned}$$

thus, it is enough to show that there exists \(L\in {\mathbb {N}}\) such that \(c_{i}=0\) for all \(i>L\). Notice that

$$\begin{aligned} xF'(x)= \sum _{i=1}^{\infty } i c_{i} x^{i}, \end{aligned}$$

since \(xF'(x)\) is a polynomial, then there exists \(L\in {\mathbb {N}}\) such that \(i c_i=0\) for all \(i>L\) and hence \(c_i=0\) for all \(i>L\). Therefore F(x) is a polynomial, as we wanted.
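The identity \(xF'(x)=\alpha _{n}(x)\) invoked in the claim is a one-line symbolic computation. In the sketch below, beta stands for a generic \(\beta _n\):

```python
import sympy as sp

x = sp.symbols('x')
phi = sp.Function('phi')(x)
beta = sp.Function('beta')(x)      # stands for beta_n

alpha = (x**2*sp.diff(phi, x, 2) + sp.diff(phi, x)*x
         + 2*sp.diff(beta, x)*x**2 + 2*beta*x)       # (4.13)
F = x*(sp.diff(phi, x) + 2*beta)                     # F = x(phi' + 2 beta_n)

assert sp.simplify(x*sp.diff(F, x) - alpha) == 0     # x F'(x) = alpha_n(x)
```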

Finally, assume that \({\mathcal {D}}_{1,n}\) and \({\mathcal {D}}_{2,n}\) are in the right Fourier algebra.

Claim: \(x\beta _{n}(x)\) is a polynomial. By taking into account that \({\mathcal {D}}_{1,n}= \partial _x x + x A + x \beta _n(x)\) and \(P_0(x)=I\), we have

$$\begin{aligned} (P \cdot {\mathcal {D}}_{1,n})(x,0) = A x + x \beta _n(x) = (M_{1,n} \cdot P)(x,0), \end{aligned}$$

where \(M_{1,n} \in {\mathcal {F}}_L(P)\) is such that \(P \cdot {\mathcal {D}}_{1,n} = M_{1,n} \cdot P\), for all \(P \in M_N({\mathbb {C}})[x]\). Now, since \(M_{1,n}\) is as in (2.2), we obtain that \((M_{1,n}\cdot P)(x)\) is a polynomial, and so \(x\beta _{n}(x)\) is a polynomial.

By the above statements, it is enough to show that \(\phi (x)\) is a polynomial if and only if \(x(\phi '(x)+2\beta _{n}(x))\) is a polynomial. On the one hand, if \(\phi \) is a polynomial, then \(x\phi '(x)\) is a polynomial. By the above claim, \(x\beta _{n}(x)\) is a polynomial, and therefore \(x(\phi '(x)+2\beta _{n}(x))\) is a polynomial. Conversely, if \(x(\phi '(x)+2\beta _{n}(x))\) is a polynomial, then \(x\phi '(x)\) is a polynomial. Since \(\phi (x)\) is an analytic real function, an argument similar to the one used to prove that F(x) is a polynomial shows that \(\phi \) is a polynomial, as desired. \(\square \)

In the case \(N=1\), we have the weight \(w_1(x)=e^{-\phi (x)}x^{\nu }\). We denote by \((p(x,n))_n\) the sequence of orthonormal polynomials with respect to \(w_1(x)\). Let us define the sequence \(\beta _n(x)\) as in (1.9). M. Ismail proved in [16] that

$$\begin{aligned} \frac{\beta _n(x)}{a_n}= & {} \frac{\nu }{x} \int _0^{\infty } \frac{p_n(y)p_{n-1}(y)}{y} w_1(y) \text {d}y + \psi _n(x)\nonumber \\ {}= & {} \frac{\nu }{x} \ p_{n-1}(0) \lambda _n + \psi _n(x), \end{aligned}$$
(4.14)

with

$$\begin{aligned} \psi _n(x)= & {} \int _0^{\infty } \frac{\phi '(x)-\phi '(y)}{x-y} p_n(y)p_{n-1}(y)w_1(y)\text {d}y, \quad \lambda _n \nonumber \\ {}= & {} \int _0^{\infty } \frac{p_{n-1}(y)}{y} w_1(y)\text {d}y \end{aligned}$$
(4.15)

and \(a_n\) is the coefficient given in (1.8). We have the following lemma.

Lemma 4.15

Let \(\phi (x)\) and \(x\beta _n(x)\) be analytic real functions supported on \([0,\infty )\). If \(\phi \) is a polynomial, then \(x\beta _{n}(x)\) is a polynomial.

Proof

We have that \(x\beta _n(x)= a_n \nu p_{n-1}(0) \lambda _n + a_n x \psi _n(x)\), so it is enough to show that \(x\psi _n(x)\) is a polynomial.

Thus, if \(\phi (x)=\sum _{k=0}^{m} a_k x^{k}\), then \(\phi '(x)=\sum _{k=1}^{m} k a_k x^{k-1}\), and we compute

$$\begin{aligned} x\psi _n(x)= & {} x\int _0^{\infty } \frac{\phi '(x)-\phi '(y)}{x-y} p_n(y)p_{n-1}(y)w_1(y)\text {d}y\\= & {} x \int _0^{\infty } \sum _{k=1}^{m} ka_k \tfrac{x^{k-1}-y^{k-1}}{x-y} p_n(y) p_{n-1}(y) w_1(y) \text {d}y\\= & {} x \int _0^{\infty } \sum _{k=1}^{m} ka_k \big ( x^{k-2} + y x^{k-3} + \cdots + y^{k-3} x + y^{k-2}\big )\\{} & {} p_n(y) p_{n-1}(y) w_1(y) \text {d}y\\= & {} \sum _{k=1}^{m} \sum _{j=0}^{k-2} k a_k \Big ( \int _0^{\infty } y^{j} p_n(y) p_{n-1}(y) w_1(y) \text {d}y \Big ) x^{k-1-j}. \end{aligned}$$

Then, \(x\psi _n(x)\) is a polynomial and therefore \(x\beta _n(x)\) is a polynomial, as desired. \(\square \)
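The key point of the proof is that, for a polynomial \(\phi \), the divided difference \(\frac{\phi '(x)-\phi '(y)}{x-y}\) is a polynomial in x and y, so that integrating against \(p_n p_{n-1} w_1\) produces a finite sum of moments times powers of x. A quick sympy check with a sample polynomial (an arbitrary choice):

```python
import sympy as sp

x, y = sp.symbols('x y')
phi = x**4 + 2*x**2 + 5                   # sample polynomial
dphi = sp.diff(phi, x)
q = sp.cancel((dphi - dphi.subs(x, y)) / (x - y))   # divided difference
assert q.is_polynomial(x, y)
```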

We have the following theorem.

Theorem 4.16

Let \(\phi (x)\) and \(x\beta _n(x)\) be analytic real functions supported on \([0,\infty )\). If \(\phi \) is a polynomial, then the Lie algebra generated by \(xL_{1,n}\) and \(xL_{2,n}\) is finite dimensional. Moreover, if this Lie algebra is finite dimensional, then \(x(\phi '(x)+2\beta _{n}(x))\) is a polynomial.

Proof

Notice that if \(N=1\) in (4.12), then \({\mathcal {D}}_{1,n}=xL_{1,n}\) and \({\mathcal {D}}_{2,n}=xL_{2,n} + 2+\nu \), where

$$\begin{aligned} L_{1,n}=\partial _x + \beta _{n}(x), \qquad L_{2,n}=-\partial _x + \beta _{n}(x)+\phi '(x) \end{aligned}$$

as in [16], and \(W^{(\nu )}_{\phi }(x)= e^{-\phi (x)} x^{\nu }=w_{1}(x)\). In this case, we can observe that the Lie algebra generated by \({\mathcal {D}}_{1,n}\), \({\mathcal {D}}_{2,n}\) and 1 is isomorphic to the Lie algebra generated by \(xL_{1,n}, xL_{2,n}\) and 1.

Now, if \(\phi (x)\) is a polynomial, then \(x\beta _n(x)\) is a polynomial by Lemma 4.15, and hence \(x(\phi '(x)+2\beta _{n}(x))\) is a polynomial. Thus, Proposition 4.14 implies that the Lie algebra generated by \({\mathcal {D}}_{1,n}\) and \({\mathcal {D}}_{2,n}\) is finite dimensional, and hence so is the Lie algebra generated by \(xL_{1,n}\) and \(xL_{2,n}\).

Now if the Lie algebra generated by \(xL_{1,n}\) and \(xL_{2,n}\) is finite dimensional, then the Lie algebra generated by \({\mathcal {D}}_{1,n}\) and \({\mathcal {D}}_{2,n}\) is finite dimensional. Thus, Proposition 4.14 implies that \(x(\phi '(x)+2\beta _{n}(x))\) is a polynomial, as asserted. \(\square \)

In the context of orthogonal polynomials, we are interested in differential operators \({\mathcal {D}}\) with \({\mathcal {D}} \in {\mathcal {F}}_R(P)\). In particular, we want to solve the problem proposed by M. Ismail in the case that \(xL_{1,n}, xL_{2,n} \in {\mathcal {F}}_R(P)\). The following corollary is an answer to this problem.

Corollary 4.17

Let \(\phi (x)\) and \(x\beta _n(x)\) be analytic real functions supported on \([0,\infty )\). If \(xL_{1,n}, xL_{2,n} \in {\mathcal {F}}_R(P)\), then the Lie algebra generated by \(xL_{1,n}\) and \(xL_{2,n}\) is finite dimensional if and only if \(\phi (x)\) is a polynomial.

Proof

By Theorem 4.16, if \(\phi (x)\) is a polynomial, then the Lie algebra generated by \(xL_{1,n}\) and \(xL_{2,n}\) is finite dimensional.

Conversely, if the Lie algebra generated by \(xL_{1,n}\) and \(xL_{2,n}\) is finite dimensional, then the Lie algebra generated by \({\mathcal {D}}_{1,n}\) and \({\mathcal {D}}_{2,n}\) is finite dimensional, and by Proposition 4.14, we have that \(\phi (x)\) is a polynomial, as desired. \(\square \)

5 The case \(\phi (x)=x\)

In this section, we study in detail the Lie algebra associated with the Laguerre type case \(\phi (x) = x\). In this case, by Corollary 3.7, we have that \({\mathcal {D}}\) and \({\mathcal {D}}^{\dagger }\) belong to the right Fourier algebra \({\mathcal {F}}_R(P)\), which means there exist M and \(M^{\dagger }\) in \({\mathcal {F}}_L(P)\) such that \(P \cdot {\mathcal {D}} = M \cdot P\) and \(P \cdot {\mathcal {D}}^{\dagger } = M^{\dagger } \cdot P\), respectively.

Following the strategy developed in [6] for a weight matrix of the form \(W(x)=e^{-v(x)} e^{xA} e^{xA^{*}}\), we introduce an extra second-order differential operator D with \((P(x,n))_n\) as its eigenfunctions. We investigate the structure of the Lie algebra generated by \({\mathcal {D}}\), \({\mathcal {D}}^{\dagger }\) and D, and compute its Casimir elements. With this information, we obtain explicit non-abelian relations for the coefficients of the three-term recurrence relation and the squared norms.

5.1 The structure of the Lie algebra

Let \(A,J\in M_N({\mathbb {C}})\) be constant matrices as in (3.1) and let \(\nu \in {\mathbb {R}}_{>0}\). When \(\phi (x)=x\), apart from the first-order differential operators \({\mathcal {D}}\) and \({\mathcal {D}}^\dagger \), we have a symmetric second-order differential operator having the matrix orthogonal polynomials as eigenfunctions. The weight matrix \(W^{(\nu )}\) is given by

$$\begin{aligned} W^{(\nu )}(x) = e^{xA} T^{(\nu )}(x) e^{xA^*},\qquad T^{(\nu )}(x) = e^{-x} \sum _{k=1}^N \delta ^{(\nu )}_k x^{\nu +k} E_{k,k}. \end{aligned}$$
(5.1)

Proposition 5.1

Let \(A,J\in M_{N}({\mathbb {C}})\) be as in (3.1) and let \(W^{(\nu )}(x)\) be as in (5.1). Then, the first-order differential operators

$$\begin{aligned} {\mathcal {D}} = \partial _x x + x(A-1), \quad {\mathcal {D}}^\dagger = -\partial _x x - (1+\nu +J), \end{aligned}$$

are mutually adjoint with respect to \(W^{(\nu )}\). Moreover, if \(M = \varphi ^{-1}({\mathcal {D}})\) and \(M^\dagger = \varphi ^{-1}({\mathcal {D}}^\dagger )\), then

$$\begin{aligned} \begin{aligned} M&= (A-1)\delta -(n+1+\nu )-{\mathcal {H}}(n)J{\mathcal {H}}^{-1}(n),\\ M^\dagger&= -(n+\nu +J+1) + {\mathcal {H}}(n)(A-1)^*{\mathcal {H}}^{-1}(n-1) \delta ^{-1}. \end{aligned} \end{aligned}$$
(5.2)

Proof

By taking \(\phi (x)=x\) in Proposition 3.6, the first-order differential operators \({\mathcal {D}}\) and \({\mathcal {D}}^\dagger \) are given by

$$\begin{aligned} {\mathcal {D}} = \partial _x x + x(A-1), \quad {\mathcal {D}}^\dagger = -\partial _x x - (1+\nu +J), \end{aligned}$$

which are mutually adjoint by Proposition 3.6. By hypothesis, the corresponding elements in the left Fourier algebra are \(M = \varphi ^{-1}({\mathcal {D}})\) and \(M^\dagger = \varphi ^{-1}({\mathcal {D}}^\dagger )\). Now, by taking \(\phi (x)= x\) in Corollary 3.3, we obtain that

$$\begin{aligned} M(n)=A_0(n) + (A-1) \delta . \end{aligned}$$

By Eqs. (2.4) and (2.5), if \(M^{\dagger }(n)=\sum _{j=-l}^{k} A^{\dagger }_j(n) \delta ^{j}\), then

$$\begin{aligned} A^{\dagger }_0(n)={\mathcal {H}}(n) A^{*}_0(n)({\mathcal {H}}(n))^{-1}, \quad A^{\dagger }_{-1}(n)={\mathcal {H}}(n) A^{*}_1(n-1)({\mathcal {H}}(n))^{-1}, \quad A^{\dagger }_j(n)=0 \end{aligned}$$

for \(j \ne 0,-1\). Hence, we have that

$$\begin{aligned} M^{\dagger }(n)=A^{\dagger }_0(n) + A^{\dagger }_{-1}(n) \delta ^{-1}. \end{aligned}$$

From the relation \(P\cdot {\mathcal {D}}^{\dagger }=M^{\dagger }\cdot P\), with \({\mathcal {D}}^{\dagger }=-\partial _x x - (1+\nu +J)\), and \(P_n(x)=x^{n} + X(n) x^{n-1} + l.o.t\), we obtain

$$\begin{aligned} (P\cdot {\mathcal {D}}^{\dagger }) (x,n)&= - (n+1+\nu +J) x^{n} - n X(n) x^{n-1}+ l.o.t.\\ (M^{\dagger } \cdot P)(x,n)&= {\mathcal {H}}(n) (A-1)^{*}({\mathcal {H}}(n))^{-1} x^{n} \\&\quad + {\mathcal {H}}(n) (A-1)^{*}({\mathcal {H}}(n))^{-1} (X(n)+1) x^{n-1} + l.o.t. \end{aligned}$$

Comparing the coefficients of degrees n and \(n-1\), we obtain

$$\begin{aligned} \begin{aligned} M&= (A-1)\delta -(n+1+\nu )-{\mathcal {H}}(n)J{\mathcal {H}}^{-1}(n),\\ M^\dagger&= -(n+\nu +J+1) + {\mathcal {H}}(n)(A-1)^*{\mathcal {H}}^{-1}(n-1) \delta ^{-1}, \end{aligned} \end{aligned}$$

as desired. \(\square \)

The weight matrix \(W^{(\nu )}_{\phi }\) admits a symmetric second-order differential operator. By [18, Proposition 4.3], after conjugation by a lower triangular matrix, we obtain that the operator

$$\begin{aligned} D=\partial _{x}^2 x +\partial _x ( (A-1)x +1+\nu +J) +A\nu + JA -J \end{aligned}$$

is symmetric with respect to the weight \(W^{(\nu )}_{\phi }\). Moreover, we have

$$\begin{aligned} P(x,n) \cdot D = \Gamma \cdot P(x,n) \quad \text {where} \quad \Gamma (n)=A(n+\nu +1+J)-n-J. \end{aligned}$$

By direct computation, we verify that the following relations hold:

$$\begin{aligned}&[{\mathcal {D}},x] = -x,\qquad [{\mathcal {D}}^\dagger ,x]=x,\qquad [{\mathcal {D}},{\mathcal {D}}^\dagger ] = x, \qquad [D,x]=-{\mathcal {D}}+{\mathcal {D}}^{\dagger }, \\&[{\mathcal {D}},D]= -{\mathcal {D}}+ D - (1+\nu ), \qquad [{\mathcal {D}}^{\dagger },D]= {\mathcal {D}}^{\dagger }- D+ (1+\nu ). \end{aligned}$$
(5.3)
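The relations (5.3) can be checked mechanically. Below is a sympy sketch in the right-action convention \(P\cdot {\mathcal {D}}=\sum _j \partial _x^j(P)\,F_j(x)\), with hypothetical \(2\times 2\) matrices A and J chosen only so that \([J,A]=A\) (the relation noted in Remark 5.3); they are stand-ins for the matrices of (3.1):

```python
import sympy as sp

x, nu = sp.symbols('x nu')
I2 = sp.eye(2)
# Hypothetical 2x2 matrices satisfying [J, A] = A:
A = sp.Matrix([[0, 0], [1, 0]])
J = sp.diag(1, 2)
assert J*A - A*J == A

# Generic matrix valued function F(x); operators act on the right,
# so coefficient matrices multiply from the right.
F = sp.Matrix(2, 2, lambda i, j: sp.Function('f%d%d' % (i, j))(x))

D1 = lambda G: G.diff(x)*x + G*x*(A - I2)                        # cal{D}
D2 = lambda G: -G.diff(x)*x - G*((1 + nu)*I2 + J)                # cal{D}^dagger
D0 = lambda G: (G.diff(x, 2)*x + G.diff(x)*((A - I2)*x + (1 + nu)*I2 + J)
                + G*(A*nu + J*A - J))                            # D
X = lambda G: G*x                                                # multiplication by x

def br(P, Q, G):
    # commutator [P, Q] in the right-action convention: F.(PQ) = (F.P).Q
    return (Q(P(G)) - P(Q(G))).expand()

assert br(D1, X, F) == (-X(F)).expand()                          # [D1, x] = -x
assert br(D2, X, F) == X(F).expand()                             # [D2, x] = x
assert br(D1, D2, F) == X(F).expand()                            # [D1, D2] = x
assert br(D0, X, F) == (-D1(F) + D2(F)).expand()                 # [D, x] = -D1 + D2
assert br(D1, D0, F) == (-D1(F) + D0(F) - (1 + nu)*F).expand()   # [D1, D]
assert br(D2, D0, F) == (D2(F) - D0(F) + (1 + nu)*F).expand()    # [D2, D]
```

The only property of A and J that enters these identities is \([J,A]=A\); any pair with this property passes the same checks.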

Let \({\mathfrak {a}}\) be the Lie algebra with generators \(\{x_1,x_2,x_3,x_4,x_5\}\) and brackets

$$\begin{aligned}&[x_1,x_2]=-x_4,\quad [x_{1},x_4]=-x_4,\quad [x_2,x_4]=x_4, \quad [x_3,x_4]=-x_1+x_2, \\&[x_1,x_3]= -x_2+ x_3 +x_4 - (1+\nu )x_5 \quad \text {and} \quad [x_2,x_3]= x_1 - x_3 -x_4 + (1+\nu )x_5. \end{aligned}$$

By means of the identification

$$\begin{aligned} x_1\longmapsto {\mathcal {D}}+x,\quad x_2\longmapsto {\mathcal {D}}^{\dagger }+x, \quad x_3\longmapsto D,\quad x_4\longmapsto x, \quad x_5 \longmapsto 1, \end{aligned}$$
(5.4)

we get that the Lie algebra generated by \(\{{\mathcal {D}}, {\mathcal {D}}^\dagger , D, x, I\}\) is isomorphic with \({\mathfrak {a}}\).

We recall that a Lie algebra \({\mathfrak {a}}\) is called reductive if its radical is equal to its center. We have the following structure result.

Proposition 5.2

The Lie algebra \({\mathfrak {a}}\) defined as above is a 5-dimensional reductive algebra with a center of dimension two, given by \({\mathcal {Z}}_{{\mathfrak {a}}}=\langle x_{1}+x_2- x_4,x_5\rangle \). Moreover, \({\mathfrak {a}}=[{\mathfrak {a}},{\mathfrak {a}}]\oplus {\mathcal {Z}}_{{\mathfrak {a}}}\) with \([{\mathfrak {a}},{\mathfrak {a}}]\) isomorphic to \(\mathfrak {sl}(2,{\mathbb {C}})\). In particular,

$$\begin{aligned} {\mathcal {C}}_1&= -4x_4(x_1-x_3-x_4+(1+\nu )x_5)+(x_4-x_1+x_2)^2, \\ {\mathcal {C}}_2&= x_{1}+x_2- x_4, \qquad {\mathcal {C}}_3=x_5 \end{aligned}$$

are Casimir elements of \({\mathfrak {a}}\).

Proof

From the general theory (see, e.g., [14, 17]), the radical of \({\mathfrak {a}}\) can be computed from its Killing form. More precisely, the radical is the orthogonal complement of the derived algebra \({\mathfrak {a}}'=[{\mathfrak {a}},{\mathfrak {a}}]\) with respect to the Killing form

$$\begin{aligned} \textrm{Rad}({\mathfrak {a}})=\{x\in {\mathfrak {a}}: \, \kappa (x,y)=0, \, \text {for all} \, y \in {\mathfrak {a}}'\}. \end{aligned}$$

In our case, the derived Lie algebra \({\mathfrak {a}}'=[{\mathfrak {a}},{\mathfrak {a}}]\) is given explicitly by

$$\begin{aligned} {[}{\mathfrak {a}},{\mathfrak {a}}]= \langle \{x_4, \, x_1 -x_2,\, x_1-x_3 -x_4 +(1+\nu ) x_5 \}\rangle . \end{aligned}$$

To compute the Killing form, it is enough to compute Ad(x) for a generic \(x\in {\mathfrak {a}}\) and Ad(a) for \(a\in \{x_4, \, x_1 -x_2,\, x_1-x_3 -x_4 +(1+\nu ) x_5 \}\).

In this case, if \(x=\sum _{i=1}^{5} a_i x_i\) and \({\mathcal {B}}\) denotes the basis \(\{x_1,\ldots ,x_5\}\), then after some computation, we obtain that

$$\begin{aligned} {[}Ad(x)]_{{\mathcal {B}}}=\begin{pmatrix} 0 &{} -a_3 &{} a_2+a_4 &{} -a_3 &{} 0 \\ a_3 &{} 0 &{} -a_1-a_4 &{} a_3 &{} 0 \\ -a_3 &{} a_3 &{} a_1-a_2 &{} 0 &{} 0 \\ a_2-a_3+a_4 &{} -a_1+a_3-a_4 &{} a_1-a_2 &{} -a_1+a_2 &{} 0 \\ (1+\nu )a_3 &{} -(1+\nu )a_3 &{} (1+\nu )(-a_1+a_2) &{} 0 &{} 0 \end{pmatrix} \end{aligned}$$

and hence

$$\begin{aligned}&[Ad(x_4)]_{{\mathcal {B}}}=\begin{pmatrix} 0 &{} 0 &{} 1 &{} 0 &{} 0 \\ 0 &{} 0 &{} -1 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 1 &{} -1 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 \end{pmatrix} \quad [Ad(x_1-x_2)]_{{\mathcal {B}}}=\begin{pmatrix} 0 &{} 0 &{} -1 &{} 0 &{} 0 \\ 0 &{} 0 &{} -1 &{} 0 &{} 0 \\ 0 &{} 0 &{} 2 &{} 0 &{} 0 \\ -1 &{} -1 &{} 2 &{} -2 &{} 0 \\ 0 &{} 0 &{} -2(1+\nu ) &{} 0 &{} 0 \end{pmatrix},\\&[Ad(x_1-x_3-x_4+(1+\nu )x_5)]_{{\mathcal {B}}}=\begin{pmatrix} 0 &{} 1 &{} -1 &{} 1 &{} 0 \\ -1 &{} 0 &{} 0 &{} -1 &{} 0 \\ 1 &{} -1 &{} 1 &{} 0 &{} 0 \\ 0 &{} -1 &{} 1 &{} -1 &{} 0 \\ -(1+\nu ) &{} (1+\nu ) &{} -(1+\nu ) &{} 0 &{} 0 \end{pmatrix}. \end{aligned}$$

In order to compute the Killing form, we need the traces of products of these matrices. In this case, we have that

$$\begin{aligned}&\textrm{tr}([Ad(x)]_{{\mathcal {B}}} [Ad(x_4)]_{{\mathcal {B}}})= -4a_3,\qquad \textrm{tr}([Ad(x)]_{{\mathcal {B}}} [Ad(x_1-x_2)]_{{\mathcal {B}}})=4(a_1-a_2), \\&\textrm{tr}([Ad(x)]_{{\mathcal {B}}} [Ad(x_1-x_3-x_4+(1+\nu )x_5)]_{{\mathcal {B}}})=4(a_1+a_4). \end{aligned}$$

We get

$$\begin{aligned} {\textrm{Rad}}({\mathfrak {a}})=\langle \{x_{1}+x_2- x_4,x_5 \}\rangle , \end{aligned}$$

since \(x_{1}+x_2- x_4\) and \(x_5\) belong to the center of \({\mathfrak {a}}\); hence \(\textrm{Rad}({\mathfrak {a}})={\mathcal {Z}}_{{\mathfrak {a}}}\), and so the Lie algebra \({\mathfrak {a}}\) is reductive. Since \({\mathfrak {a}}\) is reductive, we obtain

$$\begin{aligned} {\mathfrak {a}}=[{\mathfrak {a}},{\mathfrak {a}}] \oplus {\mathcal {Z}}_{{\mathfrak {a}}}. \end{aligned}$$

As computed above, \([{\mathfrak {a}},{\mathfrak {a}}]\) is spanned by \(x_4\), \(x_1 -x_2\), and \(x_1-x_3 -x_4 +(1+\nu ) x_5\).

Now, by taking \( a_1=x_4\), \(a_2= x_1-x_2\) and \(a_3=x_1-x_3-x_4+(1+\nu )x_5\), we obtain that

$$\begin{aligned} {[}a_1,a_2]=-2a_1, \quad [a_1,a_3]=a_1-a_2, \quad [a_2,a_3]=-a_3 \end{aligned}$$

and so if we take \({\hat{a}}_2=a_1-a_2\) and \({\hat{a}}_3=-a_3\) we have

$$\begin{aligned} {[}a_1,{\hat{a}}_2]=2a_1, \quad [a_1,{\hat{a}}_3]=-{\hat{a}}_2, \quad [{\hat{a}}_2,{\hat{a}}_3]=2{\hat{a}}_3, \end{aligned}$$

and so \([{\mathfrak {a}},{\mathfrak {a}}] \) is isomorphic to \(\mathfrak {sl}(2,{\mathbb {C}})\) by considering the identification \(a_1\mapsto e_1\), \( {\hat{a}}_2 \mapsto e_2\), and \({\hat{a}}_3 \mapsto e_3\). In particular, the Casimir element of \(\mathfrak {sl}(2,{\mathbb {C}})\) given by \(4e_1e_3+e_{2}^2\) induces a Casimir element of \([{\mathfrak {a}},{\mathfrak {a}}]\):

$$\begin{aligned} {\mathcal {C}}_{[{\mathfrak {a}},{\mathfrak {a}}]}= -4x_4(x_1-x_3-x_4+(1+\nu )x_5)+(x_4-x_1+x_2)^2. \end{aligned}$$

By taking into account that \({\mathcal {C}}_{[{\mathfrak {a}},{\mathfrak {a}}]}\) commutes with the central elements of \({\mathfrak {a}}\), we obtain that \({\mathcal {C}}_{[{\mathfrak {a}},{\mathfrak {a}}]}\) commutes with all of the elements of \({\mathfrak {a}}\) and so is a Casimir element of \({\mathfrak {a}}\). Finally, \({\mathcal {C}}_2= x_{1}+x_2- x_4\) and \({\mathcal {C}}_3=x_5\) are central elements and hence are Casimir elements of \({\mathfrak {a}}\). \(\square \)
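The structure constants, the centrality claims, and the Killing-form traces used in this proof can be checked by a routine computation; a sympy sketch (the basis is ordered \(x_1,\ldots ,x_5\)):

```python
import sympy as sp

nu = sp.Symbol('nu')
N = 5  # ordered basis (x1, x2, x3, x4, x5)

def vec(**c):  # e.g. vec(x4=-1) represents -x4
    v = sp.zeros(N, 1)
    for name, coef in c.items():
        v[int(name[1]) - 1] += coef
    return v

zero = sp.zeros(N, 1)
S = {}
S[(0, 1)] = vec(x4=-1)                                  # [x1, x2] = -x4
S[(0, 3)] = vec(x4=-1)                                  # [x1, x4] = -x4
S[(1, 3)] = vec(x4=1)                                   # [x2, x4] = x4
S[(2, 3)] = vec(x1=-1, x2=1)                            # [x3, x4] = -x1 + x2
S[(0, 2)] = vec(x2=-1, x3=1, x4=1, x5=-(1 + nu))        # [x1, x3]
S[(1, 2)] = vec(x1=1, x3=-1, x4=-1, x5=1 + nu)          # [x2, x3]
for (i, j) in list(S):
    S[(j, i)] = -S[(i, j)]

def bracket(u, v):  # extend bilinearly
    w = sp.zeros(N, 1)
    for i in range(N):
        for j in range(N):
            if (i, j) in S:
                w += u[i] * v[j] * S[(i, j)]
    return w.expand()

E = sp.eye(N)
ad = lambda u: sp.Matrix.hstack(*[bracket(u, E[:, j]) for j in range(N)])
kappa = lambda u, v: sp.expand((ad(u) * ad(v)).trace())

# x1 + x2 - x4 and x5 are central:
for z in (vec(x1=1, x2=1, x4=-1), vec(x5=1)):
    assert all(bracket(z, E[:, j]) == zero for j in range(N))

# Killing-form traces from the proof, for generic x = sum a_i x_i:
a = sp.symbols('a1:6')
xx = sp.Matrix(a)
assert kappa(xx, E[:, 3]) == -4*a[2]
assert kappa(xx, vec(x1=1, x2=-1)) == sp.expand(4*(a[0] - a[1]))
assert kappa(xx, vec(x1=1, x3=-1, x4=-1, x5=1 + nu)) == sp.expand(4*(a[0] + a[3]))
```

The orthogonal complement of the derived algebra with respect to these traces is then read off exactly as in the proof.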

Remark 5.3

Notice that

$$\begin{aligned} {\mathcal {C}}={\mathcal {C}}_2+ (1+\nu ){\mathcal {C}}_3 = {\mathcal {D}}+{\mathcal {D}}^{\dagger }+x+(1+\nu ) \end{aligned}$$

is also a Casimir invariant of \({\mathfrak {a}}\). Hence, under the representation given by \({\mathcal {D}},{\mathcal {D}}^{\dagger }, D,x,1\), the image of this Casimir satisfies

$$\begin{aligned} {\mathcal {C}}= Ax-J. \end{aligned}$$

Thus, in terms of \(M,M^{\dagger }\) and L, we obtain that

$$\begin{aligned} \varphi ^{-1}({\mathcal {C}})=M+M^{\dagger }+L+(1+\nu ). \end{aligned}$$
(5.5)

On the other hand, it can be verified that

$$\begin{aligned} {\mathcal {C}}_{[{\mathfrak {a}},{\mathfrak {a}}]}=\dfrac{1}{8}(A^{2}x^{2}+\nu ^{2}+J^{2}+Ax-2\nu Ax-2xJA+2\nu J -1). \end{aligned}$$

Thus, \({\mathcal {C}}'=A^{2}x^{2}-2xJA+J+J^{2}\) is also a Casimir, since

$$\begin{aligned} {\mathcal {C}}'=8{\mathcal {C}}_{[{\mathfrak {a}},{\mathfrak {a}}]}-(1-2\nu ){\mathcal {C}} - (\nu ^2-1){\mathcal {C}}_3. \end{aligned}$$

Moreover, notice that the relation \([J,A]=A\) implies that \({\mathcal {C}}'={\mathcal {C}}^2-{\mathcal {C}}\), and so we obtain that

$$\begin{aligned} {\mathcal {C}}_{[{\mathfrak {a}},{\mathfrak {a}}]}=\tfrac{1}{8}\Big ( {\mathcal {C}}^2-2\nu {\mathcal {C}} + (\nu ^2-1){\mathcal {C}}_3\Big ). \end{aligned}$$

Therefore, in this representation, the Casimir element corresponding to \({\mathcal {C}}_{[{\mathfrak {a}},{\mathfrak {a}}]}\) does not give more information than \({\mathcal {C}}\).
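The identities of this remark rest only on the relation \([J,A]=A\); they can be checked with any hypothetical pair of matrices having this property, for instance:

```python
import sympy as sp

x, nu = sp.symbols('x nu')
I2 = sp.eye(2)
# Hypothetical 2x2 matrices with [J, A] = A, the only relation used below:
A = sp.Matrix([[0, 0], [1, 0]])
J = sp.diag(1, 2)
assert J*A - A*J == A

C = A*x - J                          # image of the Casimir
Cp = A*A*x**2 - 2*x*J*A + J + J*J    # C'

# [J, A] = A implies C' = C^2 - C:
assert (C*C - C - Cp).expand() == sp.zeros(2, 2)

# 8*C_{[a,a]} = C^2 - 2*nu*C + (nu^2 - 1):
lhs = A*A*x**2 + (nu**2)*I2 + J*J + A*x - 2*nu*A*x - 2*x*J*A + 2*nu*J - I2
rhs = C*C - 2*nu*C + (nu**2 - 1)*I2
assert (lhs - rhs).expand() == sp.zeros(2, 2)
```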

We can also consider the Lie subalgebra generated by \(\{x_1,x_2,x_4,x_5\}\). With the identification (5.4), this corresponds to removing the operator D from the generators of \({\mathfrak {a}}\). Notice that a representation of this algebra was considered in Sect. 4 by taking \(\phi (x)=x\). In this case, we have the following structure result.

Proposition 5.4

The Lie subalgebra \({\mathfrak {a}}' = \langle x_1,x_2,x_4,x_5 \rangle \) of \({\mathfrak {a}}\) is isomorphic to \({\mathfrak {g}}_2\oplus {\mathbb {C}}^2 \) where \({\mathfrak {g}}_2\) is the 2-dimensional solvable Lie algebra with bracket \([e_1,e_2]=e_2\). In particular, \({\mathfrak {a}}'\) has no non-central Casimir invariants.

Proof

By taking the map

$$\begin{aligned} x_2\mapsto e_1, \quad x_4 \mapsto e_2, \quad x_1+x_2-x_4\mapsto e_3 \quad \text {and } \quad x_5 \mapsto e_4, \end{aligned}$$

we obtain an isomorphism from \({\mathfrak {a}}'\) onto the Lie algebra \(\langle e_1,e_2,e_3,e_4\rangle \). In this case, the only non-vanishing bracket of \(\langle e_1,e_2,e_3,e_4\rangle \) is the bracket \([e_1,e_2]=e_2\), and so \({\mathfrak {a}}'\) is isomorphic to \({\mathfrak {g}}_2\oplus {\mathbb {C}}^2\), as asserted.

The last assertion is a consequence of the fact that \({\mathfrak {g}}_2\) has no central elements. Hence, the only Casimir invariants of \({\mathfrak {a}}'\) are the central elements. \(\square \)
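The change of basis in the proof can be checked directly from the structure constants of \(\langle x_1,x_2,x_4,x_5\rangle \); a short sketch in plain Python:

```python
import itertools

# Basis order (x1, x2, x4, x5); elements are integer coefficient tuples.
def add(u, v, s=1):
    return tuple(a + s*b for a, b in zip(u, v))

X1, X2, X4, X5 = (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
Z = (0, 0, 0, 0)
# Structure constants of the subalgebra <x1, x2, x4, x5> (x5 is central):
table = {(0, 1): add(Z, X4, -1),   # [x1, x2] = -x4
         (0, 2): add(Z, X4, -1),   # [x1, x4] = -x4
         (1, 2): X4}               # [x2, x4] = x4

def bracket(u, v):  # extend bilinearly, using antisymmetry
    w = Z
    for i, j in itertools.product(range(4), range(4)):
        if (i, j) in table:
            w = add(w, table[(i, j)], u[i]*v[j])
        elif (j, i) in table:
            w = add(w, table[(j, i)], -u[i]*v[j])
    return w

# The change of basis from the proof:
e1, e2, e3, e4 = X2, X4, add(add(X1, X2), X4, -1), X5
basis = [e1, e2, e3, e4]

# The only non-vanishing bracket is [e1, e2] = e2:
assert bracket(e1, e2) == e2
for i, j in itertools.combinations(range(4), 2):
    if (i, j) != (0, 1):
        assert bracket(basis[i], basis[j]) == Z
```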

5.2 Non-abelian relations for the recurrence coefficients

The identification (5.4) maps the Lie algebra \({\mathfrak {a}}\) into a Lie subalgebra of the right Fourier algebra \({\mathcal {F}}_R(P)\), which we will still denote by \({\mathfrak {a}}\). This Lie subalgebra is isomorphic to a Lie algebra \(\varphi ^{-1}({\mathfrak {a}})\) of difference operators in the left Fourier algebra \({\mathcal {F}}_L(P)\). The goal of this subsection is to investigate the compatibility equations between the bracket relations (5.3) and the corresponding relations in \(\varphi ^{-1}({\mathfrak {a}})\). In the sequel, in order to simplify the notation, we will consider

$$\begin{aligned} B_{n}=B(n),\quad C_{n}=C(n), \quad {\mathcal {H}}_{n}={\mathcal {H}}(n), \quad \Gamma _n= \Gamma (n). \end{aligned}$$
(5.6)

The following theorem, which is a consequence of the relations between the brackets of \(M,M^{\dagger },L,\Gamma \), can be seen as an analog of [6, Eq. (22)].

Theorem 5.5

Let \(A,J\) be matrices as in (3.1). Then,

$$\begin{aligned} {[}B_n,J]&= B_n - \Gamma _n C_n + C_n \Gamma _{n-1} + \Gamma _{n+1} C_{n+1} - C_{n+1} \Gamma _n, \\ {[}C_n,J]&= 2C_n+B_n (\Gamma _nC_n - C_n \Gamma _{n-1}) - (\Gamma _n C_n - C_n \Gamma _{n-1}) B_{n-1}, \end{aligned}$$

with \(B_n,C_n,{\mathcal {H}}_n\) and \(\Gamma _n\) as in (5.6).

Proof

First we observe that the difference-differential relations discussed in the paper give rise to the following identities:

$$\begin{aligned} B_n&= [B_n,J] + {\mathcal {H}}_n (A-1)^*{\mathcal {H}}_{n-1}^{-1} - {\mathcal {H}}_{n+1} (A-1)^*{\mathcal {H}}_n^{-1}, \end{aligned}$$
(5.7)
$$\begin{aligned} 2 {\mathcal {H}}_n {\mathcal {H}}_{n-1}^{-1}&= [{\mathcal {H}}_n {\mathcal {H}}_{n-1}^{-1},J] - B_n {\mathcal {H}}_n (A-1)^*{\mathcal {H}}_{n-1}^{-1} + {\mathcal {H}}_n (A-1)^*{\mathcal {H}}_{n-1}^{-1} B_{n-1}, \end{aligned}$$
(5.8)
$$\begin{aligned}{}[\Gamma _n, C_{n}\delta ^{-1}]&= {\mathcal {H}}_{n}(A-1)^*{\mathcal {H}}_{n-1}^{-1}. \end{aligned}$$
(5.9)

Equations (5.7), (5.8) are obtained from the coefficients of \(\delta ^{0}\) and \(\delta ^{-1}\) in the bracket relation \([M^\dagger , L] = L\). On the other hand, (5.9) is obtained from the coefficient of \(\delta ^{-1}\) in the bracket \([\Gamma ,L] = -M+M^{\dagger }\).

By (5.7), we have that

$$\begin{aligned}{}[B_n,J] = B_n - {\mathcal {H}}_n (A-1)^*{\mathcal {H}}_{n-1}^{-1} + {\mathcal {H}}_{n+1} (A-1)^*{\mathcal {H}}_n^{-1}. \end{aligned}$$
(5.10)

On the other hand, by (5.9) and taking into account that \({\mathcal {H}}_{n}\in M_N({\mathbb {R}})\), we obtain that

$$\begin{aligned} {\mathcal {H}}_{n}(A-1)^*{\mathcal {H}}_{n-1}^{-1} = [\Gamma _n, C_{n}\delta ^{-1}]= \Gamma _n C_n - C_n \Gamma _{n-1}. \end{aligned}$$
(5.11)

Evaluating (5.11) at \(n+1\), we obtain

$$\begin{aligned} {\mathcal {H}}_{n+1}(A-1)^*{\mathcal {H}}_{n}^{-1} = \Gamma _{n+1} C_{n+1} - C_{n+1} \Gamma _{n}. \end{aligned}$$
(5.12)

Substituting (5.11) and (5.12) into (5.10), we get the first formula of the theorem.

For the second equality, notice that since \(C_{n}={\mathcal {H}}_{n}{\mathcal {H}}_{n-1}^{-1}\), Eq. (5.8) implies that

$$\begin{aligned} {[}C_n,J]=2C_n+B_n {\mathcal {H}}_n (A-1)^*{\mathcal {H}}_{n-1}^{-1} - {\mathcal {H}}_n (A-1)^*{\mathcal {H}}_{n-1}^{-1} B_{n-1}. \end{aligned}$$

The second formula of the theorem is obtained by substituting (5.11) into the previous equation. This completes the proof. \(\square \)
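The operator identities underlying this proof can be sanity-checked in the scalar case \(N=1\), where \(A=0\), \(J=0\), and by (5.1) the weight reduces (up to a constant) to the Laguerre weight \(e^{-x}x^{\nu +1}\); the monic recurrence coefficients are then \(B_n=2n+\nu +2\), \(C_n=n(n+\nu +1)\), and \(\Gamma _n=-n\). A sympy sketch encoding a difference operator \(\sum _j A_j(n)\delta ^j\) as a dictionary \(\{j: A_j(n)\}\):

```python
import sympy as sp

n, nu = sp.symbols('n nu')

# A difference operator sum_j a_j(n) delta^j is stored as {j: a_j(n)},
# where delta shifts n -> n + 1.
def compose(P, Q):
    R = {}
    for i, a in P.items():
        for j, b in Q.items():
            R[i + j] = sp.expand(R.get(i + j, 0) + a * b.subs(n, n + i))
    return {k: v for k, v in R.items() if v != 0}

def sub(P, Q):
    R = {k: sp.expand(v) for k, v in P.items()}
    for j, b in Q.items():
        R[j] = sp.expand(R.get(j, 0) - b)
    return {k: v for k, v in R.items() if v != 0}

comm = lambda P, Q: sub(compose(P, Q), compose(Q, P))

# Scalar case N = 1 with A = 0, J = 0: monic Laguerre-type data.
B = 2*n + nu + 2                                 # B_n
C = sp.expand(n*(n + nu + 1))                    # C_n
L  = {1: sp.Integer(1), 0: B, -1: C}             # P(x, n)·x = L·P
M  = {1: sp.Integer(-1), 0: -(n + 1 + nu)}       # M from (5.2)
Md = {0: -(n + nu + 1), -1: -C}                  # M-dagger from (5.2)
G  = {0: -n}                                     # Gamma(n)

assert comm(M, L) == {k: sp.expand(-v) for k, v in L.items()}   # [M, L] = -L
assert comm(Md, L) == L                                         # [M-dagger, L] = L
assert comm(G, L) == sub(Md, M)                                 # [Gamma, L] = -M + M-dagger
```

All three bracket identities \([M,L]=-L\), \([M^{\dagger },L]=L\), and \([\Gamma ,L]=-M+M^{\dagger }\) hold on the nose in this scalar case.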