1 Introduction

There exist numerous interesting determinant evaluations whose matrix entries are well-known combinatorial number sequences (cf. [1,2,3,4,5]), for example, binomial coefficients, rising and falling factorials, as well as their reciprocals. They are proved mainly by means of generating functions and combinatorial enumeration (cf. [6, 7]), the finite difference method (cf. [8, 9]), and matrix decompositions (cf. [10,11,12,13]). The reader may refer to [14] for comprehensive coverage.

Recently, Ekhad and Zeilberger [15] considered the following matrix

$$\begin{aligned} K_n=\big [K_n(i,j)\big ]_{0\le i,j\le n}:\quad K_n(i,j)=\bigg \{\sum _{k=0}^j(-1)^k\left( {\begin{array}{c}i\\ k\end{array}}\right) \left( {\begin{array}{c}n-i\\ j-k\end{array}}\right) \bigg \}^2 \end{aligned}$$

and determined its spectrum explicitly. This motivates us to examine further three extended matrices whose entries are binomial sums containing many free parameters. In the next section, we shall evaluate the first determinant (see Theorem 1) by the coefficient extraction method. The rest of the paper will then be devoted to the other two matrices. By expressing each of them as the product of a matrix of Vandermonde type and an upper triangular one, we shall evaluate their determinants explicitly (see Theorems 2 and 8) as products whose factors are binomial sums in the \(\sigma \)-sequence. By specifying the \(\sigma \)-sequence concretely, we derive from these two determinant evaluations ten remarkable determinant identities (highlighted as propositions), involving binomial coefficients and quotients of rising factorials, as well as the orthogonal polynomials named after Hermite, Laguerre and Legendre.

2 The First Determinant

As a warm-up, we begin by evaluating an easy determinant by means of the method of extracting coefficients. Denote by \([T^k]\phi (T)\) the coefficient of \(T^k\) in the formal power series \(\phi (T)\).

For the matrix A defined by

$$\begin{aligned} A=\big [a_{i,j}\big ]_{0\le i,j\le n}: \qquad a_{i,j}=\sum _{k=0}^j(-1)^k\left( {\begin{array}{c}x+i\\ k\end{array}}\right) \left( {\begin{array}{c}y-i\\ j-k\end{array}}\right) , \end{aligned}$$

we can express its determinant as

$$\begin{aligned} \det A&=\det _{\begin{array}{c} 0\le i,j\le n \end{array}} \big [[T_j^j](1-T_j)^{x+i}(1+T_j)^{y-i}\big ]\\&=\prod _{k=0}^n[T^k_k](1-T_k)^{x}(1+T_k)^{y} \det _{\begin{array}{c} 0\le i,j\le n \end{array}} \bigg [\Big (\frac{1-T_j}{1+T_j}\Big )^i\bigg ]. \end{aligned}$$

The last determinant is of Vandermonde type and can be evaluated as

$$\begin{aligned}\det _{\begin{array}{c} 0\le i,j\le n \end{array}} \bigg [\Big (\frac{1-T_j}{1+T_j}\Big )^i\bigg ]&=\prod _{0\le i<j\le n} \bigg \{\frac{1-T_j}{1+T_j} -\frac{1-T_i}{1+T_i}\bigg \}\\&=\prod _{0\le i<j\le n} \frac{2(T_i-T_j)}{(1+T_i)(1+T_j)}. \end{aligned}$$

Observing further that

$$\begin{aligned}\prod _{k=0}^n[T^k_k] \prod _{0\le i<j\le n}(T_i-T_j) =(-1)^{\left( {\begin{array}{c}n+1\\ 2\end{array}}\right) }, \end{aligned}$$

we arrive at the following closed formula.

Theorem 1

(Determinant evaluation)

$$\begin{aligned} \det A=(-2)^{\left( {\begin{array}{c}n+1\\ 2\end{array}}\right) }. \end{aligned}$$

Remarkably, \(\det A\) is independent of both variables x and y.
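This independence can be checked symbolically for small n, for instance with SymPy. The snippet below is a verification sketch, not part of the proof; the helper `binom` (a name introduced here for illustration) implements the binomial coefficient with a symbolic upper argument.

```python
# Symbolic sanity check of Theorem 1 for small n:
# det A should equal (-2)^C(n+1,2), independently of x and y.
from sympy import symbols, Matrix, factorial, binomial, expand

x, y = symbols('x y')

def binom(u, k):
    # binomial coefficient C(u, k) with a (possibly symbolic) upper argument
    r = 1
    for t in range(k):
        r *= (u - t)
    return r / factorial(k)

def det_A(n):
    A = Matrix(n + 1, n + 1, lambda i, j:
               sum((-1)**k * binom(x + i, k) * binom(y - i, j - k)
                   for k in range(j + 1)))
    return expand(A.det())

for n in range(4):
    # all x- and y-terms cancel, leaving the constant (-2)^C(n+1,2)
    assert det_A(n) == (-2)**binomial(n + 1, 2)
```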

3 The Second Determinant

In this section and the next one, we shall evaluate two new determinants where the method of extracting coefficients does not work.

For indeterminates \(\rho ,\lambda \), \(\{x_i\}^n_{i=0}\) and an arbitrary sequence \(\sigma (k,j)\), consider the matrix B defined by

$$\begin{aligned}B=\big [b_{i,j}\big ]_{0\le i,j\le n}:\qquad b_{i,j}=\sum _{k=0}^j\left( {\begin{array}{c}x_i\\ k\end{array}}\right) \left( {\begin{array}{c}\rho -\lambda x_i\\ j-k\end{array}}\right) \sigma (k,j).\end{aligned}$$

Rewrite \(b_{i,j}\) as a polynomial in \(x_i\):

$$\begin{aligned}b_{i,j}=\sum _{k=0}^jx_i^{k}\mathcal {U}(k,j).\end{aligned}$$

Then, it is routine to check that the matrix \(\big [\mathcal {U}(k,j)\big ]_{0\le k,j\le n}\) is upper triangular with its diagonal entries equal to

$$\begin{aligned}\mathcal {U}(j,j)=\frac{(-1)^j}{j!}\varPhi (j), \quad \text {where}\quad \varPhi (j)=\sum _{k=0}^j(-1)^k \left( {\begin{array}{c}j\\ k\end{array}}\right) \lambda ^{j-k}\sigma (k,j).\end{aligned}$$

According to the matrix decomposition

$$\begin{aligned}B=\big [x_i^k\big ]_{0\le i,k\le n} \times \big [\mathcal {U}(k,j)\big ]_{0\le k,j\le n},\end{aligned}$$

we get the following reduction formula.

Theorem 2

(Determinant evaluation)

$$\begin{aligned}\det B=\prod _{0\le i<j\le n}(x_i-x_j) \times \prod _{m=0}^n\frac{\varPhi (m)}{m!}.\end{aligned}$$

By specifying \(\sigma (k,m)\) concretely so that the sums involving it can be evaluated in closed form, we can derive the following five determinant identities.

Firstly, for \(\sigma (k,m)=y^k\), we have, by means of the binomial theorem

$$\begin{aligned}\varPhi (m)=\sum _{k=0}^m(-1)^{k}\left( {\begin{array}{c}m\\ k\end{array}}\right) y^k\lambda ^{m-k}=(\lambda -y)^m.\end{aligned}$$

Proposition 3

(Determinant evaluation)

$$\begin{aligned}\det _{\begin{array}{c} 0\le i,j\le n \end{array}} \bigg [\sum _{k=0}^j\left( {\begin{array}{c}x_i\\ k\end{array}}\right) \left( {\begin{array}{c}\rho -\lambda x_i\\ j-k\end{array}}\right) y^k\bigg ] =\prod _{0\le i<j\le n}(x_i-x_j) \times \frac{(\lambda -y)^{\left( {\begin{array}{c}n+1\\ 2\end{array}}\right) }}{\prod _{m=0}^nm!}.\end{aligned}$$
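Since both sides of Proposition 3 are polynomials in the parameters, the identity can be spot-checked exactly at rational sample points; the specific values below are arbitrary illustrative choices (a sketch, not part of the proof).

```python
# Exact check of Proposition 3 at arbitrary rational sample points.
from sympy import Rational, Matrix, factorial, prod

def binom(u, k):
    # binomial coefficient C(u, k) with a rational upper argument
    r = 1
    for t in range(k):
        r *= (u - t)
    return r / factorial(k)

def prop3_holds(xs, rho, lam, y):
    n = len(xs) - 1
    B = Matrix(n + 1, n + 1, lambda i, j:
               sum(binom(xs[i], k) * binom(rho - lam * xs[i], j - k) * y**k
                   for k in range(j + 1)))
    vdm = prod(xs[i] - xs[j]
               for i in range(n + 1) for j in range(i + 1, n + 1))
    rhs = vdm * (lam - y)**(n * (n + 1) // 2) \
          / prod(factorial(m) for m in range(n + 1))
    return B.det() == rhs

assert prop3_holds([Rational(1, 2), 2, Rational(-3, 4)],
                   rho=3, lam=Rational(1, 3), y=5)
```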

Secondly, take \(\sigma (k,m)=\lambda ^{k-m}Q_m(k)\), where \(Q_m(k)\) is a polynomial of degree m in k with the leading coefficient equal to \(\beta _m\). By evaluating the finite differences, we have

$$\begin{aligned}\varPhi (m)=\sum _{k=0}^m(-1)^{k}\left( {\begin{array}{c}m\\ k\end{array}}\right) Q_m(k) =m!(-1)^m\beta _m.\end{aligned}$$

Proposition 4

(Determinant evaluation)

$$\begin{aligned}\det _{\begin{array}{c} 0\le i,j\le n \end{array}} \bigg [\sum _{k=0}^j\lambda ^{k-j}\left( {\begin{array}{c}x_i\\ k\end{array}}\right) \left( {\begin{array}{c}\rho -\lambda x_i\\ j-k\end{array}}\right) Q_j(k)\bigg ] =\prod _{0\le i<j\le n}(x_j-x_i) \times \prod _{m=0}^n\beta _m.\end{aligned}$$

In particular, this proposition contains the two special examples:

$$\begin{aligned}\det _{\begin{array}{c} 0\le i,j\le n \end{array}}&\bigg [\sum _{k=0}^j\lambda ^{k-j}\left( {\begin{array}{c}x_i\\ k\end{array}}\right) \left( {\begin{array}{c}\rho -\lambda x_i\\ j-k\end{array}}\right) (a+bk)^j\bigg ] =b^{\left( {\begin{array}{c}n+1\\ 2\end{array}}\right) }\times \prod _{0\le i<j\le n}(x_j-x_i),\\ \det _{\begin{array}{c} 0\le i,j\le n \end{array}}&\bigg [\sum _{k=0}^j\lambda ^{k-j}\left( {\begin{array}{c}x_i\\ k\end{array}}\right) \left( {\begin{array}{c}\rho -\lambda x_i\\ j-k\end{array}}\right) (a+bk)_j\bigg ] =b^{\left( {\begin{array}{c}n+1\\ 2\end{array}}\right) }\times \prod _{0\le i<j\le n}(x_j-x_i); \end{aligned}$$

where \((x)_k\) denotes the rising factorial defined by

$$\begin{aligned}(x)_0=1\quad \text {and}\quad (x)_k=x(x+1)\cdots (x+k-1) \quad \text {for}\quad k\in \mathbb {N}.\end{aligned}$$
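The first of these two examples can likewise be tested exactly at arbitrary rational sample points (an illustrative sketch; all parameter values are arbitrary choices).

```python
# Exact check of the first special case of Proposition 4, Q_j(k) = (a+bk)^j.
from sympy import Rational, Matrix, factorial, prod

def binom(u, k):
    # binomial coefficient C(u, k) with a rational upper argument
    r = 1
    for t in range(k):
        r *= (u - t)
    return r / factorial(k)

def example_holds(xs, rho, lam, a, b):
    n = len(xs) - 1
    M = Matrix(n + 1, n + 1, lambda i, j:
               sum(lam**(k - j) * binom(xs[i], k)
                   * binom(rho - lam * xs[i], j - k) * (a + b * k)**j
                   for k in range(j + 1)))
    vdm = prod(xs[j] - xs[i]
               for i in range(n + 1) for j in range(i + 1, n + 1))
    return M.det() == b**(n * (n + 1) // 2) * vdm

assert example_holds([Rational(1, 2), -1, 3],
                     rho=2, lam=Rational(3, 4), a=1, b=Rational(2, 5))
```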

Thirdly, take

$$\begin{aligned}\sigma (k,m)=\lambda ^{k-m}\frac{(a)_k}{(c)_k}.\end{aligned}$$

According to the Chu–Vandermonde–Gauss summation formula (see Bailey [16, §1.3], where the notation for hypergeometric series can also be found), we can evaluate

$$\begin{aligned}\varPhi (m)={_2F_1}\left[ \!\begin{array}{r}-m,~a\\ c\end{array}{\Big |1}\right] =\frac{(c-a)_m}{(c)_m}.\end{aligned}$$

Here and in the sequel (the three propositions below), the parameters a, b, c, d are complex numbers, subject only to the restriction that the expressions involving them are well defined, i.e., no zero factor appears in their denominators.

Proposition 5

(Determinant evaluation)

$$\begin{aligned}\det _{\begin{array}{c} 0\le i,j\le n \end{array}} \bigg [\sum _{k=0}^j\lambda ^{k-j} \left( {\begin{array}{c}x_i\\ k\end{array}}\right) \left( {\begin{array}{c}\rho -\lambda x_i\\ j-k\end{array}}\right) \frac{(a)_k}{(c)_k}\bigg ] =\prod _{0\le i<j\le n}(x_i-x_j) \times \prod _{m=0}^n\frac{(c-a)_m}{m!(c)_m}.\end{aligned}$$
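Proposition 5 admits the same kind of exact spot-check; here `rf` denotes SymPy's rising factorial \((a)_k\), and the parameter values below are arbitrary choices (a sketch, not part of the proof).

```python
# Exact check of Proposition 5 at rational sample points.
from sympy import Rational, Matrix, factorial, prod, rf

def binom(u, k):
    # binomial coefficient C(u, k) with a rational upper argument
    r = 1
    for t in range(k):
        r *= (u - t)
    return r / factorial(k)

def prop5_holds(xs, rho, lam, a, c):
    n = len(xs) - 1
    M = Matrix(n + 1, n + 1, lambda i, j:
               sum(lam**(k - j) * binom(xs[i], k)
                   * binom(rho - lam * xs[i], j - k) * rf(a, k) / rf(c, k)
                   for k in range(j + 1)))
    vdm = prod(xs[i] - xs[j]
               for i in range(n + 1) for j in range(i + 1, n + 1))
    rhs = vdm * prod(rf(c - a, m) / (factorial(m) * rf(c, m))
                     for m in range(n + 1))
    return M.det() == rhs

assert prop5_holds([1, Rational(1, 2), -2],
                   rho=3, lam=2, a=Rational(1, 3), c=Rational(5, 2))
```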

Fourthly, for

$$\begin{aligned}\sigma (k,m)=\lambda ^{k-m}\frac{(a+m)_k(c)_k}{(1+a-b)_k(b+c)_k},\end{aligned}$$

the corresponding binomial sum can be evaluated by the Pfaff–Saalschütz summation theorem (cf. Bailey [16, §2.2])

$$\begin{aligned}\varPhi (m)={_3F_2}\left[ \!\begin{array}{c}-m,~a+m,~c\\ 1+a-b,b+c\end{array}{\Big |1}\right] =\frac{(1+a-b-c)_m(b)_m}{(1+a-b)_m(b+c)_m}.\end{aligned}$$

We get therefore the following determinant identity.

Proposition 6

(Determinant evaluation)

$$\begin{aligned}&\det _{\begin{array}{c} 0\le i,j\le n \end{array}} \bigg [\sum _{k=0}^j\lambda ^{k-j} \left( {\begin{array}{c}x_i\\ k\end{array}}\right) \left( {\begin{array}{c}\rho -\lambda x_i\\ j-k\end{array}}\right) \frac{(a+j)_k(c)_k}{(1+a-b)_k(b+c)_k}\bigg ]\\&\quad =\prod _{0\le i<j\le n}(x_i-x_j) \times \prod _{m=0}^n \frac{(1+a-b-c)_m(b)_m}{m!(1+a-b)_m(b+c)_m}. \end{aligned}$$

Finally, for

$$\begin{aligned}\sigma (k,m)=\lambda ^{k-m} \frac{(a+2k)(a)_k(b)_k(d)_k}{a(1+a-b)_k(1+a-d)_k(1+a+m)_k},\end{aligned}$$

the corresponding sum can be evaluated by Dougall’s formula (cf. Bailey [16, §4.3])

$$\begin{aligned} \varPhi (m)=&{_5F_4}\left[ \!\begin{array}{c}a,\quad 1+\frac{a}{2},\quad b,\quad d,\quad -m\\ \frac{a}{2},1+a-b,1+a-d,1+a+m\end{array}{\Big |1}\right] \\ =&\frac{(1+a)_m(1+a-b-d)_m}{(1+a-b)_m(1+a-d)_m}. \end{aligned}$$

This leads us to the following determinant identity.

Proposition 7

(Determinant evaluation)

$$\begin{aligned}&\det _{\begin{array}{c} 0\le i,j\le n \end{array}} \bigg [\sum _{k=0}^j\lambda ^{k-j} \left( {\begin{array}{c}x_i\\ k\end{array}}\right) \left( {\begin{array}{c}\rho -\lambda x_i\\ j-k\end{array}}\right) \frac{(a+2k)(a)_k(b)_k(d)_k}{a(1+a-b)_k(1+a-d)_k(1+a+j)_k}\bigg ]\\&\quad =\prod _{0\le i<j\le n}(x_i-x_j) \times \prod _{m=0}^n \frac{(1+a)_m(1+a-b-d)_m}{m!(1+a-b)_m(1+a-d)_m}. \end{aligned}$$

4 The Third Determinant

In this section, we shall investigate the matrix C, which is inspired by the following matrix examined by Ekhad and Zeilberger [15]

$$\begin{aligned}K_n=\big [K_n(i,j)\big ]_{0\le i,j\le n}:\quad K_n(i,j)=\bigg \{\sum _{k=0}^j(-1)^k\left( {\begin{array}{c}i\\ k\end{array}}\right) \left( {\begin{array}{c}n-i\\ j-k\end{array}}\right) \bigg \}^2.\end{aligned}$$

By modifying the definition of the matrix B examined in the last section, we introduce, for an arbitrary sequence \(\sigma _k\), the matrix entries by the binomial convolution

$$\begin{aligned}c_{i,j}=\sum _{k=0}^j(-1)^k\left( {\begin{array}{c}x_i\\ k\end{array}}\right) \left( {\begin{array}{c}\rho -x_i\\ j-k\end{array}}\right) \sigma _k\sigma _{j-k}\end{aligned}$$

and evaluate the determinant for the matrix with squared entries

$$\begin{aligned}C=\Big [c^2_{i,j}\Big ]_{0\le i,j\le n}.\end{aligned}$$

Theorem 8

(Determinant evaluation) The following formula holds

$$\begin{aligned}\det C=\prod _{0\le i<j\le n}(x_i-x_j)(\rho -x_i-x_j) \times \prod _{m=0}^n \frac{\varPsi ^2(m)}{(m!)^2},\end{aligned}$$

where \(\varPsi (m)\) is given by the binomial convolution

$$\begin{aligned}\varPsi (m)=\sum _{k=0}^m\left( {\begin{array}{c}m\\ k\end{array}}\right) \sigma _k\sigma _{m-k}.\end{aligned}$$

Proof

Denote by \(\chi \) the logical function with \(\chi (\text {true})=1\) and \(\chi (\text {false})=0\). According to the definition of \(c_{i,j}\), we can rewrite it by splitting the sum into two parts and then reversing the summation order for the second part as follows:

$$\begin{aligned} c_{i,j}=&\sum _{0\le k\le j/2}(-1)^k \left( {\begin{array}{c}x_i\\ k\end{array}}\right) \left( {\begin{array}{c}\rho -x_i\\ j-k\end{array}}\right) \sigma _k\sigma _{j-k}\\&+\sum _{0\le k< j/2}(-1)^{j-k} \left( {\begin{array}{c}x_i\\ j-k\end{array}}\right) \left( {\begin{array}{c}\rho -x_i\\ k\end{array}}\right) \sigma _k\sigma _{j-k}\\ =&\sum _{0\le k< j/2}(-1)^k \frac{\sigma _k\sigma _{j-k}}{k!(j-k)!} \left\langle {x_i}\right\rangle _k\left\langle {\rho -x_i}\right\rangle _k\\&\times \Big \{\left\langle {\rho -x_i-k}\right\rangle _{j-2k} +(-1)^j\left\langle {x_i-k}\right\rangle _{j-2k}\Big \}\\&+(-1)^{j/2}\frac{\sigma ^2_{j/2}}{(j/2)!^2} \left\langle {x_i}\right\rangle _{j/2}\left\langle {\rho -x_i}\right\rangle _{j/2} \chi (j\text { is even}), \end{aligned}$$

where \(\left\langle {x}\right\rangle _k\) is the falling factorial defined by

$$\begin{aligned}\left\langle {x}\right\rangle _0=1\quad \text {and}\quad \left\langle {x}\right\rangle _k=x(x-1)\cdots (x-k+1) \quad \text {for}\quad k\in \mathbb {N}.\end{aligned}$$

For the two variables \(\{x_i,y_i\}\) related by \(y_i=x_i(\rho -x_i)\), it is trivial to check the equalities

$$\begin{aligned}x_i=\frac{\rho -\sqrt{\rho ^2-4y_i}}{2} \quad \text {and}\quad \rho -x_i=\frac{\rho +\sqrt{\rho ^2-4y_i}}{2}.\end{aligned}$$

Then, we have the polynomial expressions

$$\begin{aligned} \left\langle {x_i}\right\rangle _k\left\langle {\rho -x_i}\right\rangle _k&=\prod _{\ell =0}^{k-1} \Big (\frac{\rho +\sqrt{\rho ^2-4y_i}}{2}-\ell \Big ) \Big (\frac{\rho -\sqrt{\rho ^2-4y_i}}{2}-\ell \Big )\\&=\prod _{\ell =0}^{k-1}\big (y_i-\rho \ell +\ell ^2\big ) \end{aligned}$$

and

$$\begin{aligned}&\left\langle {\rho -x_i-k}\right\rangle _{j-2k} +(-1)^j\left\langle {x_i-k}\right\rangle _{j-2k}\\ =&\left\langle {\frac{\rho +\sqrt{\rho ^2-4y_i}}{2}-k}\right\rangle _{j-2k} +(-1)^j\left\langle {\frac{\rho -\sqrt{\rho ^2-4y_i}}{2}-k}\right\rangle _{j-2k}. \end{aligned}$$

According to the last two expressions, the following statements are valid:

  • When j is even, the last expression results in a polynomial of degree “\(\frac{j}{2}-k\)” in \(y_i\) with the leading coefficient equal to \(2(-1)^{\frac{j}{2}-k}\). Instead, for odd j, the same expression results in \(\frac{\sqrt{\rho ^2-4y_i}}{2}\) times a polynomial of degree “\(\frac{j-1}{2}-k\)” in \(y_i\) with the leading coefficient equal to \(2(-1)^{\frac{j-1}{2}-k}\).

  • Consequently, when j is even, \(c_{i,j}\) is a polynomial of degree j/2 in \(y_i\) with the leading coefficient equal to \((-1)^{\frac{j}{2}}\frac{\varPsi (j)}{j!}\), while for odd j, \(c_{i,j}\) can be expressed as \(\frac{\sqrt{\rho ^2-4y_i}}{2}\) times a polynomial of degree “\(\frac{j-1}{2}\)” in \(y_i\) with the leading coefficient given by \((-1)^{\frac{j-1}{2}}\frac{\varPsi (j)}{j!}\).

Therefore, \(c^2_{i,j}\) results always in a polynomial of degree j in \(y_i\) with its leading coefficient being given by \((-1)^{j}\frac{\varPsi ^2(j)}{(j!)^2}\), no matter whether j is even or odd.

Now write explicitly the matrix entry \(c^2_{i,j}\) as a polynomial of \(y_i\):

$$\begin{aligned} c^2_{i,j}=\sum _{k=0}^{j}y_i^{k}\mathcal {V}(k,j), \quad \text {i.e.,}\quad C=\big [y_i^k\big ]_{0\le i,k\le n} \times \big [\mathcal {V}(k,j)\big ]_{0\le k,j\le n}, \qquad (\bigstar ) \end{aligned}$$

where the matrix \(\big [\mathcal {V}(k,j)\big ]_{0\le k,j\le n}\) is upper triangular with the diagonal entries being

$$\begin{aligned} \mathcal {V}(j,j)=\frac{(-1)^j}{j!^2}\varPsi ^2(j). \end{aligned}$$

Then, the formula in Theorem 8 follows directly from the matrix decomposition displayed in equation (\(\bigstar \)). \(\square \)
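Theorem 8 itself can be verified exactly for an arbitrary choice of the \(\sigma \)-sequence at rational sample points (an illustrative sketch; the \(\sigma _k\) and sample values below are arbitrary choices).

```python
# Exact check of Theorem 8 for an arbitrary rational sequence sigma_k.
from sympy import Rational, Matrix, binomial, factorial, prod

def binom(u, k):
    # binomial coefficient C(u, k) with a rational upper argument
    r = 1
    for t in range(k):
        r *= (u - t)
    return r / factorial(k)

def theorem8_holds(xs, rho, sigma):
    n = len(xs) - 1
    def c(i, j):
        return sum((-1)**k * binom(xs[i], k) * binom(rho - xs[i], j - k)
                   * sigma[k] * sigma[j - k] for k in range(j + 1))
    C = Matrix(n + 1, n + 1, lambda i, j: c(i, j)**2)
    def psi(m):
        return sum(binomial(m, k) * sigma[k] * sigma[m - k]
                   for k in range(m + 1))
    rhs = prod((xs[i] - xs[j]) * (rho - xs[i] - xs[j])
               for i in range(n + 1) for j in range(i + 1, n + 1))
    rhs *= prod(psi(m)**2 / factorial(m)**2 for m in range(n + 1))
    return C.det() == rhs

assert theorem8_holds([Rational(1, 2), 2, -1], rho=Rational(7, 3),
                      sigma=[1, Rational(3, 2), -2])
```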

Similarly, by choosing the sequence \(\sigma _{k}\) concretely, we derive the following five determinant identities.

Firstly, for \(\sigma _{k}=1\), we have, by means of the binomial theorem,

$$\begin{aligned} \varPsi \left( m\right) =\sum _{k=0}^{m}\left( {\begin{array}{c}m\\ k\end{array}}\right) \sigma _{k}\sigma _{m-k}=\sum _{k=0}^{m}\left( {\begin{array}{c}m\\ k\end{array}}\right) =2^{m}. \end{aligned}$$

We get the following determinant identity.

Proposition 9

(Determinant evaluation)

$$\begin{aligned}&\underset{0\le i,j\le n}{\det }\left[ \left\{ \sum _{k=0}^{j}\left( -1\right) ^{k}\left( {\begin{array}{c}x_{i}\\ k\end{array}}\right) \left( {\begin{array}{c}\rho -x_{i}\\ j-k\end{array}}\right) \right\} ^{2}\right] \\&=\prod \limits _{0\le i<j\le n}\left( x_{i}-x_{j}\right) \left( \rho -x_{i}-x_{j}\right) \times \prod \limits _{m=0}^{n}\frac{2^{2m}}{\left( m!\right) ^{2}}. \end{aligned}$$

Secondly, when we take \(\sigma _{k}=\left( \lambda \right) _{k}\), we have, by the Chu–Vandermonde convolution,

$$\begin{aligned} \varPsi \left( m\right) =\sum _{k=0}^{m}\left( {\begin{array}{c}m\\ k\end{array}}\right) \left( \lambda \right) _{k}\left( \lambda \right) _{m-k}=\left( 2\lambda \right) _{m}. \end{aligned}$$

This leads us to the following determinant identity.

Proposition 10

(Determinant evaluation)

$$\begin{aligned}&\underset{0\le i,j\le n}{\det }\left[ \left\{ \sum _{k=0}^{j}\left( -1\right) ^{k}\left( {\begin{array}{c}x_{i}\\ k\end{array}}\right) \left( {\begin{array}{c}\rho -x_{i}\\ j-k\end{array}}\right) \left( \lambda \right) _{k}\left( \lambda \right) _{j-k}\right\} ^{2}\right] \\&=\prod \limits _{0\le i<j\le n}\left( x_{i}-x_{j}\right) \left( \rho -x_{i}-x_{j}\right) \times \prod \limits _{m=0}^{n}\frac{\left( 2\lambda \right) _{m}^{2}}{\left( m!\right) ^{2}}. \end{aligned}$$

Thirdly, if we take \(\sigma _{k}=H_{k}\left( y\right) ,\) where \(H_{n}\left( y\right) \) is the Hermite polynomial, then

$$\begin{aligned} \varPsi \left( m\right) =\sum _{k=0}^{m}\left( {\begin{array}{c}m\\ k\end{array}}\right) H_{k}\left( y\right) H_{m-k}\left( y\right) =2^{m/2}H_{m}\left( y\sqrt{2}\right) . \end{aligned}$$
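This Hermite convolution formula can be confirmed symbolically for small m, for instance with SymPy's `hermite` (a sketch, not part of the derivation).

```python
# Symbolic check of the Hermite convolution for small m:
# sum_k C(m,k) H_k(y) H_{m-k}(y) = 2^(m/2) H_m(y*sqrt(2)).
from sympy import symbols, binomial, hermite, sqrt, expand, Rational

y = symbols('y')

def hermite_conv(m):
    return expand(sum(binomial(m, k) * hermite(k, y) * hermite(m - k, y)
                      for k in range(m + 1)))

for m in range(6):
    rhs = expand(2**Rational(m, 2) * hermite(m, y * sqrt(2)))
    assert hermite_conv(m) == rhs
```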

Thus, we get the following determinant identity.

Proposition 11

(Determinant evaluation)

$$\begin{aligned}&\underset{0\le i,j\le n}{\det }\left[ \left\{ \sum _{k=0}^{j}\left( -1\right) ^{k}\left( {\begin{array}{c}x_{i}\\ k\end{array}}\right) \left( {\begin{array}{c}\rho -x_{i}\\ j-k\end{array}}\right) H_{k}\left( y\right) H_{j-k}\left( y\right) \right\} ^{2}\right] \\&=\prod \limits _{0\le i<j\le n}\left( x_{i}-x_{j}\right) \left( \rho -x_{i}-x_{j}\right) \times \prod \limits _{m=0}^{n}\frac{2^{m}H_{m}^{2}\left( y \sqrt{2}\right) }{\left( m!\right) ^{2}}. \end{aligned}$$

Fourthly, when \(\sigma _{k}=k!L_{k}^{\left\langle {\alpha }\right\rangle }(y) ,\) where \(L_{n}^{\left\langle {\alpha }\right\rangle }(y) \) denotes the Laguerre polynomial, we have

$$\begin{aligned} \varPsi \left( m\right) =\sum _{k=0}^{m}\left( {\begin{array}{c}m\\ k\end{array}}\right) k!L_{k}^{\left\langle {\alpha }\right\rangle }\left( y\right) \left( m-k\right) !L_{m-k}^{\left\langle {\alpha }\right\rangle }\left( y\right) =m!L_{m}^{\left\langle {1+2\alpha }\right\rangle }\left( 2y\right) . \end{aligned}$$
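The Laguerre convolution can be confirmed in the same way; `assoc_laguerre(n, alpha, x)` is SymPy's generalized Laguerre polynomial \(L_{n}^{\left\langle {\alpha }\right\rangle }(x)\), and the rational value of \(\alpha \) used below is an arbitrary sample choice (a sketch).

```python
# Symbolic check of the Laguerre convolution for small m:
# sum_k C(m,k) k! L_k^a(y) (m-k)! L_{m-k}^a(y) = m! L_m^{1+2a}(2y).
from sympy import symbols, binomial, factorial, assoc_laguerre, Rational, expand

y = symbols('y')

def laguerre_conv_ok(m, alpha):
    lhs = expand(sum(binomial(m, k) * factorial(k)
                     * assoc_laguerre(k, alpha, y)
                     * factorial(m - k) * assoc_laguerre(m - k, alpha, y)
                     for k in range(m + 1)))
    return lhs == expand(factorial(m) * assoc_laguerre(m, 1 + 2 * alpha, 2 * y))

for m in range(5):
    assert laguerre_conv_ok(m, Rational(1, 3))
```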

This gives rise to the following determinant identity.

Proposition 12

(Determinant evaluation)

$$\begin{aligned}&\underset{0\le i,j\le n}{\det }\left[ \left\{ \sum _{k=0}^{j}\left( -1\right) ^{k} \left( {\begin{array}{c}x_{i}\\ k\end{array}}\right) \left( {\begin{array}{c}\rho -x_{i}\\ j-k\end{array}}\right) k!L_{k}^{\left\langle {\alpha }\right\rangle }\left( y\right) \left( j-k\right) !L_{j-k}^{\left\langle {\alpha }\right\rangle }\left( y\right) \right\} ^{2}\right] \\&=\prod \limits _{0\le i<j\le n}\left( x_{i}-x_{j}\right) \left( \rho -x_{i}-x_{j}\right) \times \prod _{m=0}^{n}\Big \{L_{m}^{\left\langle {1+2\alpha }\right\rangle }\left( 2y\right) \Big \}^2. \end{aligned}$$

Finally, if we take \(\sigma _{k}=k!P_{k}\left( y\right) ,\) where \(P_{n}\left( y\right) \) stands for the Legendre polynomial, then

$$\begin{aligned} \varPsi \left( m\right) =\sum _{k=0}^{m}\left( {\begin{array}{c}m\\ k\end{array}}\right) k!P_{k}\left( y\right) \left( m-k\right) !P_{m-k}\left( y\right) =m!U_{m}\left( y\right) , \end{aligned}$$
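Under the standard normalizations \(\sum _{m\ge 0}P_m(y)t^m=(1-2yt+t^2)^{-1/2}\) and \(\sum _{m\ge 0}U_m(y)t^m=(1-2yt+t^2)^{-1}\), squaring the Legendre generating function yields the Chebyshev one, so the convolution can be confirmed symbolically for small m (a sketch).

```python
# Symbolic check of the Legendre convolution for small m:
# sum_k P_k(y) P_{m-k}(y) = U_m(y).
from sympy import symbols, legendre, chebyshevu, expand

y = symbols('y')

def legendre_conv_ok(m):
    lhs = expand(sum(legendre(k, y) * legendre(m - k, y)
                     for k in range(m + 1)))
    return lhs == expand(chebyshevu(m, y))

for m in range(7):
    assert legendre_conv_ok(m)
```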

where \(U_{m}(y)\) are the Chebyshev polynomials of the second kind. This leads us to the following determinant identity.

Proposition 13

(Determinant evaluation)

$$\begin{aligned}&\underset{0\le i,j\le n}{\det }\left[ \left\{ \sum _{k=0}^{j}\left( -1\right) ^{k}\left( {\begin{array}{c}x_{i}\\ k\end{array}}\right) \left( {\begin{array}{c}\rho -x_{i}\\ j-k\end{array}}\right) k!P_{k}\left( y\right) \left( j-k\right) !P_{j-k}\left( y\right) \right\} ^{2}\right] \\&=\prod \limits _{0\le i<j\le n}\left( x_{i}-x_{j}\right) \left( \rho -x_{i}-x_{j}\right) \times \prod \limits _{m=0}^{n}U_{m}^{2}\left( y\right) . \end{aligned}$$