1 Introduction

In recent decades, the interplay between random matrix theory and integrable systems has attracted much attention due to developments in both fields. A crucial observation in this connection is that the partition functions of different random matrix models can act as the \(\tau \)-functions of corresponding integrable hierarchies upon appropriate deformations. This observation was made by using semiclassical orthogonal polynomials for different integrable hierarchies, such as the Painlevé hierarchy [12, 49] and the Toda hierarchy [3, 10].

It is well known that a sequence of orthogonal polynomials \(\{p_n(x)\}_{n\in {\mathbb {N}}}\) can be characterized by an analytic, nonnegative weight \(\omega (x)\) such that

$$\begin{aligned} \int _{{\mathbb {R}}} p_n(x)p_m(x)\omega (x)dx=\delta _{n,m}. \end{aligned}$$
(1.1)

According to Favard’s theorem [21, 26], the orthogonal relation (1.1) can be equivalently expressed by a three-term recurrence relation

$$\begin{aligned} xp_n(x)=a_np_{n+1}(x)+b_np_n(x)+a_{n-1}p_{n-1}(x),\quad p_{-1}(x)=0,\quad p_0(x)=1, \end{aligned}$$
(1.2)

for a sequence of coefficients \(\{a_n,b_n\}_{n\in {\mathbb {N}}}\), providing a Jacobi matrix form

$$\begin{aligned} L=\left( \begin{array}{ccccc} b_0&{}a_0&{}&{}&{}\\ a_0&{}b_1&{}a_1&{}&{}\\ &{}a_1&{}b_2&{}a_2&{}\\ &{}&{}\ddots &{}\ddots &{}\ddots \end{array} \right) ,\quad x\Psi (x)=L\Psi (x),\quad \Psi (x)=(p_0(x),p_1(x),\cdots )^\top . \end{aligned}$$
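For illustration, the following short numerical sketch (ours, with the assumed normalized Hermite weight \(\omega (x)=e^{-x^2}/\sqrt{\pi }\), for which \(a_n=\sqrt{(n+1)/2}\) and \(b_n=0\)) generates \(p_0,\dots ,p_N\) from the recurrence (1.2) and checks both the orthonormality (1.1) and the truncated Jacobi relation \(x\Psi (x)=L\Psi (x)\).

```python
# Illustrative sketch (not from the paper): normalized Hermite weight
# w(x) = exp(-x^2)/sqrt(pi), for which a_n = sqrt((n+1)/2) and b_n = 0.
import numpy as np

N = 6
x = np.linspace(-10, 10, 20001)
w = np.exp(-x**2) / np.sqrt(np.pi)
a = np.sqrt((np.arange(N + 1) + 1) / 2.0)          # a_0, ..., a_N
b = np.zeros(N + 1)                                # b_n = 0 for this even weight

p = [np.ones_like(x)]                              # p_0 = 1 (here m_0 = 1)
p.append((x - b[0]) * p[0] / a[0])                 # (1.2) with p_{-1} = 0
for n in range(1, N):
    p.append(((x - b[n]) * p[n] - a[n - 1] * p[n - 1]) / a[n])
P = np.vstack(p)

# orthonormality (1.1) by quadrature
G = np.trapz(P[:, None, :] * P[None, :, :] * w, x, axis=-1)
print(np.max(np.abs(G - np.eye(N + 1))))           # close to zero

# truncated Jacobi matrix: x*Psi = L*Psi except in the last (truncated) row
L = np.diag(b) + np.diag(a[:-1], 1) + np.diag(a[:-1], -1)
print(np.max(np.abs((L @ P)[:N] - x * P[:N])))     # close to zero
```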

Semiclassical orthogonal polynomials were first considered by Shohat [52] and later by Freud [29]. In this setting, time parameters \(\textbf{t}=(t_1,t_2,\cdots )\) are introduced into the weight such that

$$\begin{aligned} \partial _{t_n}\omega (x;\textbf{t})=x^n\omega (x;\textbf{t}). \end{aligned}$$

Therefore, orthogonal polynomials with a semiclassical weight are time-dependent and satisfy the formula

$$\begin{aligned} \partial _{t_1}p_n(x;\textbf{t})=-\frac{1}{2}b_np_n(x;\textbf{t})-a_{n-1}p_{n-1}(x;\textbf{t}). \end{aligned}$$
(1.3)

There are several ways in the literature to derive integrable lattices from semiclassical orthogonal polynomials. One is a direct method using the compatibility condition of (1.2) and (1.3), from which one gets

$$\begin{aligned} \begin{aligned} \partial _{t_1}a_n=\frac{1}{2}a_n(b_{n+1}-b_{n}),\quad \partial _{t_1}b_n=a_{n}^2-a_{n-1}^2. \end{aligned} \end{aligned}$$

This is the nonlinear form of the Toda lattice. Details and related discussions can be found in the monographs [23, 28]. Another way is to express the orthogonal polynomials by \(\tau \)-functions, from which integrable hierarchies can be obtained via the recurrence relation. It is known that by solving the orthogonal relation (1.1), a determinantal expression for \(p_n(x;\textbf{t})\) is given by

$$\begin{aligned} p_n(x;\textbf{t})=\frac{1}{\sqrt{\tau _n(\textbf{t})\tau _{n+1}(\textbf{t})}}\det \left( \begin{array}{cccc} m_0&{}m_1&{}\cdots &{}m_n\\ \vdots &{}\vdots &{}&{}\vdots \\ m_{n-1}&{}m_n&{}\cdots &{}m_{2n-1}\\ 1&{}x&{}\cdots &{}x^n \end{array} \right) , \end{aligned}$$

where

$$\begin{aligned} \tau _n(\textbf{t})=\det (m_{i+j})_{i,j=0}^{n-1},\quad m_i=\int _{{\mathbb {R}}}x^i\omega (x;\textbf{t})dx. \end{aligned}$$
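These objects are easy to experiment with numerically. The sketch below (an illustration under the assumed semiclassical weight \(\omega (x;t_1)=e^{-x^2+t_1x}\)) builds \(a_n\) and \(b_n\) from the Hankel determinants \(\tau _n\) of the moments, using the standard expressions \(a_n^2=\tau _n\tau _{n+2}/\tau _{n+1}^2\) and a shifted-Hankel formula for \(b_n\), and checks the Toda equations above by central differences in \(t_1\).

```python
# Illustrative sketch: semiclassical weight w(x;t1) = exp(-x^2 + t1*x).
# a_n, b_n are built from Hankel determinants of the moments, and the Toda
# equations are checked by central finite differences in t1.
import numpy as np

x = np.linspace(-12, 12, 40001)

def moments(t1, kmax):
    w = np.exp(-x**2 + t1 * x)
    return np.array([np.trapz(x**k * w, x) for k in range(kmax + 1)])

def recurrence(m, N):
    H  = lambda n: np.array([[m[i + j] for j in range(n)] for i in range(n)])
    Hs = lambda n: np.array([[m[i + j + (j == n - 1)] for j in range(n)]
                             for i in range(n)])       # last column shifted by one
    D  = [1.0] + [np.linalg.det(H(n))  for n in range(1, N + 2)]
    Ds = [0.0] + [np.linalg.det(Hs(n)) for n in range(1, N + 1)]
    b = np.array([Ds[n + 1] / D[n + 1] - Ds[n] / D[n] for n in range(N)])
    a = np.array([np.sqrt(D[n] * D[n + 2]) / D[n + 1] for n in range(N)])
    return a, b

N, t1, eps = 5, 0.3, 1e-4
a,  b  = recurrence(moments(t1,       2 * N), N)
am, bm = recurrence(moments(t1 - eps, 2 * N), N)
ap, bp = recurrence(moments(t1 + eps, 2 * N), N)

db, da = (bp - bm) / (2 * eps), (ap - am) / (2 * eps)
print(np.max(np.abs(db - (a**2 - np.r_[0.0, a[:-1]]**2))))          # small
print(np.max(np.abs(da[:-1] - 0.5 * a[:-1] * (b[1:] - b[:-1]))))    # small
```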

Shifting \(\textbf{t}\) backwards by \([x^{-1}]\) in the \(\tau \)-function yields a polynomial in x. We have

$$\begin{aligned} p_n(x;\textbf{t})=x^n\frac{\tau _n(\textbf{t}-[x^{-1}])}{\sqrt{\tau _n(\textbf{t})\tau _{n+1}(\textbf{t})}},\quad [x^{-1}]=\left( \frac{x^{-1}}{1},\frac{x^{-2}}{2},\cdots \right) . \end{aligned}$$
(1.4)
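Formally, the shift \(\textbf{t}\mapsto \textbf{t}-[x^{-1}]\) multiplies the weight by \(1-y/x\), so it acts on the moments as \(m_i\mapsto m_i-m_{i+1}/x\) and \(\tau _n(\textbf{t}-[x^{-1}])=\det (m_{i+j}-m_{i+j+1}/x)_{i,j=0}^{n-1}\). The sketch below (with an assumed Gaussian-type weight at a fixed \(\textbf{t}\)) compares the right-hand side of (1.4) with \(p_n\) computed directly from the bordered determinant above.

```python
# Illustrative check of (1.4): the shift t -> t - [x^{-1}] acts on moments
# (formally) as m_i -> m_i - m_{i+1}/x.  Assumed weight: exp(-y^2 + 0.2*y).
import numpy as np

grid = np.linspace(-12, 12, 40001)
wt = np.exp(-grid**2 + 0.2 * grid)
m = np.array([np.trapz(grid**k * wt, grid) for k in range(12)])

n = 3
tau = lambda k: np.linalg.det([[m[i + j] for j in range(k)] for i in range(k)]) if k else 1.0
norm = np.sqrt(tau(n) * tau(n + 1))

def p_direct(xv):                       # bordered-determinant expression for p_n
    M = [[m[i + j] for j in range(n + 1)] for i in range(n)] + \
        [[xv**j for j in range(n + 1)]]
    return np.linalg.det(M) / norm

def p_miwa(xv):                         # x^n * tau_n(t - [x^{-1}]) / norm
    M = [[m[i + j] - m[i + j + 1] / xv for j in range(n)] for i in range(n)]
    return xv**n * np.linalg.det(M) / norm

for xv in (0.7, 1.9, -2.3):
    print(p_direct(xv), p_miwa(xv))     # the two values agree at each point
```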

Moreover, if we substitute this formula into the recurrence relation (1.2), then a Toda lattice hierarchy involving neighboring points can be obtained. If one considers the Cauchy transform of orthogonal polynomials

$$\begin{aligned} \int _{{\mathbb {R}}}\frac{p_n(x;\textbf{t})}{z-x}\omega (x;\textbf{t})dx=z^{-n-1}\frac{\tau _{n+1}(\textbf{t}+[z^{-1}])}{\sqrt{\tau _n(\textbf{t})\tau _{n+1}(\textbf{t})}}, \end{aligned}$$

then from the orthogonality, one has the formula

$$\begin{aligned} \begin{aligned} 0=\int _{{\mathbb {R}}}p_n(x;\textbf{t})p_{n-1}(x;\textbf{t}')\omega (x;t)dx=\frac{1}{2\pi i}\oint _{C_\infty } \tau _n(\textbf{t}-[z^{-1}])\tau _{n}(\textbf{t}'+[z^{-1}])e^{\xi (\textbf{t},z)-\xi (\textbf{t}',z)}dz, \end{aligned} \end{aligned}$$
(1.5)

where \(\xi (\textbf{t},z)=\sum _{i=1}^\infty t_iz^i\). This formula is valid for all \(\textbf{t},\,\textbf{t}'\) and gives a bilinear identity of the KP hierarchy [3, 36].

Relations between orthogonal polynomials and integrable systems are clearly depicted by considering different generalizations of the orthogonal relation (1.1), which, in fact, is given by a symmetric, positive definite, and real bilinear form

$$\begin{aligned} \langle \cdot ,\cdot \rangle :{\mathbb {R}}[x]\times {\mathbb {R}}[x]\rightarrow {\mathbb {R}} \end{aligned}$$

such that \(\langle x^i,x^j\rangle =\langle x^j,x^i\rangle \). Therefore, generalizations of the orthogonality are equivalent to extensions of the bilinear form. A non-symmetric generalization of the bilinear form is given by

$$\begin{aligned} \langle x^i,x^j\rangle =\int _{{\mathbb {R}}}x^{i+\theta j}\omega (x)dx,\quad \theta \in {\mathbb {R}}_+. \end{aligned}$$

This bilinear form is related to the random matrix models with additional interaction proposed by Muttalib and Borodin, and the corresponding polynomials are referred to as bi-orthogonal polynomials [16, 46]. There is another kind of bi-orthogonality, given by a bilinear form acting on \({\mathbb {R}}[x]\times {\mathbb {R}}[y]\), such that

$$\begin{aligned} \langle x^i,y^j\rangle =\int _{{\mathbb {R}}^2} x^iy^j{\mathbb {K}}(x,y)\omega _1(x)\omega _2(y)dxdy, \end{aligned}$$
(1.6)

where \({\mathbb {K}}(x,y)\) is a kernel function and \(\omega _1\), \(\omega _2\) are weights with respect to x and y, respectively. Such bi-orthogonal polynomials were introduced in the study of matrices coupled in a chain [25] and of Cauchy two-matrix models [13]. In particular, skew symmetric kernels arising from the orthogonal and symplectic invariant ensembles of random matrix theory are of special interest. The above-mentioned orthogonal polynomials are all related to integrable systems if appropriate time deformations are assumed. Examples include the Gelfand–Dickey hierarchy (Muttalib–Borodin case) [55], the 2d-Toda hierarchy (coupled chain case) [4], the CKP hierarchy (Cauchy two-matrix model case) [41], the Pfaff lattice/DKP hierarchy (orthogonal/symplectic ensemble case) [1, 38] and the BKP hierarchy (Bures ensemble case) [33].

Multiple orthogonal polynomials (MOPs), as a generalization of orthogonal polynomials, are sequences of polynomials orthogonal with respect to several different weights; they originated in the study of what is termed Hermite–Padé approximation. This is the simultaneous rational approximation of a family of functions \(\{ f_j \}\) which admit a decaying Laurent expansion at infinity. Such functions can be written as

$$\begin{aligned} f_j(z) = \int _{I_j} {d \mu _j(x) \over z - x}, \end{aligned}$$
(1.7)

for several measures \(\{\mu _j\}\). It is these measures which directly relate to the orthogonality of MOPs; see, e.g., the brief survey [44]. A relatively recent application of MOPs is in the field of random matrices. The Gaussian unitary ensemble is the set of \(N \times N\) random complex Hermitian matrices \(\{H\}\), chosen with a probability density function (PDF) proportional to \(e^{- \textrm{Tr} \, H^2}\). In particular, the diagonal entries are all independent real normal random variables with mean zero and standard deviation \(1/\sqrt{2}\) (denoted N\([0,1/\sqrt{2}]\)), while the upper triangular entries of H are similarly independent and identically distributed, with complex normal distribution N\([0,1/2] + i \textrm{N}[0,1/2]\). Modifying this ensemble so that the entries have a nonzero mean, the corresponding PDF becomes proportional to \(e^{- \textrm{Tr} \, (H - A)^2}\), where A is a fixed complex Hermitian matrix. The new ensemble is referred to as the Gaussian unitary ensemble with a source [18]. Let the eigenvalues of A be denoted \(\{a_j\}\). A result of Bleher and Kuijlaars [15] gives that the average characteristic polynomial \(\langle \det ( x {\mathbb {I}} - (H - A) ) \rangle \) can be expressed in terms of a particular type II MOP—referred to as a multiple Hermite polynomial—where the family of measures is proportional to \(\{ e^{-x^2 + 2a_j x}\}_{j=1}^N\). This same random matrix model, and thus the relevance of the multiple Hermite polynomials, relates to non-intersecting Brownian bridges [9]. Moreover, in [24] the chiral generalization of the Gaussian unitary ensemble with a source is related to particular type I and type II Laguerre MOPs. With type I and type II MOPs closely related to non-intersecting Brownian motions, a generalization called MOPs of mixed type was proposed in [22] to allow further assumptions on the paths, and their applications to integrable systems were considered in [6, 7, 11].

In this paper, we focus on a generalization of skew-orthogonal polynomials called multiple skew-orthogonal polynomials (MSOPs) and consider the associated integrable hierarchies. Skew-orthogonal polynomials arise when the integral kernel in (1.6) is assumed to be skew symmetric. Therefore, to give a proper definition of MSOPs, we first consider a bi-orthogonal generalization of MOPs in Sect. 2.2. Symmetric and skew symmetric reductions are considered in Sect. 2.3 to give determinant expressions for MSOPs. Section 3 is devoted to the 2-component MSOPs, which are skew orthogonal with respect to weights \(\omega _1\) and \(\omega _2\). Proposition 3.1 states that 2-component MSOPs admit Pfaffian expressions as well, from which 2-component Pfaffian \(\tau \)-functions arise. Then, we introduce two different sets of time variables \(\textbf{t}=(t_1,t_2,\cdots )\) and \(\textbf{s}=(s_1,s_2,\cdots )\) into the weights \(\omega _1\) and \(\omega _2\), respectively, and prove some deformation identities by making use of Pfaffian notations. Such identities are helpful in deriving integrable systems. In analogy with standard orthogonal polynomials and the Toda lattice hierarchy, we apply three different methods, in order of increasing difficulty, to derive integrable lattices. The first one is shown in Sect. 3 by simply comparing coefficients in the deformation identity, and several simple equations are demonstrated. Furthermore, a systematic study of the derivation of integrable lattice hierarchies is carried out in Sect. 4 from two different perspectives. One is to show that the above-mentioned Pfaffian expressions can be alternatively expressed by \(\tau \)-functions with time evolutions. By substituting the \(\tau \)-function expressions into identities satisfied by MSOPs, we get an integrable hierarchy for neighboring \(\tau \)-functions. A shortcoming of this strategy is that only neighboring \(\tau \)-functions are involved in the resulting integrable hierarchy. We improve this method by considering a Cauchy transform approach. In Sect. 4.2, we utilize the Cauchy transform of MSOPs and show that Takasaki’s Pfaff–Toda hierarchy is equivalent to our 2-component Pfaff lattice hierarchy. Our Sect. 5 is devoted to a combinatorial interpretation of the above-discussed 2-component Pfaffian \(\tau \)-function, as a generating function of non-intersecting paths considered by Stembridge.

2 Multiple Skew-Orthogonal Polynomials

In this part, we introduce the concept of multiple skew-orthogonal polynomials, which are skew orthogonal with respect to several different weights. Multiple skew orthogonality originates from multiple orthogonality, and thus a brief review of the latter is given first to make the paper self-contained.

2.1 A Brief Review of MOPs

Multiple orthogonal polynomials (MOPs) are defined as polynomials of one variable that satisfy orthogonality conditions with respect to several weights [35, Chap. 23]. Given a multi-index \(\vec {v}\in {\mathbb {N}}^{p}\) with length \(|\vec {v}|=\sum _{i=1}^p v_i\), and p different weight functions \((\omega _1,\cdots ,\omega _p)\) supported on the real line, there are two types of MOPs. Type I MOPs are collected in a vector of p polynomials \((A_{\vec {v},1},\cdots ,A_{\vec {v},p})\), where each \( A_{\vec {v},i} \) has degree at most \( v_i-1 \), satisfying the orthogonality relations

$$\begin{aligned} \int _{{\mathbb {R}}} x^k\left( \sum _{i=1}^p A_{\vec {v},i}(x)\omega _i(x) \right) dx=\delta _{k,|\vec {v}|-1},\quad 0\le k\le |\vec {v}|-1. \end{aligned}$$
(2.1)

By assuming

$$\begin{aligned} A_{\vec {v},i}(x)=\xi _{i,v_i-1}x^{v_i-1}+\cdots +\xi _{i,0},\quad i=1,\cdots ,p, \end{aligned}$$

the above relations give rise to a linear system of \( |\vec {v}| \) equations for \( |\vec {v}| \) unknown coefficients \(\{\xi _{i,j},j=0,\cdots ,v_i-1,i=1,\cdots ,p\}\)

$$\begin{aligned} \left( \begin{array}{ccccccc} m_0^{(1)}&{}\cdots &{}m_{v_1-1}^{(1)}&{}\cdots &{}m_{0}^{(p)}&{}\cdots &{}m_{v_p-1}^{(p)}\\ \vdots &{}&{}\vdots &{}&{}\vdots &{}&{}\vdots \\ m_{v_1-1}^{(1)}&{}\cdots &{}m_{2v_1-2}^{(1)}&{}\cdots &{}m_{v_1-1}^{(p)}&{}\cdots &{}m_{v_1+v_p-2}^{(p)}\\ \vdots &{}&{}\vdots &{}&{}\vdots &{}&{}\vdots \\ m_{|\vec {v}|-v_p}^{(1)}&{}\cdots &{}m_{|\vec {v}|+v_1-v_p-1}^{(1)}&{}\cdots &{}m_{|\vec {v}|-v_p}^{(p)}&{}\cdots &{}m_{|\vec {v}|-1}^{(p)}\\ \vdots &{}&{}\vdots &{}&{}\vdots &{}&{}\vdots \\ m_{|\vec {v}|-1}^{(1)}&{}\cdots &{}m_{|\vec {v}|+v_1-2}^{(1)}&{}\cdots &{}m_{|\vec {v}|-1}^{(p)}&{}\cdots &{}m_{|\vec {v}|+v_p-2}^{(p)}\end{array} \right) \left( \begin{array}{c} \xi _{1,0}\\ \vdots \\ \xi _{1,v_1-1}\\ \vdots \\ \xi _{p,0}\\ \vdots \\ \xi _{p,v_p-1} \end{array} \right) =\left( \begin{array}{c} 0\\ \vdots \\ 0\\ \vdots \\ 0\\ \vdots \\ 1 \end{array} \right) , \end{aligned}$$
(2.2)

where moments are defined by \(m_j^{(i)}=\int _{\mathbb {R}} x^j \omega _i(x)dx\).
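For a concrete instance, the following sketch (with two assumed weights \(\omega _1(x)=e^{-x^2}\), \(\omega _2(x)=e^{-x^2+x}\) and the multi-index \(\vec {v}=(2,2)\)) assembles the system (2.2), solves it, and verifies the type I orthogonality (2.1).

```python
# Illustrative type I MOPs: assumed weights w1(x) = exp(-x^2), w2(x) = exp(-x^2 + x)
# and multi-index v = (2, 2); the coefficients are obtained from the system (2.2).
import numpy as np

x = np.linspace(-12, 12, 40001)
w = [np.exp(-x**2), np.exp(-x**2 + x)]
m = lambda j, i: np.trapz(x**j * w[i], x)          # moments m_j^{(i+1)}

v = [2, 2]
n = sum(v)
# row k, column (i, j): entry m_{k+j}^{(i)}, exactly as in (2.2)
M = np.array([[m(k + j, i) for i in range(2) for j in range(v[i])]
              for k in range(n)])
rhs = np.zeros(n); rhs[-1] = 1.0
xi = np.linalg.solve(M, rhs)
A1, A2 = xi[:v[0]], xi[v[0]:]                      # coefficients of A_{v,1}, A_{v,2}

# type I orthogonality (2.1): the list below is ~ [0, 0, 0, 1]
Q = np.polyval(A1[::-1], x) * w[0] + np.polyval(A2[::-1], x) * w[1]
print([round(np.trapz(x**k * Q, x), 8) for k in range(n)])
```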

The polynomials \(\{ A_{\vec {v},i}, i=1,\dots ,p \}\) are uniquely determined if and only if the linear system has a unique solution, which requires the determinant of the moment matrix to be nonzero. This condition places some restrictions on the weights \( \omega _1,\cdots ,\omega _p \). In general, there is no guarantee that, for a given multi-index, the corresponding MOPs exist. A multi-index \(\vec {v}\) is said to be normal for type I MOPs if \(\{ A_{\vec {v},i}, i=1,\dots ,p \}\) exists and is unique. If all multi-indices are normal, then the system of weights \((\omega _1,\cdots ,\omega _p)\) is said to be a perfect system. There are two well-known perfect systems: one is the Angelesco system, and the other is the Nikishin system; the perfectness of the former follows from properties of the zeros of orthogonal polynomials, and that of the latter is due to the analytic properties of the weights. For details, please refer to [27, 47].

By considering the dual construction, type II MOPs \(\{P_{\vec {v}}(x)\}\) are defined as scalar polynomials with degree \(|\vec {v}|\) by the orthogonal relation

$$\begin{aligned} \int _{\mathbb {R}} P_{\vec {v}}(x) x^j \omega _{i}(x)dx=0,\quad j=0,\cdots ,v_i-1,\quad i=1,\cdots , p. \end{aligned}$$
(2.3)

If we assume \(P_{\vec {v}}(x)\) to be monic as a normalization condition, then a linear system of \( |\vec {v}| \) equations is read from orthogonal relations. By assuming that \( P_{\vec {v}}(x) =x^{|\vec {v}|}+\eta _{|\vec {v}|,|\vec {v}|-1}x^{|\vec {v}|-1}+\cdots +\eta _{|\vec {v}|,0}\), we have

$$\begin{aligned} \begin{aligned} \left( \begin{array}{ccc} m_0^{(1)}&{}{}\cdots &{}{}m_{|\vec {v}|-1}^{(1)}\\ \vdots &{}{}&{}{}\vdots \\ m_{v_1-1}^{(1)}&{}{}\cdots &{}{}m_{|\vec {v}|+v_1-2}^{(1)}\\ \vdots &{}{}&{}{}\vdots \\ m_0^{(p)}&{}{}\cdots &{}{}m_{|\vec {v}|-1}^{(p)}\\ \vdots &{}{}&{}{}\vdots \\ m_{v_p-1}^{(p)}&{}{}\cdots &{}{}m_{|\vec {v}|+v_p-2}^{(p)}\end{array} \right) \left( \begin{array}{c} \eta _{|\vec {v}|,0}\\ \vdots \\ \eta _{|\vec {v}|,v_1-1}\\ \vdots \\ \eta _{|\vec {v}|,|\vec {v}|-v_p+1}\\ \vdots \\ \eta _{|\vec {v}|,|\vec {v}|-1}\end{array} \right) =-\left( \begin{array}{c} m_{|\vec {v}|}^{(1)}\\ \vdots \\ m_{|\vec {v}|+v_1-1}^{(1)}\\ \vdots \\ m_{|\vec {v}|}^{(p)}\\ \vdots \\ m_{|\vec {v}|+v_p-1}^{(p)}\end{array} \right) . \end{aligned} \end{aligned}$$
(2.4)
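A companion sketch for the type II case (with the same assumed weights) solves (2.4) for the monic polynomial \(P_{\vec {v}}\) and checks the orthogonality (2.3).

```python
# Companion sketch for the monic type II MOP P_v with the same assumed weights
# and v = (2, 2), obtained from the linear system (2.4).
import numpy as np

x = np.linspace(-12, 12, 40001)
w = [np.exp(-x**2), np.exp(-x**2 + x)]
m = lambda j, i: np.trapz(x**j * w[i], x)

v = [2, 2]
n = sum(v)
# rows (i, k) with k = 0..v_i - 1, columns j = 0..n-1: entry m_{k+j}^{(i)}
M = np.array([[m(k + j, i) for j in range(n)]
              for i in range(2) for k in range(v[i])])
rhs = -np.array([m(n + k, i) for i in range(2) for k in range(v[i])])
eta = np.linalg.solve(M, rhs)
P = np.polyval(np.r_[1.0, eta[::-1]], x)           # monic of degree |v| = 4

# orthogonality (2.3): every entry below is ~ 0
print([[round(np.trapz(P * x**j * w[i], x), 8) for j in range(v[i])]
       for i in range(2)])
```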

Similar to the type I case, we say that \( \vec {v} \) is a normal index for type II MOPs if the linear system has a unique solution. By noting that the coefficient matrix in (2.4) is the transpose of that for type I in (2.2), we know that a multi-index is normal for type II if and only if it is normal for type I. Moreover, let’s denote \(\vec {u}=(u_1,\cdots ,u_{p_1})\) and \(\vec {v}=(v_1,\cdots ,v_{p_2})\) as two multi-indices and \(\vec {\omega }=(\omega _1,\cdots ,\omega _{p_1})\) as a set of weights, and define the type I function

$$\begin{aligned} Q_{\vec {u}}(x)=\sum _{i=1}^{p_1}A_{\vec {u},i}(x)\omega _i(x), \end{aligned}$$

and type II MOP \(P_{\vec {v}}(x)\) with regard to weight \(\vec {\omega }\). Then, there is a bi-orthogonality property [35, Thm. 23.1.6]

$$\begin{aligned} \int _{{\mathbb {R}}}P_{\vec {v}}(x)Q_{\vec {u}}(x)dx=\left\{ \begin{array}{ll} 0&{} \text {if }\vec {u}\le \vec {v}\text {,}\\ 0&{} \text {if }|\vec {v}|\le |\vec {u}|-2\text {,}\\ 1&{} \text {if }|\vec {v}|=|\vec {u}|-1\text {.}\\ \end{array} \right. \end{aligned}$$
(2.5)

Besides type I and type II MOPs, a family of mixed MOPs was proposed in the study of non-intersecting Brownian motions [22]. Let’s consider a non-intersecting Brownian motion on \({\mathbb {R}}\), with \(u_{\alpha }\) paths starting at \(a_\alpha \in {\mathbb {R}}\) \((\alpha =1,\cdots ,p_1)\) and \(v_{\beta }\) paths ending at points \(b_\beta \in {\mathbb {R}}\) \((\beta =1,\cdots ,p_2)\). Since the total number of paths is conserved, we require

$$\begin{aligned} \sum _{\alpha =1}^{p_1}u_{\alpha }=\sum _{\beta =1}^{p_2}v_{\beta }. \end{aligned}$$
(2.6)

This equation plays an important role in the definition of mixed MOPs and will be explained later. Applications of mixed MOPs have recently been proposed in integrable systems and random walks [6, 7, 17, 22]. A feature of mixed MOPs is that they are orthogonal with respect to two different sets of weights. Assume that \(\vec {u}=(u_{1},\cdots ,u_{p_1})\) and \(\vec {v}=(v_{1},\cdots ,v_{p_2})\) are two multi-indices, and \(\vec {\omega }_{1}=(\omega _{1,1},\cdots ,\omega _{1,p_1})\) and \(\vec {\omega }_{2}=(\omega _{2,1},\cdots ,\omega _{2,p_2})\) are two sets of weights; then a family of polynomials \(A_{1},\cdots ,A_{p_1}\) with deg \(A_i\le u_i-1\) can be defined by the orthogonal relations

$$\begin{aligned} \int _{{\mathbb {R}}} \left( \sum _{i=1}^{p_1}A_{i}(x)\omega _{1,i}(x) \right) \omega _{2,j}(x)x^kdx=0,\quad k=0,\cdots ,v_{j}-1,\quad j=1,\cdots ,p_2. \end{aligned}$$
(2.7)

Polynomials \( A_{1},\cdots ,A_{p_1} \) are called MOPs of mixed type since the function

$$\begin{aligned} P_{\vec {u},\vec {v}}(x)=\sum _{i=1}^{p_1}A_{i}(x)\omega _{1,i}(x) \end{aligned}$$

is a linear form of the first set of weights as in type I multiple orthogonality (c.f. equation (2.1)) and has the same type of orthogonality with respect to the second set of weights as in type II multiple orthogonality (c.f. equation (2.3)). Given another pair of indices \( \vec {u}'=(u_1',\dots ,u_{p_1}') \) and \( \vec {v}'=(v_1',\dots ,v_{p_2}') \), one can also consider a family of polynomials \( B_1,\dots ,B_{p_2} \) with deg \( B_i\le v_i'-1 \) such that the linear form

$$\begin{aligned} Q_{\vec {u}',\vec {v}'}(x)=\sum _{i=1}^{p_2}B_{i}(x)\omega _{2,i}(x) \end{aligned}$$

satisfies the orthogonal relations

$$\begin{aligned} \int _{{\mathbb {R}}} x^k\omega _{1,j}(x)Q_{\vec {u}',\vec {v}'}(x) dx=0,\quad k=0,\cdots ,u_{j}'-1,\quad j=1,\cdots ,p_1. \end{aligned}$$
(2.8)

As a simple observation, the orthogonality (2.7) and (2.8) can be established equivalently by the formula

$$\begin{aligned} \int _{{\mathbb {R}}}P_{\vec {u},\vec {v}}(x)Q_{\vec {u}',\vec {v}'}(x)dx=0\text { for }\vec {u}\le \vec {u}'\text { or }\vec {v}\ge \vec {v}'\text {}. \end{aligned}$$
(2.9)

The partial order relation \( \vec {u}\le \vec {u}' \) means that \( u_i\le u_i' \) for every \( i \in \{1,2,\dots ,p_1\}\). If we denote moments

$$\begin{aligned} m_j^{(l,k)}=\int _{\mathbb {R}} x^j \omega _{1,l}(x)\omega _{2,k}(x)dx \end{aligned}$$

and assume that \(A_i(x)=\xi _{i,u_{i}-1}x^{u_{i}-1}+\cdots +\xi _{i,0}\), then orthogonal conditions (2.7) result in the following linear system

$$\begin{aligned} \left( \begin{array}{ccccccc} m_0^{(1,1)}&{}\cdots &{}m_{u_{1}-1}^{(1,1)}&{}\cdots &{}m_0^{(p_1,1)}&{}\cdots &{}m_{u_{p_1}-1}^{(p_1,1)}\\ \vdots &{}&{}\vdots &{}&{}\vdots &{}&{}\vdots \\ m_{v_{1}-1}^{(1,1)}&{}\cdots &{}m_{u_{1}+v_{1}-2}^{(1,1)}&{}\cdots &{}m_{v_{1}-1}^{(p_1,1)}&{}\cdots &{}m_{u_{p_1}+v_{1}-2}^{(p_1,1)}\\ \vdots &{}&{}\vdots &{}&{}\vdots &{}&{}\vdots \\ m_0^{(1,p_2)}&{}\cdots &{}m_{u_{1}-1}^{(1,p_2)}&{}\cdots &{}m_{0}^{(p_1,p_2)}&{}\cdots &{}m_{u_{p_1}-1}^{(p_1,p_2)}\\ \vdots &{}&{}\vdots &{}&{}\vdots &{}&{}\vdots \\ m_{v_{p_2}-1}^{(1,p_2)}&{}\cdots &{}m_{u_{1}+v_{p_2}-2}^{(1,p_2)}&{}\cdots &{}m_{v_{p_2}-1}^{(p_1,p_2)}&{}\cdots &{}m_{u_{p_1}+v_{p_2}-2}^{(p_1,p_2)}\end{array} \right) \left( \begin{array}{c} \xi _{1,0}\\ \vdots \\ \xi _{1,u_{1}-1}\\ \vdots \\ \xi _{p_1,0}\\ \vdots \\ \xi _{p_1,u_{p_1}-1} \end{array} \right) =0 \end{aligned}$$

with \(|\vec {v}|\) equations and \(|\vec {u}|\) unknowns. Therefore, to ensure a nonzero solution of the linear system, one needs to assume that \(|\vec {u}|=|\vec {v}|+1\). (For \( Q_{\vec {u}',\vec {v}'} \), we require \( |\vec {u}'|+1=|\vec {v}'| \).) By solving the linear equations directly using Cramer’s rule, we see that the linear forms \( P_{\vec {u},\vec {v}}(x) \) and \( Q_{\vec {u}',\vec {v}'}(x) \) are proportional to the determinants

$$\begin{aligned}&P_{\vec {u},\vec {v}}(x)=\sum _{i=1}^{p_1}A_i(x)\omega _{1,i}(x)\sim \det \left( \begin{array}{ccc} A^{(1,1)}_{u_1,v_1}&{}\cdots &{}A^{(p_1,1)}_{u_{p_1},v_1}\\ \vdots &{}&{}\vdots \\ A^{(1,p_2)}_{u_1,v_{p_2}}&{}\cdots &{}A^{(p_1,p_2)}_{u_{p_1},v_{p_2}}\\ \psi _1(x)&{}\cdots &{}\psi _{p_1}(x) \end{array} \right) ,\\&Q_{\vec {u}',\vec {v}'}(x)=\sum _{i=1}^{p_2}B_i(x)\omega _{2,i}(x)\sim \det \left( \begin{array}{cccc} A^{(1,1)}_{u_1',v_1'}&{}\cdots &{}A^{(p_1,1)}_{u_{p_1}',v_1'}&{}\varphi _1(x)\\ \vdots &{}&{}\vdots &{}\vdots \\ A^{(1,p_2)}_{u_1',v_{p_2}'}&{}\cdots &{}A^{(p_1,p_2)}_{u_{p_1}',v_{p_2}'}&{}\varphi _{p_2}(x)\\ \end{array} \right) , \end{aligned}$$

where

$$\begin{aligned} \psi _i(x)&=\omega _{1,i}(x)(1,x,\cdots ,x^{u_{i}-1}),\quad \\ \varphi _i(x)&=\omega _{2,i}(x)(1,x,\cdots ,x^{v_{i}'-1})',\quad \\ A^{(a,b)}_{u_i,v_j}&=\left( m_{l+k}^{(a,b)}\right) _{\begin{array}{c} {k=0,\cdots ,v_{j}-1}\\ {l=0,\cdots ,u_{i}-1} \end{array}}. \end{aligned}$$

Such a formula implies that one can regard the block moment matrix as a non-abelian moment matrix. Therefore, MOPs of type I, type II and mixed type are special non-abelian orthogonal polynomials as discussed in [8, 40]. Moreover, if the polynomials \( \{A_j\}_{j=1}^{p_1} \) and \( \{B_j\}_{j=1}^{p_2} \) are unique up to a multiplicative constant, then we call \( (\vec {u},\vec {v}) \) a normal pair of indices for the sets of weights \( \vec {\omega }_{1} \) and \( \vec {\omega }_{2} \). Therefore, it is always possible to choose a proper normalization to uniquely define MOPs of mixed type with regard to a normal pair of indices. In agreement with formula (2.6), we require that \(|\vec {v}|=|\vec {u}|\), and the desired linear forms \(P_{\vec {u}+\vec {e}_a,\vec {v}}(x)\) and \(Q_{\vec {u},\vec {v}+\vec {e}_b}(x)\) satisfy the orthonormality condition

$$\begin{aligned} \int _{{\mathbb {R}}}P_{\vec {u}+\vec {e}_a,\vec {v}}(x)Q_{\vec {u},\vec {v}+\vec {e}_b}(x)dx=1. \end{aligned}$$

In the above formula,

$$\begin{aligned} \vec {e}_k=(0,\dots ,1,\dots ,0) \quad \text {where 1 is in the } k \text {th position} \end{aligned}$$

is the unit vector, and \(1\le a\le p_1\) and \(1\le b\le p_2\) are fixed integers.

If we further assume that \(P_{\vec {u}+\vec {e}_a,\vec {v}}(x)\) and \(Q_{\vec {u},\vec {v}+\vec {e}_b}\) have the same coefficients for the terms \(x^{u_a}\omega _{1,a}(x)\) and \(x^{v_b}\omega _{2,b}(x)\), respectively, then by solving the linear system, we have

$$\begin{aligned} \begin{aligned}&P_{\vec {u}+\vec {e}_a,\vec {v}}(x)=\frac{(-1)^{\sum _{i=b+1}^{p_2}v_i}}{c_{\vec {u},\vec {v}}^{(a,b)}}\det \left( \begin{array}{ccccc} A^{(1,1)}_{u_1,v_1}&{}{}\cdots &{}{}A_{u_a+1,v_1}^{(a,1)}&{}{}\cdots &{}{}A^{(p_1,1)}_{u_{p_1},v_1}\\ \vdots &{}{}&{}{}\vdots &{}{}&{}{}\vdots \\ A^{(1,p_2)}_{u_1,v_{p_2}}&{}{}\cdots &{}{}A^{(a,p_2)}_{u_a+1,v_{p_2}}&{}{}\cdots &{}{}A^{(p_1,p_2)}_{u_{p_1},v_{p_2}}\\ \psi _1(x)&{}{}\cdots &{}{}{\tilde{\psi }}_{a}(x)&{}{}\cdots &{}{}\psi _{p_1}(x) \end{array} \right) ,\\ {}&Q_{\vec {u},\vec {v}+\vec {e}_b}(y)=\frac{(-1)^{\sum _{j=a+1}^{p_1}u_j}}{c_{\vec {u},\vec {v}}^{(a,b)}}\det \left( \begin{array}{cccccc} A^{(1,1)}_{u_1,v_1}&{}{}\cdots &{}{}A^{(p_1,1)}_{u_{p_1},v_1}&{}{}\varphi _1(x)\\ \vdots &{}{}&{}{}\vdots &{}{}\vdots \\ A^{(1,b)}_{u_1,v_b+1}&{}{}\cdots &{}{}A^{(p_1,b)}_{u_{p_1},v_b+1}&{}{}{\tilde{\varphi }}_b(x)\\ \vdots &{}{}&{}{}\vdots &{}{}\vdots \\ A^{(1,p_2)}_{u_1,v_{p_2}}&{}{}\cdots &{}{}A^{(p_1,p_2)}_{u_{p_1},v_{p_2}}&{}{}\varphi _{p_2}(x)\\ \end{array} \right) , \end{aligned} \end{aligned}$$

where \(A_{u_i,v_j}^{(a,b)}\) was defined before,

$$\begin{aligned}&\psi _i(x)=\omega _{1,i}(x)(1,x,\cdots ,x^{u_{i}-1}),\,\,\,(i\ne a),&{\tilde{\psi }}_a(x)=\omega _{1,a}(x)(1,x,\cdots ,x^{u_{a}}), \\&\varphi _j(x)=\omega _{2,j}(x)(1,x,\cdots ,x^{v_{j}-1})',\,(j\ne b),&{\tilde{\varphi }}_b(x)=\omega _{2,b}(x)(1,x,\cdots ,x^{v_{b}})', \end{aligned}$$

and

$$\begin{aligned} c_{\vec {u},\vec {v}}^{(a,b)}=\left( \det \left[ \begin{array}{ccc} A^{(1,1)}_{u_1,v_1}&{}\cdots &{}A^{(p_1,1)}_{u_{p_1},v_1}\\ \vdots &{}&{}\vdots \\ A^{(1,p_2)}_{u_1,v_{p_2}}&{}\cdots &{}A^{(p_1,p_2)}_{u_{p_1},v_{p_2}} \end{array} \right] \det \left[ \begin{array}{ccccc} A^{(1,1)}_{u_1,v_1}&{}\dots &{}A^{(a,1)}_{u_a+1,v_1}&{}\dots &{}A^{(p_1,1)}_{u_{p_1},v_1}\\ \vdots &{}&{}\vdots &{}&{}\vdots \\ A_{u_1,v_{b}+1}^{(1,b)}&{}\dots &{}A^{(a,b)}_{u_a+1,v_b+1}&{}\dots &{}A_{u_{p_1},v_{b}+1}^{(p_1,b)}\\ \vdots &{}&{}\vdots &{}&{}\vdots \\ A^{(1,p_2)}_{u_1,v_{p_2}}&{}\dots &{}A^{(a,p_2)}_{u_a+1,v_{p_2}}&{}\dots &{}A^{(p_1,p_2)}_{u_{p_1},v_{p_2}}\end{array} \right] \right) ^{1/2}. \end{aligned}$$

2.2 A Bi-Orthogonal Generalization of MOPs

This part is devoted to the bi-orthogonal generalization of MOPs with inner product (1.6). Let’s consider two pairs of different multi-indices \(\vec {u}=(u_1,\cdots ,u_{p_1})\), \(\vec {v}=(v_1,\cdots ,v_{p_2})\) and \(\vec {u}'=(u_1',\cdots ,u_{p_1}')\), \(\vec {v}'=(v_1',\cdots ,v_{p_2}')\), together with weights \(\vec {\omega }_{1}=(\omega _{1,1},\cdots ,\omega _{1,p_1})\) and \(\vec {\omega }_{2}=(\omega _{2,1},\cdots ,\omega _{2,p_2})\) supported on contours \(\gamma _1\) and \(\gamma _2\), respectively. Then, one can introduce a coupling function

$$\begin{aligned} \mathbb {S}(x,y):\gamma _1\times \gamma _2\rightarrow {\mathbb {R}}, \end{aligned}$$

such that bi-moments

$$\begin{aligned} m_{k,l}^{(a,b)}=\int _{\gamma _1\times \gamma _2}x^ky^l \mathbb {S}(x,y)\omega _{1,a}(x)\omega _{2,b}(y)dxdy \end{aligned}$$

exist and are finite for any \(1\le a\le p_1\), \(1\le b\le p_2\). Therefore, we can define polynomials \( \{A_i\}_{i=1}^{p_1} \) together with their counterparts \(\{B_j\}_{j=1}^{p_2} \) such that they satisfy the orthogonal relations

$$\begin{aligned}&\int _{\gamma _1\times \gamma _2}\left( \sum _{i=1}^{p_1}A_i(x)\omega _{1,i}(x) \right) \mathbb {S}(x,y)y^k\omega _{2,j}(y)dxdy=0,\quad k=0,\cdots ,v_j-1,\\&\quad j=1,\cdots ,p_2,\\&\int _{\gamma _1\times \gamma _2}x^k\omega _{1,j}(x)\mathbb {S}(x,y)\left( \sum _{i=1}^{p_2}B_i(y)\omega _{2,i}(y) \right) dxdy=0,\quad k=0,\cdots ,u_j'-1,\\&\quad j=1,\cdots ,p_1. \end{aligned}$$
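As a minimal illustration of how such bi-moments are evaluated in practice, the sketch below uses an assumed coupling function \(\mathbb {S}(x,y)=e^{-xy}\) and Gaussian-type weights on a common grid.

```python
# Illustrative evaluation of bi-moments: assumed coupling S(x,y) = exp(-x*y)
# with Gaussian-type weights, both supported on the same grid.
import numpy as np

x = np.linspace(-8, 8, 801)
X, Y = np.meshgrid(x, x, indexing="ij")
S = np.exp(-X * Y)
w1 = [np.exp(-x**2), x**2 * np.exp(-x**2)]         # assumed weights on gamma_1
w2 = [np.exp(-x**2), np.exp(-x**2 + x)]            # assumed weights on gamma_2

def bimoment(k, l, a, b):
    f = X**k * Y**l * S * w1[a][:, None] * w2[b][None, :]
    return np.trapz(np.trapz(f, x, axis=1), x)

print(np.array([[bimoment(k, l, 0, 1) for l in range(3)] for k in range(3)]))
# a 3x3 block of m_{k,l}^{(1,2)}, the building block of the matrices below
```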

To uniquely determine these multiple bi-orthogonal polynomials (MBOPs), we follow our discussions about MOPs of mixed type, and the formal definition is given below.

Definition 2.1

Suppose we have two pairs of multi-indices \(\vec {u}=(u_1,\cdots ,u_{p_1})\), \(\vec {v}=(v_1,\cdots ,v_{p_2})\) and \(\vec {u}'=(u_1',\cdots ,u_{p_1}')\), \(\vec {v}'=(v_1',\cdots ,v_{p_2}')\) with \(|\vec {u}|=|\vec {v}|\) and \( |\vec {u}'|=|\vec {v}'| \), together with two sets of weights \(\vec {\omega _{1}}\) and \(\vec {\omega _{2}}\), which are supported on contours \(\gamma _1\) and \(\gamma _2\), respectively. Fix integers \( 1\le a\le p_1 \) and \( 1\le b\le p_2 \). If \(\mathbb {S}(x,y)\) is a nice enough function from \(\gamma _1\times \gamma _2\) to \({\mathbb {R}}\) so that all moments exist and are finite, then there are unique multiple bi-orthogonal functions

$$\begin{aligned} P_{\vec {u}+\vec {e}_a,\vec {v}}(x)=\sum _{i=1}^{p_1}A_i(x)\omega _{1,i}(x), \text { where deg } A_i(x)\le u_i-1 (i\ne a) \text { and deg } A_a(x)\le u_a \text {},\\ Q_{\vec {u}',\vec {v}'+\vec {e}_b}(y)=\sum _{i=1}^{p_2}B_i(y)\omega _{2,i}(y), \text { where deg } B_i(y)\le v_i'-1 (i\ne b)\text { and deg } B_b(y)\le v_b' \text {} \end{aligned}$$

satisfying multiple orthogonal relations

$$\begin{aligned} \begin{aligned} \int _{\gamma _1\times \gamma _2}P_{\vec {u}+\vec {e}_a,\vec {v}}(x)\mathbb {S}(x,y)Q_{\vec {u}',\vec {v}'+\vec {e}_b}(y)dxdy=\left\{ \begin{array}{ll} 0 &{} \text { if } \vec {u}+\vec {e}_a\le \vec {u}',\\ 0 &{} \text { if } \vec {v}\ge \vec {v}'+\vec {e}_b,\\ 1 &{} \text { if } \vec {u}=\vec {u}' \text { and } \vec {v}=\vec {v}'. \end{array} \right. \end{aligned} \end{aligned}$$
(2.10)

It is required that \(P_{\vec {u}+\vec {e}_a,\vec {v}}(x)\) and \(Q_{\vec {u},\vec {v}+\vec {e}_b}(y)\) have the same normalization factor.

By introducing

$$\begin{aligned}&\psi _{i}(x)=\omega _{1,i}(x)(1,x,\cdots ,x^{u_i-1}),&{\tilde{\psi }}_{i}(x)=\omega _{1,i}(x)(1,x,\cdots ,x^{u_i}),{} & {} i=1,\cdots ,p_1, \\&\varphi _i(x)=\omega _{2,i}(x)(1,x,\cdots ,x^{v_i-1})',&{\tilde{\varphi }}_i(x)=\omega _{2,i}(x)(1,x,\cdots ,x^{v_i})',{} & {} i=1,\cdots ,p_2, \end{aligned}$$

and solving the orthogonal relations (2.10), we know that

$$\begin{aligned}&P_{\vec {u}+\vec {e}_a,\vec {v}}(x)=\frac{(-1)^{\sum _{i=b+1}^{p_2}v_i}}{c_{\vec {u},\vec {v}}^{(a,b)}}\det \left( \begin{array}{ccccc} A^{(1,1)}_{u_1,v_1}&{}\dots &{}A^{(a,1)}_{u_a+1,v_1}&{}\dots &{}A^{(p_1,1)}_{u_{p_1},v_1}\\ \vdots &{}&{}\vdots &{}&{}\vdots \\ A^{(1,p_2)}_{u_1,v_{p_2}}&{}\cdots &{}A^{(a,p_2)}_{u_a+1,v_{p_2}}&{}\dots &{}A^{(p_1,p_2)}_{u_{p_1},v_{p_2}}\\ \psi _{1}(x)&{}\cdots &{}{\tilde{\psi }}_{a}(x)&{}\dots &{}\psi _{p_1}(x)\end{array} \right) ,\\&Q_{\vec {u},\vec {v}+\vec {e}_b}(y)=\frac{(-1)^{\sum _{j=a+1}^{p_1}u_j}}{c_{\vec {u},\vec {v}}^{(a,b)}}\det \left( \begin{array}{cccc} A^{(1,1)}_{u_1,v_1}&{}\cdots &{}A^{(p_1,1)}_{u_{p_1},v_1}&{}\varphi _{1}(y)\\ \vdots &{}&{}\vdots &{}\vdots \\ A^{(1,b)}_{u_1,v_b+1}&{}\cdots &{}A^{(p_1,b)}_{u_{p_1},v_b+1}&{}{\tilde{\varphi }}_{b}(y)\\ \vdots &{}&{}\vdots &{}\vdots \\ A^{(1,p_2)}_{u_1,v_{p_2}}&{}\cdots &{}A^{(p_1,p_2)}_{u_{p_1},v_{p_2}}&{}\varphi _{p_2}(y)\end{array} \right) , \end{aligned}$$

where \( A_{u_i,v_j}^{(i,j)}=(m_{l,k}^{(i,j)})_{\begin{array}{c} k=0,\dots ,v_j-1\\ l=0,\dots ,u_i-1 \end{array}} \) and

$$\begin{aligned} c_{\vec {u},\vec {v}}^{(a,b)}=\left( \det \left[ \begin{array}{ccc} A^{(1,1)}_{u_1,v_1}&{}\cdots &{}A^{(p_1,1)}_{u_{p_1},v_1}\\ \vdots &{}&{}\vdots \\ A^{(1,p_2)}_{u_1,v_{p_2}}&{}\cdots &{}A^{(p_1,p_2)}_{u_{p_1},v_{p_2}} \end{array} \right] \det \left[ \begin{array}{ccccc} A^{(1,1)}_{u_1,v_1}&{}\dots &{}A^{(a,1)}_{u_a+1,v_1}&{}\dots &{}A^{(p_1,1)}_{u_{p_1},v_1}\\ \vdots &{}&{}\vdots &{}&{}\vdots \\ A_{u_1,v_{b}+1}^{(1,b)}&{}\dots &{}A^{(a,b)}_{u_a+1,v_b+1}&{}\dots &{}A_{u_{p_1},v_{b}+1}^{(p_1,b)}\\ \vdots &{}&{}\vdots &{}&{}\vdots \\ A^{(1,p_2)}_{u_1,v_{p_2}}&{}\dots &{}A^{(a,p_2)}_{u_a+1,v_{p_2}}&{}\dots &{}A^{(p_1,p_2)}_{u_{p_1},v_{p_2}}\end{array} \right] \right) ^{1/2}. \end{aligned}$$

Remark 2.2

When \(\vec {u}\) and \(\vec {v}\) have only one index, multiple bi-orthogonal polynomials degenerate to ordinary bi-orthogonal polynomials, which have been well investigated. For example, the case \(\mathbb {S}(x,y)=e^{-cxy}\) (where c is a coupling constant) is related to a coupled Hermitian matrix model and was studied in [4, 45]. Moreover, the case \(\mathbb {S}(x,y)=(x+y)^{-1}\) gives rise to the so-called Cauchy bi-orthogonal polynomials, which have attracted attention in different fields such as random matrix theory, peakon systems and approximation theory [13, 14, 41, 43].

2.3 Multiple Symmetric Bi-Orthogonal Polynomials and Multiple Skew-Orthogonal Polynomials

In this part, we prepare to give a definition for multiple skew-orthogonal polynomials. Due to the difficulties posed by skew orthogonality, we first take a look at multiple symmetric bi-orthogonal polynomials and then move to the skew symmetric case.

2.3.1 Multiple Symmetric Bi-Orthogonal Polynomials

In the symmetric case, we need to assume that the multi-indices \(\vec {u}\) and \(\vec {v}\), as well as the weights \(\vec {\omega }_1\) and \(\vec {\omega }_2\), are the same; that is, we have only one multi-index \(\vec {v}=(v_1,\cdots ,v_{p})\) and one family of weights \((\omega _{1},\cdots ,\omega _{p})\) supported on \(\gamma \). Moreover, the coupling function \(\mathbb {S}(x,y):\gamma \times \gamma \rightarrow {\mathbb {R}}\) is a symmetric function, i.e., \(\mathbb {S}(x,y)=\mathbb {S}(y,x)\). Therefore, the moments in this setting can be written as

$$\begin{aligned} m_{k,l}^{(i,j)}:=\int _{\gamma \times \gamma } x^ky^l\mathbb {S}(x,y)\omega _i(x)\omega _j(y)dxdy, \end{aligned}$$

and obviously \(m_{k,l}^{(i,j)}=m_{l,k}^{(j,i)}\). Let \(b\in {\mathbb {Z}}\) with \( 1\le b\le p \). Then we have a sequence of symmetric MBOPs \(\{A_i(x)\}_{i=1}^p\), where deg \(A_i\le v_i-1\) \((i=1,\cdots ,p, i\ne b)\) and deg \(A_b\le v_b\), such that the corresponding linear form \( P_{\vec {v}}(x)=\sum _{i=1}^{p}A_i(x)\omega _i(x) \) satisfies the orthogonal relation

$$\begin{aligned} \int _{\gamma \times \gamma }P_{\vec {v}}(x)\mathbb {S}(x,y)P_{\vec {v}'}(y)dxdy=\left\{ \begin{array}{ll} 0 &{} \text { if } \vec {v}+\vec {e}_b\le \vec {v}' \text { or } \vec {v}\ge \vec {v}'+\vec {e}_b \text {},\\ 1 &{} \text { if } \vec {v}=\vec {v}' \text {}. \end{array} \right. \end{aligned}$$
(2.11)

In order to solve the relations, it is useful to write the following equivalent form

$$\begin{aligned}&\int _{\gamma \times \gamma }\left( \sum _{i=1}^{p}A_i(x)\omega _i(x) \right) \mathbb {S}(x,y)y^k\omega _j(y)dxdy=0,\quad k=0,\cdots ,v_j-1,\quad j=1,\cdots ,p \end{aligned}$$
(2.12a)
$$\begin{aligned}&\int _{\gamma \times \gamma }\left( \sum _{i=1}^{p}A_i(x)\omega _i(x) \right) \mathbb {S}(x,y)y^{v_b}\omega _b(y)dxdy=h_{\vec {v}}^{(b)}\ne 0. \end{aligned}$$
(2.12b)

If we assume that \(A_i(x)=a_{i,v_i-1}x^{v_i-1}+\cdots +a_{i,0}\) \((i\ne b, 1\le i\le p)\) and \(A_b(x)=a_{b,v_b}x^{v_b}+\cdots +a_{b,0}\), then the above linear system is equivalent to

$$\begin{aligned} \left( \begin{array}{ccccc} A_{v_1,v_1}^{(1,1)}&{}\cdots &{}A_{v_b+1,v_1}^{(b,1)}&{}\cdots &{}A_{v_p,v_1}^{(p,1)}\\ \vdots &{}&{}\vdots &{}&{}\vdots \\ A_{v_1,v_b+1}^{(1,b)}&{}\cdots &{}A_{v_b+1,v_b+1}^{(b,b)}&{}\cdots &{}A_{v_p,v_b+1}^{(p,b)}\\ \vdots &{}&{}\vdots &{}&{}\vdots \\ A_{v_1,v_p}^{(1,p)}&{}\cdots &{}A_{v_b+1,v_p}^{(b,p)}&{}\cdots &{}A_{v_p,v_p}^{(p,p)}\end{array} \right) \left( \begin{array}{c} \alpha ^{(1)}\\ \vdots \\ \alpha ^{(b)}\\ \vdots \\ \alpha ^{(p)}\end{array} \right) =\left( \begin{array}{c} 0\\ \vdots \\ {\vec {e}_b}^\top \\ \vdots \\ 0 \end{array} \right) , \end{aligned}$$
(2.13)

where

$$\begin{aligned} \alpha ^{(i)}=(a_{i,0},\cdots ,a_{i,v_i-1})',\,(i\ne b, 1\le i\le p),\quad \alpha ^{(b)}=(a_{b,0},\cdots ,a_{b,v_b})' \end{aligned}$$

and \({\vec {e}_b}^\top \) is the transpose of \(\vec {e}_b\). Therefore, we can obtain the following determinant form

$$\begin{aligned} P_{\vec {v}}(x)=\sum _{i=1}^p A_i(x)\omega _i(x)=\frac{(-1)^{\sum _{i=b+1}^{p}v_i}}{c_{\vec {v}}^{(b)}}\det \left( \begin{array}{ccccc} A_{v_1,v_1}^{(1,1)}&{}\cdots &{}A_{v_b+1,v_1}^{(b,1)}&{}\cdots &{}A_{v_p,v_1}^{(p,1)}\\ \vdots &{}&{}\vdots &{}&{}\vdots \\ A_{v_1,v_p}^{(1,p)}&{}\cdots &{}A_{v_b+1,v_p}^{(b,p)}&{}\cdots &{}A_{v_p,v_p}^{(p,p)}\\ \psi _1(x)&{}\cdots &{}\psi _b(x)&{}\cdots &{}\psi _p(x) \end{array} \right) , \end{aligned}$$

where \(\psi _i(x)=\omega _i(x)(1,\cdots ,x^{v_i-1})\) \((i\ne b, 1\le i\le p)\) and \(\psi _b(x)=\omega _b(x)(1,\cdots ,x^{v_b})\). If we denote

$$\begin{aligned} \tau _{(v_1,\cdots ,v_p)}=\det \left( \begin{array}{ccc} A_{v_1,v_1}^{(1,1)}&{}\cdots &{}A_{v_p,v_1}^{(p,1)}\\ \vdots &{}&{}\vdots \\ A_{v_1,v_p}^{(1,p)}&{}\cdots &{}A_{v_p,v_p}^{(p,p)} \end{array} \right) ,\quad A_{\alpha ,\beta }^{(i,j)}=\left( m_{l,k}^{(i,j)}\right) _{\begin{array}{c} {k=0,\cdots ,\alpha -1}\\ {l=0,\cdots ,\beta -1} \end{array}}=A_{\beta ,\alpha }^{(j,i)}, \end{aligned}$$

then we have \(c_{\vec {v}}^{(b)}=(\tau _{(v_1,\cdots ,v_p)}\tau _{(v_1,\cdots ,v_b+1,\cdots ,v_p)})^{1/2}\) and \(h_{\vec {v}}^{(b)}\) in (2.12b) could be expressed by \((\tau _{(v_1,\cdots ,v_b+1,\cdots ,v_p)}/\tau _{(v_1,\cdots ,v_p)})^{1/2}\).

2.3.2 Multiple Skew-Orthogonal Polynomials

Let’s consider a skew symmetric kernel \(\mathbb {S}(x,y)=-\mathbb {S}(y,x)\) such that

$$\begin{aligned} m_{k,l}^{(a,b)}&:=\int _{\gamma \times \gamma } x^ky^l\mathbb {S}(x,y)\omega _a(x)\omega _b(y)dxdy=-\int _{\gamma \times \gamma }x^ly^k\mathbb {S}(x,y)\omega _b(x)\omega _a(y)dxdy\\&=-m_{l,k}^{(b,a)}. \end{aligned}$$
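This skew symmetry can be checked directly; the sketch below does so for an assumed kernel \(\mathbb {S}(x,y)=\textrm{sgn}(x-y)\) and two Gaussian-type weights (on a symmetric grid the relation also survives discretization, up to rounding).

```python
# Check of the skew symmetry m_{k,l}^{(a,b)} = -m_{l,k}^{(b,a)} for an assumed
# skew kernel S(x,y) = sgn(x - y) and two Gaussian-type weights.
import numpy as np

x = np.linspace(-8, 8, 801)
X, Y = np.meshgrid(x, x, indexing="ij")
S = np.sign(X - Y)
w = [np.exp(-x**2), np.exp(-x**2 + x)]

def m(k, l, a, b):                                 # m_{k,l}^{(a,b)}
    f = X**k * Y**l * S * w[a][:, None] * w[b][None, :]
    return np.trapz(np.trapz(f, x, axis=1), x)

err = max(abs(m(k, l, a, b) + m(l, k, b, a))
          for k in range(3) for l in range(3) for a in range(2) for b in range(2))
print(err)                                         # at rounding level: the matrix is skew
```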

Then for a multi-index \(\vec {v}=(v_1,\cdots ,v_p)\) and a sequence of weights \((\omega _1,\cdots ,\omega _p)\), we can define corresponding polynomials \((R_1(x),\cdots ,R_p(x))\), where deg \(R_i\le v_i-1\) (\(i=1,\cdots ,p\)). Since our primary aim is to seek the linear form \(\sum _{i=1}^p R_i(x)\omega _i(x)\), which is simultaneously skew orthogonal with respect to several weights, we first consider the multiple skew-orthogonal relations

$$\begin{aligned} \int _{\gamma \times \gamma } \left( \sum _{i=1}^p R_i(x)\omega _i(x) \right) \mathbb {S}(x,y)y^k\omega _j(y)dxdy=0,\quad k=0,\cdots ,v_j-1,\quad j=1,\cdots ,p. \end{aligned}$$
(2.14)

If we denote \(R_i(x)=a_{i,v_i-1}x^{v_i-1}+\cdots +a_{i,0}\), then equation (2.14) implies

$$\begin{aligned} \left( \begin{array}{ccc} A_{v_1,v_1}^{(1,1)}&{}\cdots &{}A_{v_p,v_1}^{(p,1)}\\ \vdots &{}&{}\vdots \\ A_{v_1,v_p}^{(1,p)}&{}\cdots &{}A_{v_p,v_p}^{(p,p)} \end{array} \right) \left( \begin{array}{c} \alpha ^{(1)}\\ \vdots \\ \alpha ^{(p)} \end{array} \right) =0, \end{aligned}$$

where \( A_{u_i,v_j}^{(i,j)}=(m_{l,k}^{(i,j)})_{\begin{array}{c} k=0,\dots ,v_j-1\\ l=0,\dots ,u_i-1 \end{array}} \) and \( \alpha ^{(i)}=(a_{i,0},\cdots ,a_{i,v_i-1})' \). Since the matrix is skew symmetric, non-trivial solutions for \(\alpha ^{(i)}\) always exist when \(v_1+\cdots +v_p\) is odd (i.e., when \(|\vec {v}|\) is odd). Therefore, a key observation is that MSOPs are defined only for odd \(|\vec {v}|\). The normalization condition is then given by

$$\begin{aligned} \int _{\gamma \times \gamma }\left( \sum _{i=1}^p R_i(x)\omega _i(x) \right) \mathbb {S}(x,y)y^{v_b}\omega _b(y)dxdy=h_{\vec {v}}^{(b)}\ne 0, \end{aligned}$$

where b is a fixed integer between 1 and p. To conclude, we have the following definition for multiple skew-orthogonal polynomials.

Definition 2.3

Given a multi-index \(\vec {v}=(v_1,\cdots ,v_p)\) such that \(|\vec {v}|=v_1+\cdots +v_p\) is odd, if there are p different weights \((\omega _1,\cdots ,\omega _p)\) supported on \(\gamma \) and \(\mathbb {S}(x,y)\) is a skew symmetric function from \(\gamma \times \gamma \) to \({\mathbb {R}}\) so that all moments are finite, then for a fixed integer \(b\in \{1,2,\dots ,p\}\), there exist multiple skew-orthogonal polynomials \(R_1(x),\cdots ,R_p(x)\) and \({\tilde{R}}_b(x)\), such that

$$\begin{aligned} \begin{aligned}&\int _{\gamma \times \gamma }\left( \sum _{i=1}^{p}R_i(x)\omega _i(x) \right) \mathbb {S}(x,y)y^j\omega _k(y)dxdy=0,\quad j=0,\cdots ,v_k-1,\quad k=1,\cdots ,p,\\&\int _{\gamma \times \gamma }\left( \sum _{i=1}^{p}R_i(x)\omega _i(x) \right) \mathbb {S}(x,y)\left( \sum _{\begin{array}{c} i=1\\ i\ne b \end{array}}^{p}R_i(y)\omega _i(y)+{\tilde{R}}_b(y)\omega _b(y) \right) dxdy=1, \end{aligned} \end{aligned}$$
(2.15)

where deg \(R_i(x)\le v_i-1\) \((i=1,\cdots ,p)\), and deg \({\tilde{R}}_b(x)\le v_b\). Here, we assume that coefficients in the highest-order terms of \(R_b\) and \({\tilde{R}}_b\) are the same.

Remark 2.4

In contrast with the orthogonal relations (2.11) for symmetric MBOPs, the skew inner product of the linear form of MSOPs with itself is equal to zero, i.e.,

$$\begin{aligned} \int _{\gamma \times \gamma } \left( \sum _{i=1}^{p}R_i(x)\omega _i(x)\right) \mathbb {S}(x,y)\left( \sum _{i=1}^{p}R_i(y)\omega _i(y) \right) dxdy=0. \end{aligned}$$

Note that the skew orthogonality is not affected by the transformation \({\tilde{R}}_b(y)\rightarrow {\tilde{R}}_b(y)+\alpha R_b(y)\) for all \(\alpha \in {\mathbb {R}}\). Therefore, for later convenience, we denote \((R_1(x),\cdots ,R_p(x),{\tilde{R}}_b(x))\) as a family of multiple skew-orthogonal polynomials and set the coefficient of \(x^{v_b-1}\omega _b(x)\) in \(\sum _{\begin{array}{c} i=1\\ i\ne b \end{array}}^pR_i(x)\omega _i(x)+{\tilde{R}}_b(x)\omega _b(x)\) as 0.

By assuming that coefficients in the highest-order terms of \(R_b(x)\) and \({\tilde{R}}_b(x)\) are the same, equation (2.15) has a unique solution and we have

$$\begin{aligned} R_{\vec {v}}^{(b)}(x)&:=\sum _{i=1}^pR_i(x)\omega _i(x)={\frac{(-1)^{\sum _{i=b+1}^{p}v_i}}{c_{\vec {v}}^{(b)}}} \det \left( \begin{array}{ccc} A_{v_1,v_1}^{(1,1)}&{}\cdots &{}A_{v_p,v_1}^{(p,1)}\\ \vdots &{}&{}\vdots \\ A_{v_1,v_b-1}^{(1,b)}&{}\cdots &{}A_{v_p,v_b-1}^{(p,b)}\\ \vdots &{}&{}\vdots \\ A_{v_1,v_p}^{(1,p)}&{}\cdots &{}A_{v_p,v_p}^{(p,p)}\\ \psi _1(x)&{}\cdots &{}\psi _p(x) \end{array} \right) , \end{aligned}$$
(2.16a)
$$\begin{aligned} {\tilde{R}}_{\vec {v}}^{(b)}(x)&:=\sum _{\begin{array}{c} i=1\\ i\ne b \end{array}}^{p}R_i(y)\omega _i(y)+{\tilde{R}}_b(y)\omega _b(y) \end{aligned}$$
(2.16b)
$$\begin{aligned}&={\frac{1}{c_{\vec {v}}^{(b)}}} \det \left( \begin{array}{cccccc} A_{v_1,v_1}^{(1,1)}&{}\cdots &{}A_{v_b-1,v_1}^{(b,1)}&{}\cdots &{}A_{v_p,v_1}^{(p,1)}&{}\psi _1(y)'\\ \vdots &{}&{}\vdots &{}&{}\vdots &{}\vdots \\ A_{v_1,v_b-1}^{(1,b)}&{}\cdots &{}A_{v_b-1,v_b-1}^{(b,b)}&{}\cdots &{}A_{v_p,v_b-1}^{(p,b)}&{}{\tilde{\psi }}_b(y)'\\ \vdots &{}&{}\vdots &{}&{}\vdots &{}\vdots \\ A_{v_1,v_p}^{(1,p)}&{}\cdots &{}A_{v_b-1,v_p}^{(b,p)}&{}\cdots &{}A_{v_p,v_p}^{(p,p)}&{}\psi _p(y)'\\ M_{v_1,v_b}^{(1,b)}&{}\cdots &{}M_{v_b-1,v_b}^{(b,b)}&{}\cdots &{}M_{v_p,v_b}^{(p,b)}&{}y^{v_b}\omega _b(y) \end{array} \right) , \end{aligned}$$
(2.16c)

where

$$\begin{aligned} \psi _i(x)&=\omega _i(x)(1,\cdots ,x^{v_i-1}),\,(i=1,\cdots ,p),\quad \\ {\tilde{\psi }}_b(x)&=\omega _b(x)(1,\cdots ,x^{v_b-2}), \quad \\ M_{v_i,v_j}^{(k,l)}&=(m_{i,v_j}^{(k,l)})_{i=0}^{v_i-1} \end{aligned}$$

and the normalization factor \( c^{(b)}_{\vec {v}} \) is given by

$$\begin{aligned} (c^{(b)}_{\vec {v}})^{2}=\det \left( \begin{array}{ccc} A_{v_1,v_1}^{(1,1)}&{}\cdots &{}A_{v_p,v_1}^{(p,1)}\\ \vdots &{}&{}\vdots \\ A_{v_1,v_b-1}^{(1,b)}&{}\cdots &{}A_{v_p,v_b-1}^{(p,b)}\\ M_{v_1,v_b}^{(1,b)}&{}\cdots &{}M_{v_p,v_b}^{(p,b)}\\ A_{v_1,v_{b+1}}^{(1,b+1)}&{}\cdots &{}A_{v_p,v_{b+1}}^{(p,b+1)}\\ \vdots &{}&{}\vdots \\ A_{v_1,v_p}^{(1,p)}&{}\cdots &{}A_{v_p,v_p}^{(p,p)}\end{array}\right) \det \left( \begin{array}{ccccc} A_{v_1,v_1}^{(1,1)}&{}\cdots &{}A_{v_b-1,v_1}^{(b,1)}&{}\cdots &{}A_{v_p,v_1}^{(p,1)}\\ \vdots &{}&{}\vdots &{}&{}\vdots \\ A_{v_1,v_b-1}^{(1,b)}&{}\cdots &{}A_{v_b-1,v_b-1}^{(b,b)}&{}\cdots &{}A_{v_p,v_b-1}^{(p,b)}\\ \vdots &{}&{}\vdots &{}&{}\vdots \\ A_{v_1,v_p}^{(1,p)}&{}\cdots &{}A_{v_b-1,v_p}^{(b,p)}&{}\cdots &{}A_{v_p,v_p}^{(p,p)} \end{array} \right) . \end{aligned}$$
(2.17)
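The construction above can be illustrated numerically. The following sketch (assumed kernel \(\mathbb {S}(x,y)=\textrm{sgn}(x-y)\), two Gaussian-type weights, multi-index \(\vec {v}=(2,1)\) so that \(|\vec {v}|=3\) is odd, and \(b=2\)) builds the skew symmetric moment matrix, extracts the MSOP coefficients from its null space, and checks (2.14) together with the normalization.

```python
# Sketch of Definition 2.3: p = 2 assumed weights, v = (2, 1) (|v| = 3 odd),
# S(x,y) = sgn(x - y).  The MSOP coefficients span the null space of the
# odd-order skew symmetric moment matrix; (2.14) and the normalization (b = 2)
# are then checked by quadrature.
import numpy as np

x = np.linspace(-8, 8, 801)
X, Y = np.meshgrid(x, x, indexing="ij")
S = np.sign(X - Y)
w = [np.exp(-x**2), np.exp(-x**2 + x)]

def m(l, k, i, j):                                 # m_{l,k}^{(i,j)}
    f = X**l * Y**k * S * w[i][:, None] * w[j][None, :]
    return np.trapz(np.trapz(f, x, axis=1), x)

v = [2, 1]
idx = [(i, l) for i in range(2) for l in range(v[i])]        # (weight, power)
M = np.array([[m(l, k, i, j) for (i, l) in idx] for (j, k) in idx])
print(np.linalg.det(M))                            # ~ 0: odd-order skew symmetric

alpha = np.linalg.svd(M)[2][-1]                    # null vector = MSOP coefficients
R = sum(alpha[c] * x**l * w[i] for c, (i, l) in enumerate(idx))   # linear form

skew = lambda f, g: np.trapz(np.trapz(f[:, None] * S * g[None, :], x, axis=1), x)
print([round(skew(R, x**k * w[j]), 10) for (j, k) in idx])        # (2.14): all ~ 0
print(skew(R, x**v[1] * w[1]))                     # normalization h_v^{(2)}, generically nonzero
```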

3 Pfaffian form of Multiple Skew-Orthogonal Polynomials

As is known, skew-orthogonal polynomials have Pfaffian expressions, which are widely used in integrable systems in terms of Pfaffian tau-functions [1, 2, 39]. In this section, we plan to express MSOPs by Pfaffians and investigate their evolution when time parameters are introduced. The 2-component case is our primary consideration, since the multi-component case can be easily generalized from it. To this end, we assume an index set \(\vec {v}=(v_1,v_2)\) with \(v_1+v_2\) odd.

3.1 Pfaffian Expressions for MSOPs

First, we note that determinant expressions in (2.16a), (2.16b) and (2.17) can be alternatively written in terms of Pfaffians, which is stated as follows.

Proposition 3.1

For multiple skew-orthogonal polynomials \((R_1(x),R_2(x),{\tilde{R}}_2(x))\), we have

$$\begin{aligned}&R_{(v_1,v_2)}^{(2)}(x):=R_1(x)\omega _1(x)+R_2(x)\omega _2(x)={\frac{1}{d^{(2)}_{\vec {v}}}}\text {Pf}\left( \begin{array}{ccc} A_{v_1,v_1}^{(1,1)}&{}A_{v_2,v_1}^{(2,1)}&{}-\psi _1(x)\\ A_{v_1,v_2}^{(1,2)}&{}A_{v_2,v_2}^{(2,2)}&{}-\psi _2(x)\\ \psi _1(x)&{}\psi _2(x)&{}0\end{array} \right) , \end{aligned}$$
(3.1a)
$$\begin{aligned}&{\tilde{R}}_{(v_1,v_2)}^{(2)}(x)=R_1(x)\omega _1(x)+R_2(x)\omega _2(x)+{\tilde{R}}_2(x)\omega _2(x)\nonumber \\&\qquad \qquad ={\frac{1}{d^{(2)}_{\vec {v}}}}\text {Pf}\left( \begin{array}{cccc} A_{v_1,v_1}^{(1,1)}&{}A_{v_2-1,v_1}^{(2,1)}&{}(M_{v_2,v_1}^{(2,1)})^\top &{}-\psi _1(x)\\ A_{v_1,v_2-1}^{(1,2)}&{}A_{v_2-1,v_2-1}^{(2,2)}&{}(M_{v_2,v_2-1}^{(2,2)})^\top &{}-{\tilde{\psi }}_2(x)\\ M_{v_1,v_2}^{(1,2)}&{}M_{v_2-1,v_2}^{(2,2)}&{}0&{}-x^{v_2}\omega _2(x)\\ \psi _1(x)&{}{\tilde{\psi }}_2(x)&{}x^{v_2}\omega _2(x)&{}0 \end{array} \right) , \end{aligned}$$
(3.1b)

with a normalization factor

$$\begin{aligned} d^{(2)}_{\vec {v}}=\left( \text {Pf}\left( \begin{array}{cc} A_{v_1,v_1}^{(1,1)}&{}A_{v_2-1,v_1}^{(2,1)}\\ A_{v_1,v_2-1}^{(1,2)}&{}A_{v_2-1,v_2-1}^{(2,2)} \end{array} \right) \text {Pf}\left( \begin{array}{cc} A_{v_1,v_1}^{(1,1)}&{}A_{v_2+1,v_1}^{(2,1)}\\ A_{v_1,v_2+1}^{(1,2)}&{}A_{v_2+1,v_2+1}^{(2,2)} \end{array} \right) \right) ^{1/2}. \end{aligned}$$

Proof

Here, we explain in detail how the determinant expression (2.16a) leads to (3.1a); formula (3.1b) can be verified similarly. By applying the Jacobi determinant identity to

$$\begin{aligned} \left( \begin{array}{ccccccc} m_{0,0}^{(1,1)}&{}\cdots &{}m_{v_1-1,0}^{(1,1)}&{}m_{0,0}^{(2,1)}&{}\cdots &{}m_{v_2-1,0}^{(2,1)}&{}-\omega _1(x)\\ \vdots &{}&{}\vdots &{}\vdots &{}&{}\vdots &{}\vdots \\ m_{0,v_1-1}^{(1,1)}&{}\cdots &{}m_{v_1-1,v_1-1}^{(1,1)}&{}m_{0,v_2-1}^{(1,2)}&{}\cdots &{}m_{v_2-1,v_1-1}^{(2,1)}&{}-x^{v_1-1}\omega _1(x)\\ m_{0,0}^{(1,2)}&{}\cdots &{}m_{v_1-1,0}^{(1,2)}&{}m_{0,0}^{(2,2)}&{}\cdots &{}m_{v_2-1,0}^{(2,2)}&{}-\omega _2(x)\\ \vdots &{}&{}\vdots &{}\vdots &{}&{}\vdots &{}\vdots \\ m_{0,v_2-1}^{(1,2)}&{}\cdots &{}m_{v_1-1,v_2-1}^{(1,2)}&{}m_{0,v_2-1}^{(2,2)}&{}\cdots &{}m_{v_2-1,v_2-1}^{(2,2)}&{}-x^{v_2-1}\omega _2(x)\\ \omega _1(x)&{}\cdots &{}x^{v_1-1}\omega _1(x)&{}\omega _2(x)&{}\cdots &{}x^{v_2-1}\omega _2(x)&{}0 \end{array}\right) \end{aligned}$$

for last two rows and columns, and noting that the determinant of an odd-order skew symmetric matrix is zero, we obtain

$$\begin{aligned} R_{(v_1,v_2)}^{(2)}(x)={\frac{1}{c^{(2)}_{(v_1,v_2)}}} \text {Pf}\left( \begin{array}{cc} A_{v_1,v_1}^{(1,1)}&{}A_{v_2-1,v_1}^{(2,1)}\\ A_{v_1,v_2-1}^{(1,2)}&{}A_{v_2-1,v_2-1}^{(2,2)}\end{array} \right) \text {Pf}\left( \begin{array}{ccc} A_{v_1,v_1}^{(1,1)}&{}A_{v_2,v_1}^{(2,1)}&{}-\psi _1(x)\\ A_{v_1,v_2}^{(1,2)}&{}A_{v_2,v_2}^{(2,2)}&{}-\psi _2(x)\\ \psi _1(x)&{}\psi _2(x)&{}0\end{array} \right) \end{aligned}$$

Moreover, by applying the determinant identity to the first determinant in the expression (2.17) for \(c^{(2)}_{(v_1,v_2)}\), we have

$$\begin{aligned} c^{(2)}_{(v_1,v_2)}=\text {Pf}\left( \begin{array}{cc} A_{v_1,v_1}^{(1,1)}&{}A_{v_2-1,v_1}^{(2,1)}\\ A_{v_1,v_2-1}^{(1,2)}&{}A_{v_2-1,v_2-1}^{(2,2)}\end{array} \right) ^{3/2} \text {Pf}\left( \begin{array}{cc} A_{v_1,v_1}^{(1,1)}&{}A_{v_2+1,v_1}^{(2,1)}\\ A_{v_1,v_2+1}^{(1,2)}&{}A_{v_2+1,v_2+1}^{(2,2)}\end{array} \right) ^{1/2}. \end{aligned}$$

Thus, the proof is complete. \(\square \)
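The determinant-to-Pfaffian passage is also easy to test numerically. The sketch below (with the assumed kernel and weights of the earlier sketches) computes a Pfaffian by expansion along the first row and verifies \(\text {Pf}(A)^2=\det (A)\) on an even-order skew symmetric bi-moment matrix.

```python
# A Pfaffian routine by expansion along the first row, checked against
# Pf(A)^2 = det(A) on an even-order skew symmetric bi-moment matrix
# (assumed kernel S(x,y) = sgn(x - y) and weights as before).
import numpy as np

def pf(A):
    n = A.shape[0]
    if n == 0:
        return 1.0
    if n % 2:
        return 0.0
    total = 0.0
    for j in range(1, n):
        minor = np.delete(np.delete(A, [0, j], axis=0), [0, j], axis=1)
        total += (-1) ** (j - 1) * A[0, j] * pf(minor)
    return total

x = np.linspace(-8, 8, 801)
X, Y = np.meshgrid(x, x, indexing="ij")
S = np.sign(X - Y)
w = [np.exp(-x**2), np.exp(-x**2 + x)]
m = lambda l, k, i, j: np.trapz(np.trapz(
        X**l * Y**k * S * w[i][:, None] * w[j][None, :], x, axis=1), x)

v = [2, 2]                                         # |v| = 4, even
idx = [(i, l) for i in range(2) for l in range(v[i])]
A = np.array([[m(l, k, i, j) for (i, l) in idx] for (j, k) in idx])
print(pf(A) ** 2, np.linalg.det(A))                # the two values agree
```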

Therefore, one can use Hirota’s Pfaffian notations [31] to make these expressions more compact. If we denote

$$\begin{aligned} \text {pf}(i^{(k)}, j^{(l)})=m_{i,j}^{(k,l)}, \quad \text {pf}(i^{(k)},x)=\omega _k(x)x^i,\quad (k,l=1,2), \end{aligned}$$

then formulas (3.1a) and (3.1b) could be equivalently expressed by

$$\begin{aligned} \begin{aligned}&R^{(2)}_{(v_1,v_2)}(x)={\frac{1}{d^{(2)}_{(v_1,v_2)}}}\text {Pf}(0^{(1)},\cdots ,v_1-1^{(1)},0^{(2)},\cdots ,v_2-1^{(2)},x),\\&{\tilde{R}}^{(2)}_{(v_1,v_2)}(x)={\frac{1}{d^{(2)}_{(v_1,v_2)}}}\text {Pf}(0^{(1)},\cdots ,v_1-1^{(1)},0^{(2)},\cdots ,v_2-2^{(2)},v_2^{(2)},x), \end{aligned} \end{aligned}$$
(3.2)

where \(d^{(2)}_{(v_1,v_2)}=(\tau _{(v_1,v_2-1)}\tau _{(v_1,v_2+1)})^{1/2}\) and

$$\begin{aligned} \tau _{(v_1,v_2-1)}=\text {Pf}(0^{(1)},\cdots ,v_1-1^{(1)},0^{(2)},\cdots ,v_2-2^{(2)}). \end{aligned}$$

According to our discussions in the last section, there should be another family of multiple skew-orthogonal polynomials \((R_{(v_1,v_2)}^{(1)}(x),{\tilde{R}}_{(v_1,v_2)}^{(1)}(x))\) such that

$$\begin{aligned} \begin{aligned}&R_{(v_1,v_2)}^{(1)}(x)=\frac{1}{d_{(v_1,v_2)}^{(1)}}\text {Pf}(0^{(1)},\cdots ,v_1-1^{(1)},0^{(2)},\cdots ,v_2-1^{(2)},x),\\&{\tilde{R}}_{(v_1,v_2)}^{(1)}(x)=\frac{1}{d_{(v_1,v_2)}^{(1)}}\text {Pf}(0^{(1)},\cdots ,v_1-2^{(1)},v_1^{(1)},0^{(2)},\cdots ,v_2-1^{(2)},x), \end{aligned}\nonumber \\ \end{aligned}$$
(3.3)

where \(d_{(v_1,v_2)}^{(1)}=\left( \tau _{(v_1-1,v_2)}\tau _{(v_1+1,v_2)} \right) ^{1/2}\). Moreover, from (3.2) and (3.3), one knows that \(R_{(v_1,v_2)}^{(1)}(x)\) and \(R_{(v_1,v_2)}^{(2)}(x)\) are the same up to a normalization factor.

By using Pfaffian notations, the skew-orthogonal relations given by Definition 2.3 have the following equivalent descriptions.

Proposition 3.2

\(R_{(v_1,v_2)}^{(1)}(x)\) and \( R_{(v_1,v_2)}^{(2)}(x)\) are simultaneously skew orthogonal with \({\tilde{R}}_{(v_1,v_2)}^{(1)}(x)\) and \({\tilde{R}}_{(v_1,v_2)}^{(2)}(x)\), i.e.,

$$\begin{aligned}&\langle R_{(v_1,v_2)}^{(1)}(x),R_{(u_1,u_2)}^{(1)}(y)\rangle =0, \end{aligned}$$
(3.4a)
$$\begin{aligned}&\langle R_{(v_1,v_2)}^{(1)}(x),{\tilde{R}}_{(u_1,u_2)}^{(1)}(y)\rangle =\left\{ \begin{array}{ll} 0,&{}\text { if }u_1<v_1\text { and }u_2\le v_2\text {},\\ 1,&{}\text { if }u_1=v_1\text { and }u_2=v_2\text {},\\ \end{array}\right. \end{aligned}$$
(3.4b)
$$\begin{aligned}&\langle R_{(v_1,v_2)}^{(1)}(x),{\tilde{R}}_{(u_1,u_2)}^{(2)}(y)\rangle =\left\{ \begin{array}{ll} 0,&{}\text { if }u_1\le v_1\text { and }u_2< v_2\text {},\\ d_{(v_1,v_2)}^{(2)}/d_{(v_1,v_2)}^{(1)},&{}\text { if }u_1=v_1\text { and }u_2=v_2\text {}.\\ \end{array}\right. \end{aligned}$$
(3.4c)

Proof

Since equations (3.4a) and (3.4b) have been shown in the last section, we prove the third equation (3.4c) by using Pfaffian notations. Substituting the Pfaffian expressions (3.2) and (3.3) into the skew inner product, we have

$$\begin{aligned} \begin{aligned} {d_{(v_1,v_2)}^{(1)}d_{(u_1,u_2)}^{(2)}}&\langle R_{(v_1,v_2)}^{(1)}(x),{\tilde{R}}_{(u_1,u_2)}^{(2)}(y)\rangle \\&=\sum _{i\in I_1}\sum _{j\in I_2}(-1)^{|i|+|j|}\text {Pf}(I_1\backslash \{i\})\text {Pf}(I_2\backslash \{j\})\langle \text {pf}(i,x),\text {pf}(j,y)\rangle , \end{aligned} \end{aligned}$$
(3.5)

where \(I_1=\{0^{(1)},\cdots ,v_1-1^{(1)},0^{(2)},\cdots ,v_2-1^{(2)}\}\), \(I_2=\{0^{(1)},\cdots ,u_1-1^{(1)},0^{(2)},\cdots ,u_2-2^{(2)},u_2^{(2)}\}\), and \(|i|\) represents the position of i in the set \(I_1\) (similarly \(|j|\) in \(I_2\)).

$$\begin{aligned} \langle \text {pf}(i^{(k)},x),\text {pf}(j^{(l)},y)\rangle =\int _{\gamma \times \gamma }x^i\mathbb {S}(x,y)y^j\omega _k(x)\omega _l(y)dxdy=\text {pf}(i^{(k)},j^{(l)}), \end{aligned}$$

then the right-hand side in (3.5) is equal to

$$\begin{aligned} \sum _{j\in I_2}(-1)^{|j|}\text {Pf}(I_1,j)\text {Pf}(I_2\backslash \{j\}). \end{aligned}$$
(3.6)

It is known that a Pfaffian is equal to zero if two of its indices are equal. Therefore, if \(u_1\le v_1\) and \(u_2<v_2\), we know that \(j\in I_1\) and the above formula is identically zero. If \(u_1=v_1\) and \(u_2=v_2\), then the term is nonzero only when \(j=u_2^{(2)}=v_2^{(2)}\). In this case, equation (3.6) is equal to \( \tau _{(v_1,v_2+1)}\tau _{(v_1,v_2-1)}. \) Therefore, we have

$$\begin{aligned} \langle R_{(v_1,v_2)}^{(1)}(x),{\tilde{R}}_{(v_1,v_2)}^{(2)}(y)\rangle =\frac{\tau _{(v_1,v_2+1)}\tau _{(v_1,v_2-1)}}{d_{(v_1,v_2)}^{(1)}d_{(v_1,v_2)}^{(2)}}=\frac{d_{(v_1,v_2)}^{(2)}}{d_{(v_1,v_2)}^{(1)}}. \end{aligned}$$

\(\square \)

3.2 Semiclassical Weights and Deformed MSOPs

Let’s consider semiclassical weight functions. By introducing parameters \(\textbf{t}:=(t_1,t_2,\cdots )\) and \(\textbf{s}:=(s_1,s_2,\cdots )\) into weights \(\omega _1\) and \(\omega _2\), respectively, such that

$$\begin{aligned} \omega _1(x;\textbf{t})=\omega _1(x)\exp \left( \sum _{i=1}^\infty t_ix^i \right) ,\quad \omega _2(x;\textbf{s})=\omega _2(x)\exp \left( \sum _{i=1}^\infty s_ix^i \right) , \end{aligned}$$

we have

$$\begin{aligned} \partial _{t_i}\omega _1(x;\textbf{t})=x^i\omega _1(x;\textbf{t}),\quad \partial _{s_i}\omega _2(x;\textbf{s})=x^i\omega _2(x;\textbf{s}),\quad \partial _{t_i}\omega _2(x;\textbf{s})=\partial _{s_i}\omega _1(x;\textbf{t})=0. \end{aligned}$$

Moreover, the moments are now time-dependent and obey the following deformation formulas.

Proposition 3.3

The moments \(\{m_{a,b}^{(k,l)},\,k,l=1,2\}\) have the following evolutions

$$\begin{aligned}&\partial _{t_i}m_{a,b}^{(1,1)}=m_{a+i,b}^{(1,1)}+m_{a,b+i}^{(1,1)},{} & {} \partial _{t_i}m_{a,b}^{(1,2)}=m_{a+i,b}^{(1,2)},&\partial _{t_i}m_{a,b}^{(2,2)}=0,\\&\partial _{s_i}m_{a,b}^{(2,2)}=m_{a+i,b}^{(2,2)}+m_{a,b+i}^{(2,2)},{} & {} \partial _{s_i}m_{a,b}^{(1,2)}=m_{a,b+i}^{(1,2)},&\partial _{s_i}m_{a,b}^{(1,1)}=0. \end{aligned}$$

Equivalently, in Pfaffian notations we have

$$\begin{aligned}&\partial _{t_i}\text {pf}(a^{(1)},b^{(1)})=\text {pf}(a+i^{(1)},b^{(1)})+\text {pf}(a^{(1)},b+i^{(1)}),\quad \partial _{t_i}\text {pf}(a^{(1)},b^{(2)})=\text {pf}(a+i^{(1)},b^{(2)}),\\&\partial _{s_i}\text {pf}(a^{(2)},b^{(2)})=\text {pf}(a+i^{(2)},b^{(2)})+\text {pf}(a^{(2)},b+i^{(2)}),\quad \partial _{s_i}\text {pf}(a^{(1)},b^{(2)})=\text {pf}(a^{(1)},b+i^{(2)}),\\&\partial _{t_i}\text {pf}(a^{(2)},b^{(2)})=\partial _{s_i}\text {pf}(a^{(1)},b^{(1)})=0. \end{aligned}$$

Proof

Let’s prove \(\partial _{t_i}\text {pf}(a^{(1)},b^{(1)})=\text {pf}(a+i^{(1)},b^{(1)})+\text {pf}(a^{(1)},b+i^{(1)})\); the other cases can be similarly verified. We first have

$$\begin{aligned} \partial _{t_i}\text {pf}(a^{(1)},b^{(1)})=\partial _{t_i}\int _{\gamma \times \gamma } x^a\mathbb {S}(x,y)y^b\omega _1(x;\textbf{t})\omega _1(y;\textbf{t})dxdy. \end{aligned}$$

By noting that the moment is finite and weight \(\omega _1(x;\textbf{t})\) is smooth with respect to \(\textbf{t}\), we know that the order of derivative and integration could be exchanged. Therefore, the above formula is equal to

$$\begin{aligned} \int _{\gamma \times \gamma } x^a\mathbb {S}(x,y)y^b (x^i+y^i)\omega _1(x;\textbf{t})\omega _1(y;\textbf{t})dxdy, \end{aligned}$$

which is exactly \(\text {pf}(a+i^{(1)},b^{(1)})+\text {pf}(a^{(1)},b+i^{(1)})\). \(\square \)
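Proposition 3.3 can also be checked by finite differences; the sketch below does so for the assumed data \(\omega _1(x;\textbf{t})=e^{-x^2+t_1x}\), \(\omega _2(x)=e^{-x^2+x}\) and \(\mathbb {S}(x,y)=\textrm{sgn}(x-y)\).

```python
# Finite-difference check of Proposition 3.3 (assumed data):
# w1(x;t) = exp(-x^2 + t1*x) carries the t-dependence, w2 is t-independent,
# S(x,y) = sgn(x - y).
import numpy as np

x = np.linspace(-8, 8, 801)
X, Y = np.meshgrid(x, x, indexing="ij")
S = np.sign(X - Y)

def m(a, b, i, j, t1):
    w = [np.exp(-x**2 + t1 * x), np.exp(-x**2 + x)]
    f = X**a * Y**b * S * w[i][:, None] * w[j][None, :]
    return np.trapz(np.trapz(f, x, axis=1), x)

t1, eps, a, b = 0.4, 1e-5, 1, 2
d11 = (m(a, b, 0, 0, t1 + eps) - m(a, b, 0, 0, t1 - eps)) / (2 * eps)
d12 = (m(a, b, 0, 1, t1 + eps) - m(a, b, 0, 1, t1 - eps)) / (2 * eps)
print(d11 - m(a + 1, b, 0, 0, t1) - m(a, b + 1, 0, 0, t1))    # ~ 0
print(d12 - m(a + 1, b, 0, 1, t1))                            # ~ 0
```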

With such time parameters introduced, we can use derivative formulas for Wronskian-type Pfaffians to deduce deformation relations for the linear forms of MSOPs.

Proposition 3.4

\(R_{(v_1,v_2)}^{(i)}(x;\textbf{t},\textbf{s})\) and \({\tilde{R}}_{(v_1,v_2)}^{(i)}(x;\textbf{t},\textbf{s})\) \((i=1,2)\) have the following derivative relations

$$\begin{aligned} \begin{aligned} \partial _{t_1}\left( d_{(v_1,v_2)}^{(1)}R_{(v_1,v_2)}^{(1)}(x;\textbf{t},\textbf{s})\right) =d_{(v_1,v_2)}^{(1)}{\tilde{R}}_{(v_1,v_2)}^{(1)}(x;\textbf{t},\textbf{s}),\\ \partial _{s_1}\left( d_{(v_1,v_2)}^{(2)}R_{(v_1,v_2)}^{(2)}(x;\textbf{t},\textbf{s})\right) =d_{(v_1,v_2)}^{(2)}{\tilde{R}}_{(v_1,v_2)}^{(2)}(x;\textbf{t},\textbf{s}). \end{aligned} \end{aligned}$$
(3.7)

Proof

Since \(\textbf{t}\) and \(\textbf{s}\) are dual to each other, we only prove the \(t_1\)-derivative formula. By using Pfaffian notations, it is equivalent to show that

$$\begin{aligned} \begin{aligned} \partial _{t_1}&\text {Pf}(0^{(1)},\cdots ,v_1-1^{(1)},0^{(2)},\cdots ,v_2-1^{(2)},x)\\&=\text {Pf}(0^{(1)},\cdots ,v_1-2^{(1)},v_1^{(1)},0^{(2)},\cdots ,v_2-1^{(2)},x). \end{aligned} \end{aligned}$$
(3.8)

If we introduce the index sets

$$\begin{aligned} I_1=\{0^{(1)},\cdots ,v_1-1^{(1)}\}\text {, }{\tilde{I}}_1=\{0^{(1)},\cdots ,v_1-2^{(1)},v_1^{(1)}\}\text {, }I_2=\{0^{(2)},\cdots ,v_2-1^{(2)}\}\text {}, \end{aligned}$$

then by expanding the Pfaffian, the left-hand side in (3.8) is equal to

$$\begin{aligned} \partial _{t_1}\left( \sum _{i\in I_1}(-1)^{|i|}\text {Pf}(I_1\backslash \{i\},I_2)\text {pf}(i,x)+\sum _{j\in I_2}(-1)^{|j|}\text {Pf}(I_1,I_2\backslash \{j\})\text {pf}(j,x) \right) . \end{aligned}$$

By using the derivative formula for Wronskian-type Pfaffians (see the Appendix for details), we know that the first term is equal to

$$\begin{aligned} \begin{aligned}&\sum _{i\in I_1\backslash \{0^{(1)}\}}(-1)^{|i|}\text {Pf}(I_1\backslash \{i-1\},I_2)\text {pf}(i,x)+\sum _{i\in {\tilde{I}}_1\backslash \{v_1^{(1)}\}}(-1)^{|i|}\text {Pf}({\tilde{I}}_1\backslash \{i\},I_2)\text {pf}(i,x)\\&\qquad \quad +\sum _{i\in I_1}(-1)^{|i|}\text {Pf}(I_1\backslash \{i\},I_2)\text {pf}(i+1,x), \end{aligned} \end{aligned}$$
(3.9)

and the second term equals

$$\begin{aligned} \sum _{j\in I_2}(-1)^{|j|}\text {Pf}({\tilde{I}}_1,I_2\backslash \{j\})\text {pf}(j,x). \end{aligned}$$

A cancellation occurs between the first and third terms in (3.9). Thus, by combining these equations, we obtain

$$\begin{aligned} \sum _{i\in {\tilde{I}}_1}(-1)^{|i|}\text {Pf}({\tilde{I}}_1\backslash \{i\},I_2)\text {pf}(i,x)+\sum _{j\in I_2}(-1)^{|j|}\text {Pf}({\tilde{I}}_1,I_2\backslash \{j\})\text {pf}(j,x), \end{aligned}$$

which is exactly the expansion of the right-hand side in (3.8). \(\square \)

Besides the time evolutions for the linear forms of MSOPs, there should also be spectral problems relating \(R_{(v_1,v_2)}^{(i)}(x)\) and \({\tilde{R}}_{(v_1,v_2)}^{(i)}(x)\) \((i=1,2)\), which are prominent in the derivation of integrable hierarchies. Below, we use Pfaffian identities to characterize these spectral problems.

Proposition 3.5

\(R_{(v_1,v_2)}^{(i)}(x;\textbf{t},\textbf{s})\) and \({\tilde{R}}_{(v_1,v_2)}^{(i)}(x;\textbf{t},\textbf{s})\) \((i=1,2)\) satisfy the following recurrence relations

$$\begin{aligned}&\tau _{(v_1,v_2-1)}d_{(v_1+1,v_2+1)}^{(2)}R_{(v_1+1,v_2+1)}^{(2)}(x)=\tau _{(v_1+1,v_2)}d_{(v_1,v_2)}^{(2)}{\tilde{R}}^{(2)}_{(v_1,v_2)}(x) \nonumber \\&\quad -\partial _{s_1}\tau _{(v_1+1,v_2)}d_{(v_1,v_2)}^{(2)}R_{(v_1,v_2)}^{(2)}(x)+\tau _{(v_1,v_2+1)}d_{(v_1+1,v_2-1)}^{(2)}R_{(v_1+1,v_2-1)}^{(2)}(x),\end{aligned}$$
(3.10a)
$$\begin{aligned}&\tau _{(v_1-1,v_2)}d_{(v_1+1,v_2+1)}^{(1)}R_{(v_1+1,v_2+1)}^{(1)}(x)=\tau _{(v_1+1,v_2)}d_{(v_1-1,v_2+1)}^{(1)}R_{(v_1-1,v_2+1)}^{(1)}(x)\nonumber \\&\quad -\tau _{(v_1,v_2+1)}d_{(v_1,v_2)}^{(1)}{\tilde{R}}_{(v_1,v_2)}^{(1)}(x)+\partial _{t_1}\tau _{(v_1,v_2+1)}d_{(v_1,v_2)}^{(1)}R_{(v_1,v_2)}^{(1)}(x),\end{aligned}$$
(3.10b)
$$\begin{aligned}&\partial _{t_1}\tau _{(v_1,v_2-1)}d_{(v_1,v_2)}^{(1)}R_{(v_1,v_2)}^{(1)}(x)=\tau _{(v_1,v_2-1)}d_{(v_1,v_2)}^{(1)}{\tilde{R}}_{(v_1,v_2)}^{(1)}(x)\nonumber \\&\quad +\tau _{(v_1-1,v_2)}d_{(v_1+1,v_2-1)}^{(1)}R_{(v_1+1,v_2-1)}^{(1)}(x)-\tau _{(v_1+1,v_2)}d_{(v_1-1,v_2-1)}^{(1)}R_{(v_1-1,v_2-1)}^{(1)}(x),\end{aligned}$$
(3.10c)
$$\begin{aligned}&\partial _{s_1}\tau _{(v_1-1,v_2)}d_{(v_1,v_2)}^{(2)}R_{(v_1,v_2)}^{(2)}(x)=-\tau _{(v_1,v_2-1)}d_{(v_1-1,v_2+1)}^{(2)}R_{(v_1-1,v_2+1)}^{(2)}(x)\nonumber \\&\quad +\tau _{(v_1-1,v_2)}d_{(v_1,v_2)}^{(2)}{\tilde{R}}_{(v_1,v_2)}^{(2)}(x)+\tau _{(v_1,v_2+1)}d_{(v_1-1,v_2-1)}^{(2)}R_{(v_1-1,v_2-1)}^{(2)}(x). \end{aligned}$$
(3.10d)

Proof

We verify the first equation by making use of the Pfaffian identity (A.1b); the others can be verified similarly. Taking the symbols

$$\begin{aligned} a_1&=v_1^{(1)},\quad a_2=v_2-1^{(2)},\quad a_3=v_2^{(2)},\quad a_4=x,\quad \\ \star&=\{0^{(1)},\cdots ,v_1-1^{(1)},0^{(2)},\cdots ,v_2-2^{(2)}\} \end{aligned}$$

in (A.1b), we arrive at the desired formula from Pfaffian expressions (3.2)–(3.3) and by realizing that

$$\begin{aligned} \partial _{s_1}\tau _{(v_1+1,v_2)}=\text {Pf}(0^{(1)},\cdots ,v_1^{(1)},0^{(2)},\cdots ,v_2-2^{(2)},v_2^{(2)}). \end{aligned}$$

\(\square \)

Several simple integrable lattices can be obtained directly from these relations. Expanding the linear form (3.3), we have

$$\begin{aligned} d_{(v_1,v_2)}^{(1)}R_{(v_1,v_2)}^{(1)}(x)&=(-1)^{v_1-1}\omega _1(x)\left( x^{v_1-1}\tau _{(v_1-1,v_2)}-x^{v_1-2}\partial _{t_1}\tau _{(v_1-1,v_2)}+\cdots \right) \\&\quad +\omega _2(x)\left( x^{v_2-1}\tau _{(v_1,v_2-1)}-x^{v_2-2}\partial _{s_1}\tau _{(v_1,v_2-1)}+\cdots \right) . \end{aligned}$$

Moreover, if the equation (3.7) is taken into account, then equations

$$\begin{aligned} \begin{aligned}&D_{t_1}\tau _{(v_1,v_2-1)}\cdot \tau _{(v_1,v_2+1)}=D_{s_1}\tau _{(v_1+1,v_2)}\cdot \tau _{(v_1-1,v_2)},\\&D_{s_1}D_{t_1}\tau _{(v_1-1,v_2)}\cdot \tau _{(v_1-1,v_2)}=2\left( \tau _{(v_1,v_2-1)}\tau _{(v_1-2,v_2+1)}-\tau _{(v_1,v_2+1)}\tau _{(v_1-2,v_2-1)} \right) \end{aligned} \end{aligned}$$
(3.11)

are obtained by comparing the coefficients of \(x^{v_1-2}\omega _1(x)\) and \(x^{v_2-2}\omega _2(x)\), respectively. Here, \(D_t\) is Hirota's bilinear operator [31], defined by

$$\begin{aligned} D_t^m D_x^n f(x,t)\cdot g(x,t)=\left. \frac{\partial ^m}{\partial s^m}\frac{\partial ^n}{\partial y^n} f(x+y,t+s)g(x-y,t-s)\right| _{s=0,y=0}. \end{aligned}$$
(3.12)
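As a computational aside, definition (3.12) is straightforward to implement symbolically. The sketch below (a minimal illustration, with the helper name hirota_D chosen here) checks the classical identity \(D_x^n\, e^{ax}\cdot e^{bx}=(a-b)^n e^{(a+b)x}\).

```python
import sympy as sp

x = sp.Symbol('x')

def hirota_D(f, g, var_orders):
    """Hirota bilinear operator, following (3.12): shift each variable with
    opposite signs in f and g, differentiate in the shifts, set shifts to 0."""
    shifts = {v: sp.Dummy() for v, _ in var_orders}
    F, G = f, g
    for v, _ in var_orders:
        F = F.subs(v, v + shifts[v])
        G = G.subs(v, v - shifts[v])
    expr = F * G
    for v, m in var_orders:
        expr = sp.diff(expr, shifts[v], m)
    return expr.subs({e: 0 for e in shifts.values()})

# Classical check: D_x^3 exp(a*x).exp(b*x) = (a - b)**3 * exp((a + b)*x).
a, b = sp.symbols('a b')
lhs = hirota_D(sp.exp(a * x), sp.exp(b * x), [(x, 3)])
print(sp.simplify(lhs - (a - b)**3 * sp.exp((a + b) * x)))   # 0
```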

Equations (3.11) have appeared as a generalization of the 2D Toda lattice; see, for example, [30, 32, 50, 54, 56]. Before proceeding to further discussion of the recurrences, we demonstrate a reduction from MSOPs to skew-orthogonal polynomials (SOPs).

3.3 Reduction: from MSOPs to SOPs

As was shown in [54, Section 2.4], the hierarchy governing equations (3.11) can be reduced to the DKP hierarchy from the perspective of the fermionic representation. We reconfirm this fact here by performing a reduction of MSOPs.

By considering a single index set \(v=\{0,\cdots ,2n\}\) and a single weight function \(\omega (x)\), we can define skew-orthogonal polynomials \(\{p_{2n}(x),p_{2n+1}(x)\}_{n\in {\mathbb {N}}}\) by the following skew-orthogonality relation

$$\begin{aligned} \langle p_{2n}(x),p_{2m}(x)\rangle =\langle p_{2n+1}(x),p_{2m+1}(x)\rangle =0,\quad \langle p_{2n}(x),p_{2m+1}(x)\rangle =\delta _{n,m}, \end{aligned}$$

where \(\langle \cdot ,\cdot \rangle :\,{\mathbb {R}}[x]\times {\mathbb {R}}[y]\rightarrow {\mathbb {R}}\) is a skew-symmetric bilinear form and

$$\begin{aligned} \langle f(x),g(x)\rangle =\int _{\gamma \times \gamma } f(x)\mathbb {S}(x,y)g(y)\omega (x)\omega (y)dxdy,\quad \mathbb {S}(x,y)=-\mathbb {S}(y,x). \end{aligned}$$

This is a reduced version of Proposition 3.2. Moreover, \(\{p_{2n}(x),p_{2n+1}(x)\}_{n\in {\mathbb {N}}}\) are polynomials with the Pfaffian expressions [1, Thm. 3.1]

$$\begin{aligned} p_{2n}(x)=d_n^{-1}\text {Pf}(0,\cdots ,2n,x),\quad p_{2n+1}(x)=d_n^{-1}\text {Pf}(0,\cdots ,2n-1,2n+1,x), \end{aligned}$$

where \(d_n={(\tau _{2n}\tau _{2n+2})^{1/2}}\), \(\tau _{2n}=\text {Pf}(0,\cdots ,2n-1)\) and Pfaffian elements are given by

$$\begin{aligned} \text {pf}(i,j)=\langle x^i,y^j\rangle , \quad \text {pf}(i,x)=x^i. \end{aligned}$$
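For readers who wish to experiment with these Pfaffian expressions, the following sketch computes \(\tau _{2n}=\text {Pf}(0,\cdots ,2n-1)\) by the standard recursive expansion along the first row and checks the classical relation \(\text {Pf}(A)^2=\det A\). The discrete skew product, the weight and the crude quadrature used below are illustrative assumptions and are not taken from the text.

```python
import numpy as np

def pfaffian(A):
    """Pfaffian of an even-dimensional skew-symmetric matrix, computed by
    recursive expansion along the first row."""
    n = A.shape[0]
    if n == 0:
        return 1.0
    total = 0.0
    for j in range(1, n):
        keep = [k for k in range(n) if k not in (0, j)]
        total += (-1) ** (j + 1) * A[0, j] * pfaffian(A[np.ix_(keep, keep)])
    return total

# Toy skew moments pf(i,j) = <x^i, y^j> with a made-up discrete skew product:
# points on [0,1], kernel S(x,y) = sign(y - x), weight omega(x) = exp(-x).
pts = np.linspace(0.0, 1.0, 60)
S = -np.sign(np.subtract.outer(pts, pts))       # S[i,j] = sign(pts[j] - pts[i])
w = np.exp(-pts)

def skew_moment(i, j):
    return (np.outer(pts**i * w, pts**j * w) * S).sum() / len(pts)**2

n = 2
A = np.array([[skew_moment(i, j) for j in range(2 * n)] for i in range(2 * n)])
tau_2n = pfaffian(A)                            # tau_{2n} = Pf(0,...,2n-1)
print(np.isclose(tau_2n**2, np.linalg.det(A)))  # True
```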

By introducing the time flows \(\textbf{t}=(t_1,t_2,\cdots )\) such that \(\partial _{t_i}\omega (x;\textbf{t})=x^i\omega (x;\textbf{t})\), it was found that the skew-orthogonal polynomials satisfy [1, Thm. 3.1]

$$\begin{aligned} (z+\partial _{t_1})(d_np_{2n}(z))=d_np_{2n+1}(z). \end{aligned}$$

This equation coincides with equation (3.4) in the multi-component case and plays the role of a spectral problem in integrable systems theory.

The first study connecting SOPs and the Pfaff lattice was carried out in [1] from the viewpoint of Lie algebra splitting. Later on, the correspondence was reformulated from different perspectives, such as reductions from 2d-Toda theory [2, 41], the Toda lattice and Pfaff lattice correspondence [5], symplectic matrices [39], and so on. Therefore, it is natural to ask whether there is any local recurrence for SOPs which could be applied to derive integrable systems. Unfortunately, we could not find a relation between \(p_{2n}(z)\) and \(p_{2n+1}(z)\) as compact as in the multi-component case of Proposition 3.5. By taking \(\star =\{0,\cdots ,2n-2\}\), \(a_1=2n-1\), \(a_2=2n\), \(a_3=2n+1\) and \(a_4=x\) in the identity (A.1b) and using the equation

$$\begin{aligned} (\partial _{t_2}+\partial _{t_1}^2)\tau _{2n}=2\text {Pf}(0,1,\cdots ,2n-2,2n+1), \end{aligned}$$

one has

$$\begin{aligned} \tau _{2n+2}d_{2n-2}p_{2n-2}(x)&=\frac{1}{2}(\partial _{t_2}+\partial _{t_1}^2)\tau _{2n}d_{2n}p_{2n}(x)-\partial _{t_1}\tau _{2n}d_{2n}p_{2n+1}(x)\\&\quad +\tau _{2n}\text {Pf}(0,\cdots ,2n-2,2n,2n+1,x). \end{aligned}$$

This relation is non-compact since the last term cannot be written in terms of skew-orthogonal polynomials. However, since the functions involved are independent, integrable lattices can still be obtained by comparing the coefficients of monomials on both sides. The simplest equation arises from the coefficients of \(x^{2n-2}\), and one has

$$\begin{aligned} (D_1^4-4D_1D_3+3D_2^2)\tau _{2n}\cdot \tau _{2n}=24\tau _{2n-2}\tau _{2n+2}. \end{aligned}$$

This is the first member in the DKP hierarchy.

4 Integrable Lattice Hierarchies from Identities of MSOPs

In this part, we demonstrate that MSOPs can be expressed by 2-component Pfaffian \(\tau \)-functions \(\{\tau _{(i,j)}(\textbf{t},\textbf{s})\}_{i,j\in {\mathbb {N}}}\) with \(i+j\in 2{\mathbb {N}}\). Since MSOPs are multi-component generalizations of SOPs, we call the corresponding integrable hierarchy a multi-component Pfaff lattice hierarchy, specifically a 2-component Pfaff lattice hierarchy in this paper.

There are two different ways to derive these integrable hierarchies, as mentioned in the introduction. One is to express the polynomials in terms of \(\tau \)-functions; substituting the \(\tau \)-functions into the recurrence relations then yields an integrable hierarchy involving neighboring \(\tau \)-functions. The other is to make use of the bilinear form and the Cauchy transform. With these methods, some well-known integrable equations, such as the so-called Pfaff–Toda lattice and the modified coupled KP equation, are derived. It is also shown that the 2-component Pfaff lattice hierarchy derived from MSOPs is equivalent to Takasaki’s Pfaff–Toda hierarchy.

4.1 From Recurrence Relations (3.10a)–(3.10d) to Integrable Hierarchy

In this part, \(\tau \)-function expressions for the linear forms of MSOPs are given to characterize the corresponding integrable hierarchy. To this end, we first demonstrate an explicit connection between the linear forms of MSOPs and 2-component Pfaffian \(\tau \)-functions.

Proposition 4.1

The linear forms \(R_{(v_1,v_2)}^{(i)}(x;\textbf{t},\textbf{s})\) \((i=1,2)\) of multiple skew-orthogonal polynomials can alternatively be written as

$$\begin{aligned} \begin{aligned} d_{(v_1,v_2)}^{(i)}R_{(v_1,v_2)}^{(i)}(x;\textbf{t},\textbf{s})&= (-1)^{v_1-1}\omega _1(x;\textbf{t})x^{v_1-1}\tau _{(v_1-1,v_2)}(\textbf{t}-[x^{-1}],\textbf{s})\\&\quad +\omega _2(x;\textbf{s})x^{v_2-1}\tau _{(v_1,v_2-1)}(\textbf{t},\textbf{s}-[x^{-1}]), \end{aligned} \end{aligned}$$
(4.1)

where the symbol \([\alpha ]\) denotes the Miwa variable

$$\begin{aligned}{}[\alpha ]=\left( \alpha ,\frac{\alpha ^2}{2},\cdots ,\frac{\alpha ^n}{n},\cdots \right) . \end{aligned}$$

Proof

One can prove this formula by a column expansion of the moment matrix together with the action of Schur functions on the moments; see, e.g., [6, Prop. 2.2]. In our proof, we instead act with Schur functions directly on the \(\tau \)-functions. Recall that the linear forms of MSOPs admit the Pfaffian expression

$$\begin{aligned} d_{(v_1,v_2)}^{(i)}R_{(v_1,v_2)}^{(i)}(x;\textbf{t},\textbf{s})=\text {Pf}(0^{(1)},\cdots ,v_1-1^{(1)},0^{(2)},\cdots ,v_2-1^{(2)},x). \end{aligned}$$

Expanding this Pfaffian with respect to the index x, we have

$$\begin{aligned} \begin{aligned} d_{(v_1,v_2)}^{(i)}R_{(v_1,v_2)}^{(i)}(x;\textbf{t},\textbf{s})&=\omega _1(x;\textbf{t})\sum _{i\in I_1} (-1)^i x^i\text {Pf}(I_1\backslash \{i\},I_2)\\&\quad +\omega _2(x;\textbf{s})\sum _{i\in I_2}(-1)^{v_2-1-i}x^i \text {Pf}(I_1,I_2\backslash \{i\}), \end{aligned} \end{aligned}$$
(4.2)

where the index sets are \(I_1=\{0^{(1)},\cdots ,v_1-1^{(1)}\}\) and \(I_2=\{0^{(2)},\cdots ,v_2-1^{(2)}\}\). Therefore, to demonstrate the equivalence between (4.1) and (4.2), one needs to verify the formula

$$\begin{aligned} x^{v_1-1}\tau _{(v_1-1,v_2)}(\textbf{t}-[x^{-1}],\textbf{s})=\sum _{i\in I_1}(-1)^{v_1-1-i}x^i\text {Pf}(I_1\backslash \{i\},I_2). \end{aligned}$$
(4.3)

It is known that the shifted \(\tau \)-function on the left-hand side of the above formula can be written as

$$\begin{aligned} \tau _{(v_1-1,v_2)}(\textbf{t}-[x^{-1}],\textbf{s})=e^{-\xi ({\tilde{\partial }}_t,x^{-1})}\tau _{(v_1-1,v_2)}=\sum _{k\ge 0}p_k(-{\tilde{\partial }}_t)\tau _{(v_1-1,v_2)}x^{-k}, \end{aligned}$$

where \({\tilde{\partial }}_t=(\partial _{{t}_1},\partial _{{t}_2}/2,\cdots )\), \(\xi (\textbf{t},x)=\sum _{i=1}^\infty t_ix^i\) and \(p_k\) are the elementary Schur polynomials defined by

$$\begin{aligned} e^{\xi (\textbf{t},x)}=\sum _{k\ge 0}p_k(\textbf{t})x^k. \end{aligned}$$
(4.4)

Moreover, by Proposition B.1 in the Appendix, we know that

$$\begin{aligned} p_k(-{\tilde{\partial }}_t)\tau _{(v_1-1,v_2)}=\text {Pf}(0^{(1)},\cdots ,\widehat{v_1-k^{(1)}},\cdots ,v_1^{(1)},0^{(2)},\cdots ,v_2^{(2)}), \end{aligned}$$

where \({\hat{i}}\) means that the index i is omitted, and then equation (4.3) holds. \(\square \)
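The elementary Schur polynomials \(p_k\) appearing in this proof are easy to generate explicitly from (4.4). The following minimal sympy sketch (illustrative only; the helper name schur_p is ours) extracts them as Taylor coefficients of \(e^{\xi (\textbf{t},x)}\).

```python
import sympy as sp

def schur_p(k, tvars):
    """p_k(t) defined by exp(sum_i t_i x^i) = sum_k p_k(t) x^k, obtained as
    the coefficient of x^k in the truncated exponential."""
    x = sp.Symbol('x')
    xi = sum(t * x**(i + 1) for i, t in enumerate(tvars))
    truncated = sp.series(sp.exp(xi), x, 0, k + 1).removeO()
    return sp.expand(truncated.coeff(x, k))

t1, t2, t3 = sp.symbols('t1 t2 t3')
for k in range(4):
    print(k, schur_p(k, [t1, t2, t3]))
# 0  1
# 1  t1
# 2  t1**2/2 + t2
# 3  t1**3/6 + t1*t2 + t3
```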

Remark 4.2

According to the proof, we know that

$$\begin{aligned} d_{(v_1,v_2)}^{(i)}R_{(v_1,v_2)}^{(i)}(x;\textbf{t},\textbf{s})&=(-1)^{v_1-1}\omega _1(x;\textbf{t})\sum _{\ell =0}^{v_1-1}\left( p_\ell (-{\tilde{\partial }}_t)\tau _{(v_1-1,v_2)}(\textbf{t},\textbf{s})\right) x^{v_1-1-\ell }\\&\quad +\omega _2(x;\textbf{s})\sum _{\ell =0}^{v_2-1}\left( p_\ell (-{\tilde{\partial }}_s)\tau _{(v_1,v_2-1)}(\textbf{t},\textbf{s}) \right) x^{v_2-1-\ell }. \end{aligned}$$

As a direct corollary, we have the following.

Corollary 4.3

\({\tilde{R}}_{(v_1,v_2)}^{(i)}(x;\textbf{t},\textbf{s})\) \((i=1,2)\) could be expressed in terms of \(\tau \)-functions as

$$\begin{aligned}&d_{(v_1,v_2)}^{(1)}{\tilde{R}}_{(v_1,v_2)}^{(1)}(x;\textbf{t},\textbf{s})= \partial _{t_1}\left( d_{(v_1,v_2)}^{(1)}R_{(v_1,v_2)}^{(1)}(x;\textbf{t},\textbf{s})\right) \\&\quad =(-1)^{v_1-1}\omega _1(x;\textbf{t})\sum _{\ell =0}^{v_1-1}\left( \partial _{t_1}p_\ell (-{\tilde{\partial }}_t)\tau _{(v_1-1,v_2)}(\textbf{t},\textbf{s})\right) x^{v_1-1-\ell }\\&\qquad +(-1)^{v_1-1}\omega _1(x;\textbf{t})\sum _{\ell =0}^{v_1-1}\left( p_\ell (-{\tilde{\partial }}_t)\tau _{(v_1-1,v_2)}(\textbf{t},\textbf{s})\right) x^{v_1-\ell }\\&\qquad +\omega _2(x;\textbf{s}) \sum _{\ell =0}^{v_2-1}\left( \partial _{t_1}p_\ell (-{\tilde{\partial }}_s)\tau _{(v_1,v_2-1)}(\textbf{t},\textbf{s})\right) x^{v_2-1-\ell },\\&d_{(v_1,v_2)}^{(2)}{\tilde{R}}_{(v_1,v_2)}^{(2)}(x;\textbf{t},\textbf{s})= \partial _{s_1}\left( d_{(v_1,v_2)}^{(2)}R_{(v_1,v_2)}^{(2)}(x;\textbf{t},\textbf{s})\right) \\&\quad =(-1)^{v_1-1}\omega _1(x;\textbf{t})\sum _{\ell =0}^{v_1-1}\left( \partial _{s_1}p_\ell (-{\tilde{\partial }}_t)\tau _{(v_1-1,v_2)}(\textbf{t},\textbf{s})\right) x^{v_1-1-\ell }\\&\qquad +\omega _2(x;\textbf{s})\sum _{\ell =0}^{v_2-1}\left( \partial _{s_1}p_\ell (-{\tilde{\partial }}_s)\tau _{(v_1,v_2-1)}(\textbf{t},\textbf{s})\right) x^{v_2-1-\ell }\\&\qquad +\omega _2(x;\textbf{s})\sum _{\ell =0}^{v_2-1}\left( p_\ell (-{\tilde{\partial }}_s)\tau _{(v_1,v_2-1)}(\textbf{t},\textbf{s})\right) x^{v_2-\ell }. \end{aligned}$$

Substituting these expressions into (3.10a) and comparing the coefficients of \(x^{v_1-j}\omega _1(x)\) and \(x^{v_2-j}\omega _2(x)\) \((j=1,2,\cdots )\), respectively, we obtain

$$\begin{aligned}&\tau _{(v_1,v_2-1)}p_j(-{\tilde{\partial }}_t)\tau _{(v_1+1,v_2)}=-\tau _{(v_1+1,v_2)}\partial _{s_1}p_{j-1}(-{\tilde{\partial }}_t)\tau _{(v_1-1,v_2)}\nonumber \\&\quad +\partial _{s_1}\tau _{(v_1+1,v_2)}p_{j-1}(-{\tilde{\partial }}_t)\tau _{(v_1-1,v_2)}+\tau _{(v_1,v_2+1)}p_j(-{\tilde{\partial }}_t)\tau _{(v_1,v_2-1)},\end{aligned}$$
(4.5a)
$$\begin{aligned}&\tau _{(v_1,v_2-1)}p_j(-{\tilde{\partial }}_s)\tau _{(v_1+1,v_2)}=\tau _{(v_1+1,v_2)}\left( \partial _{s_1}p_{j-1}(-{\tilde{\partial }}_s)+p_j(-{\tilde{\partial }}_s) \right) \tau _{(v_1,v_2-1)} \nonumber \\&\quad -\partial _{s_1}\tau _{(v_1+1,v_2)}p_{j-1}(-{\tilde{\partial }}_s)\tau _{(v_1,v_2-1)}+\tau _{(v_1,v_2+1)}p_{j-2}(-{\tilde{\partial }}_s)\tau _{(v_1+1,v_2-2)}. \end{aligned}$$
(4.5b)

Moreover, we read from (3.10c) that

$$\begin{aligned}&\partial _{t_1}\tau _{(v_1,v_2-1)}p_{j-1}(-{\tilde{\partial }}_t)\tau _{(v_1-1,v_2)}=\tau _{(v_1,v_2-1)}\left( \partial _{t_1}p_{j-1}(-{\tilde{\partial }}_t)+p_j(-{\tilde{\partial }}_t)\right) \tau _{(v_1-1,v_2)}\nonumber \\&\quad -\tau _{(v_1-1,v_2)}p_j(-{\tilde{\partial }}_t)\tau _{(v_1,v_2-1)}+\tau _{(v_1+1,v_2)}p_{j-2}(-{\tilde{\partial }}_t)\tau _{(v_1-2,v_2-1)}, \end{aligned}$$
(4.6a)
$$\begin{aligned}&\partial _{t_1}\tau _{(v_1,v_2-1)}p_{j-1}(-{\tilde{\partial }}_s)\tau _{(v_1,v_2-1)}=\tau _{(v_1,v_2-1)}\partial _{t_1}p_{j-1}(-{\tilde{\partial }}_s)\tau _{(v_1,v_2-1)}\nonumber \\&\quad +\tau _{(v_1-1,v_2)}p_{j-2}(-{\tilde{\partial }}_s)\tau _{(v_1+1,v_2-2)}-\tau _{(v_1+1,v_2)}p_{j-2}(-{\tilde{\partial }}_s)\tau _{(v_1-1,v_2-2)}. \end{aligned}$$
(4.6b)

It should be remarked that the hierarchies (4.5a)–(4.5b) and (4.6a)–(4.6b) are the same if one interchanges \(v_1\) with \(v_2\) and \(\partial _t\) with \(\partial _s\). Moreover, the hierarchies derived from (3.10b) and (3.10d) are the same as those derived from (3.10a) and (3.10c). Therefore, it is reasonable to regard (4.5a)–(4.6b) as a 2-component Pfaff lattice hierarchy with neighboring lattices.

Several integrable lattices can be obtained from these hierarchies. The first equation of the Pfaff–Toda lattice in (3.11) is obtained from (4.5a) by taking \(j=1\), and the second one from (4.6b) by taking \(j=2\). Besides, one obtains another simple non-trivial example from (4.5b) when \(j=2\), which reads

$$\begin{aligned} (D_{s_2}+D_{s_1}^2)\tau _{(v_1,v_2-1)}\cdot \tau _{(v_1+1,v_2)}=2\tau _{(v_1,v_2+1)}\tau _{(v_1+1,v_2-2)}. \end{aligned}$$
(4.7)

This is the bilinear form of the so-called modified coupled KP equation, which plays an important role in the study of the commutativity of Pfaffianization and Bäcklund transformations [34].

4.2 Bilinear Identities: from Bilinear Form to Cauchy Transforms

In the last subsection, we derived a 2-component Pfaff lattice hierarchy by directly using the recurrence relations of MSOPs and neighboring Pfaffian \(\tau \)-functions. In this part, we present another approach, deducing more general integrable lattice hierarchies from the perspective of Cauchy transforms. First, let us introduce a Cauchy transform with respect to a non-degenerate bilinear form.

Proposition 4.4

Given a non-degenerate bilinear form \(\langle \cdot ,\cdot \rangle :\,{\mathbb {R}}[x]\times {\mathbb {R}}[y]\rightarrow {\mathbb {R}}\) and an analytic weight function \(\psi (x)\), the Cauchy transform of an integrable function g(x) with respect to the bilinear form is defined by

$$\begin{aligned} \mathcal {C}_\psi g(z)=\left\langle \frac{\psi (x)}{x-z},g(y)\right\rangle . \end{aligned}$$

Moreover, for any analytic function f(x), one has

$$\begin{aligned} \langle f(x)\psi (x),g(y)\rangle =\frac{1}{2\pi i}\oint _{C_\infty }f(z)\mathcal {C}_\psi g(z)dz, \end{aligned}$$

where \(C_\infty \) is a contour around the point at infinity.

Proof

Since f(z) is analytic, we have the expansion \(f(z)=\sum _{i=0}^\infty f_iz^i\), and thus

$$\begin{aligned} \frac{1}{2\pi i}\oint _{C_\infty } f(z)\mathcal {C}_\psi g(z)dz&=\frac{1}{2\pi i}\oint _{C_\infty }\sum _{i=0}^\infty f_iz^i\sum _{j=0}^\infty \frac{1}{z^{j+1}}\langle x^j\psi (x),g(y)\rangle dz\\&=\sum _{i=0}^\infty f_i\langle x^i\psi (x),g(y)\rangle =\langle f(x)\psi (x),g(y)\rangle . \end{aligned}$$

\(\square \)

Therefore, by taking \(\langle \cdot ,\cdot \rangle \) as a skew symmetric bilinear form, i.e.,

$$\begin{aligned} \langle f(x),g(y)\rangle =\int _{\gamma \times \gamma }f(x)\mathbb {S}(x,y)g(y)dxdy,\quad \mathbb {S}(x,y)=-\mathbb {S}(y,x), \end{aligned}$$

one could define a corresponding Cauchy transform

$$\begin{aligned} \mathcal {C}_\psi g(z)=\int _{\gamma \times \gamma }\frac{\psi (x)}{x-z}\mathbb {S}(x,y)g(y)dxdy. \end{aligned}$$
(4.8)

Moreover, the Cauchy transforms of the linear forms of MSOPs admit the following closed expressions.

Proposition 4.5

If \(R_{(v_1,v_2)}^{(i)}(x;\textbf{t},\textbf{s})\) \((i=1,2)\) are linear forms of multiple skew-orthogonal polynomials defined in Proposition 3.2 with weights \(\omega _1(x;\textbf{t})\) and \(\omega _2(x;\textbf{s})\), then we have

$$\begin{aligned}&\mathcal {C}_{\omega _1} \left( d_{(v_1,v_2)}^{(i)}R_{(v_1,v_2)}^{(i)} \right) =(-1)^{v_1}z^{-(v_1+1)}\tau _{(v_1+1,v_2)}(\textbf{t}+[z^{-1}],\textbf{s}),\\&\mathcal {C}_{\omega _2} \left( d_{(v_1,v_2)}^{(i)}R_{(v_1,v_2)}^{(i)} \right) =z^{-(v_2+1)}\tau _{(v_1,v_2+1)}(\textbf{t},\textbf{s}+[z^{-1}]). \end{aligned}$$

Proof

We prove the first equation, and the second one could be similarly verified. By using (3.2) and (4.8), we have

$$\begin{aligned} \mathcal {C}_{\omega _1} \left( d_{(v_1,v_2)}^{(i)}R_{(v_1,v_2)}^{(i)} \right)&=\int _{\gamma \times \gamma }\frac{\omega _1(x;\textbf{t})}{x-z}\mathbb {S}(x,y)\text {Pf}(0^{(1)},\cdots ,v_1-1^{(1)},0^{(2)},\cdots ,v_2-1^{(2)},y)dxdy\\&=-\sum _{i=0}^\infty z^{-(i+1)}\int _{\gamma \times \gamma }x^i\omega _1(x;\textbf{t})\mathbb {S}(x,y)\text {Pf}(0^{(1)},\cdots ,v_1-1^{(1)},0^{(2)},\cdots ,v_2-1^{(2)},y)dxdy\\&=-\sum _{i=0}^\infty z^{-(i+1)}\left\langle \text {pf}(i^{(1)},x),\text {Pf}(0^{(1)},\cdots ,v_1-1^{(1)},0^{(2)},\cdots ,v_2-1^{(2)},y)\right\rangle . \end{aligned}$$

Then from the skew orthogonality, when \(i\le v_1-1\), the above skew inner product is equal to zero. Therefore, the above formula is equal to

$$\begin{aligned}&-\sum _{i=v_1}^\infty (-1)^{v_2}z^{-(i+1)}\text {Pf}(0^{(1)},\cdots ,v_1-1^{(1)},i^{(1)},0^{(2)},\cdots ,v_2-1^{(2)})\\&\quad =(-1)^{v_1}z^{-(v_1+1)}\sum _{i=0}^\infty z^{-i}p_{i}({\tilde{\partial }}_t)\tau _{(v_1+1,v_2)}, \end{aligned}$$

which is the expansion of the desired formula. \(\square \)

In the following, we show how to derive a bilinear integrable hierarchy via Cauchy transforms.

Proposition 4.6

Two-component \(\tau \)-functions \(\{\tau _{(i,j)}(\textbf{t},\textbf{s})\}_{i,j\in {\mathbb {N}}}\) with \(i+j\in 2{\mathbb {N}}\) satisfy the bilinear identity

$$\begin{aligned} {\begin{aligned}&(-1)^{u_1+v_1}\oint _{C_\infty } e^{\xi (t-t',z)}z^{v_1-u_1-2}\tau _{(v_1-1,v_2)}(\textbf{t}-[z^{-1}],\textbf{s})\tau _{(u_1+1,u_2)}(\textbf{t}'+[z^{-1}],\textbf{s}')dz\\&\qquad +(-1)^{u_1+v_1}\oint _{C_\infty }e^{\xi (t'-t,z)}z^{u_1-v_1-2}\tau _{(v_1+1,v_2)}(\textbf{t}+[z^{-1}],\textbf{s})\tau _{(u_1-1,u_2)}(\textbf{t}'-[z^{-1}],\textbf{s}')dz\\&\quad =\oint _{C_\infty } e^{\xi (s-s',z)}z^{v_2-u_2-2}\tau _{(v_1,v_2-1)}(\textbf{t},\textbf{s}-[z^{-1}])\tau _{(u_1,u_2+1)}(\textbf{t}',\textbf{s}'+[z^{-1}])dz\\&\qquad +\oint _{C_\infty }e^{\xi (s'-s,z)}z^{u_2-v_2-2}\tau _{(v_1,v_2+1)}(\textbf{t},\textbf{s}+[z^{-1}])\tau _{(u_1,u_2-1)}(\textbf{t}',\textbf{s}'-[z^{-1}])dz, \end{aligned}} \end{aligned}$$
(4.9)

which is valid for arbitrary \(t,t',s,s'\in {\mathbb {C}}\).

Proof

Since \(\langle \cdot ,\cdot \rangle \) is a skew inner product, we know that

$$\begin{aligned} \langle R_{(v_1,v_2)}^{(1)}(x;\textbf{t},\textbf{s}),R^{(1)}_{(u_1,u_2)}(y;\textbf{t}',\textbf{s}')\rangle =-\langle R_{(u_1,u_2)}^{(1)}(x;\textbf{t}',\textbf{s}'),R_{(v_1,v_2)}^{(1)}(y;\textbf{t},\textbf{s})\rangle , \end{aligned}$$

which is true for arbitrary \(t,t',s,s'\in {\mathbb {C}}\) and \(|\vec {u}|,\,|\vec {v}|\in 2{\mathbb {N}}+1\). By multiplying both sides by \(d_{(u_1,u_2)}^{(1)}d_{(v_1,v_2)}^{(1)}\) and expanding the linear forms of MSOPs in terms of \(\tau \)-functions according to Prop. 4.1, we have

$$\begin{aligned}&(-1)^{v_1-1}\left\langle x^{v_1-1}\tau _{(v_1-1,v_2)}(\textbf{t}-[x^{-1}],\textbf{s})e^{\xi (t,x)}\omega _1(x),d_{(u_1,u_2)}^{(1)}R_{(u_1,u_2)}(y;\textbf{t}',\textbf{s}')\right\rangle \\&\qquad +\left\langle x^{v_2-1}\tau _{(v_1,v_2-1)}(\textbf{t},\textbf{s}-[x^{-1}])e^{\xi (s,x)}\omega _2(x),d_{(u_1,u_2)}^{(1)}R_{(u_1,u_2)}(y;\textbf{t}',\textbf{s}')\right\rangle \\&\quad =(-1)^{u_1}\left\langle x^{u_1-1}\tau _{(u_1-1,u_2)}(\textbf{t}'-[x^{-1}],\textbf{s}')e^{\xi (t',x)}\omega _1(x),d_{(v_1,v_2)}^{(1)}R_{(v_1,v_2)}(y;\textbf{t},\textbf{s})\right\rangle \\&\qquad -\left\langle x^{u_2-1}\tau _{(u_1,u_2-1)}(\textbf{t}',\textbf{s}'-[x^{-1}])e^{\xi (s',x)}\omega _2(x),d_{(v_1,v_2)}^{(1)}R_{(v_1,v_2)}(y;\textbf{t},\textbf{s})\right\rangle . \end{aligned}$$

Then, by realizing that \(\omega _1(x;\textbf{t})=e^{\xi (\textbf{t}-\textbf{t}',x)}\omega _1(x;\textbf{t}')\) and applying Proposition 4.4, we have

$$\begin{aligned}&(-1)^{v_1-1}\frac{1}{2\pi i}\oint _{C_\infty } e^{\xi (t-t',z)}z^{v_1-1}\tau _{(v_1-1,v_2)}(\textbf{t}-[z^{-1}],\textbf{s})\mathcal {C}_{\omega _1}\left( d_{(u_1,u_2)}^{(1)}R_{(u_1,u_2)}^{(1)} \right) (z;\textbf{t}',\textbf{s}')dz\\&\quad +\frac{1}{2\pi i}\oint _{C_\infty }e^{\xi (s-s',z)}z^{v_2-1}\tau _{(v_1,v_2-1)}(\textbf{t},\textbf{s}-[z^{-1}])\mathcal {C}_{\omega _2}\left( d_{(u_1,u_2)}^{(1)}R_{(u_1,u_2)}^{(1)} \right) (z;\textbf{t}',\textbf{s}')dz\\&=(-1)^{u_1}\frac{1}{2\pi i}\oint _{C_\infty } e^{\xi (t'-t,z)}z^{u_1-1}\tau _{(u_1-1,u_2)}(\textbf{t}'-[z^{-1}],\textbf{s}')\mathcal {C}_{\omega _1}\left( d_{(v_1,v_2)}^{(1)}R_{(v_1,v_2)}^{(1)} \right) (z;\textbf{t},\textbf{s})dz\\&\quad -\frac{1}{2\pi i}\oint _{C_\infty } e^{\xi (s'-s,z)}z^{u_2-1}\tau _{(u_1,u_2-1)}(\textbf{t}',\textbf{s}'-[z^{-1}])\mathcal {C}_{\omega _2}\left( d_{(v_1,v_2)}^{(1)}R_{(v_1,v_2)}^{(1)} \right) (z;\textbf{t},\textbf{s})dz. \end{aligned}$$

Substituting the Cauchy transforms of Prop. 4.5 into the above formula completes the proof. \(\square \)

Remark 4.7

Bilinear identity (4.9) should coincide with [54, eq. (2.1)] if one changes z to \(z^{-1}\) and transforms the contour around infinity into a circle around the origin.

If we take the variable transformations

$$\begin{aligned} \textbf{t}\mapsto \textbf{t}-\alpha ,\,\textbf{t}'\mapsto \textbf{t}+\alpha ,\, \textbf{s}\mapsto \textbf{s}-\beta , \,\textbf{s}'\mapsto \textbf{s}+\beta \end{aligned}$$

and realize

$$\begin{aligned}&\tau _{(m,n)}(\textbf{t}+\alpha +[z^{-1}],\textbf{s}+\beta )\tau _{(u,v)}(\textbf{t}-\alpha -[z^{-1}],\textbf{s}-\beta )\\&\quad =e^{\sum _{i=1}^\infty \alpha _iD_{t_i}+\beta _iD_{s_i}+\xi ({\tilde{D}}_t,z^{-1})}\tau _{(m,n)}\tau _{(u,v)} \end{aligned}$$

for arbitrary \(m+n,\,u+v\in 2{\mathbb {N}}\), then the identity (4.9) becomes

$$\begin{aligned}&(-1)^{u_1+v_1} \oint _{C_\infty } e^{-2\xi (\alpha ,z)}z^{v_1-u_1-2}e^{\sum _{i=1}^\infty (\alpha _iD_{t_i}+\beta _iD_{s_i})-\xi ({\tilde{D}}_t,z^{-1})}\tau _{(u_1+1,u_2)}\cdot \tau _{(v_1-1,v_2)}dz\\&\qquad +(-1)^{u_1+v_1}\oint _{C_\infty } e^{2\xi (\alpha ,z)}z^{u_1-v_1-2}e^{\sum _{i=1}^\infty (\alpha _iD_{t_i}+\beta _iD_{s_i})+\xi ({\tilde{D}}_t,z^{-1})}\tau _{(u_1-1,u_2)}\cdot \tau _{(v_1+1,v_2)}dz\\&\quad =\oint _{C_\infty } e^{-2\xi (\beta ,z)}z^{v_2-u_2-2}e^{\sum _{i=1}^\infty (\alpha _iD_{t_i}+\beta _iD_{s_i})+\xi ({\tilde{D}}_s,z^{-1})}\tau _{(u_1,u_2+1)}\cdot \tau _{(v_1,v_2-1)}dz\\&\qquad +\oint _{C_\infty } e^{2\xi (\beta ,z)}z^{u_2-v_2-2}e^{\sum _{i=1}^\infty (\alpha _iD_{t_i}+\beta _iD_{s_i})-\xi ({\tilde{D}}_s,z^{-1})}\tau _{(u_1,u_2-1)}\cdot \tau _{(v_1,v_2+1)}dz. \end{aligned}$$

Therefore, according to the residue theorem, it is equivalent to

$$\begin{aligned}&(-1)^{u_1+v_1}\sum _{n=0}^\infty p_n(-2\alpha )p_{n+v_1-u_1-1}({\tilde{D}}_t)e^{\sum _{i=1}^\infty \alpha _iD_{t_i}+\beta _iD_{s_i}}\tau _{(u_1+1,u_2)}\cdot \tau _{(v_1-1,v_2)}\\&\qquad +(-1)^{u_1+v_1}\sum _{n=0}^\infty p_n(2\alpha )p_{n+u_1-v_1-1}(-{\tilde{D}}_t)e^{\sum _{i=1}^\infty \alpha _iD_{t_i}+\beta _iD_{s_i}}\tau _{(u_1-1,u_2)}\cdot \tau _{(v_1+1,v_2)}\\&\quad =\sum _{n=0}^\infty p_n(-2\beta )p_{n+v_2-u_2-1}({\tilde{D}}_s)e^{\sum _{i=1}^\infty \alpha _iD_{t_i}+\beta _iD_{s_i}}\tau _{(u_1,u_2+1)}\cdot \tau _{(v_1,v_2-1)}\\&\qquad +\sum _{n=0}^\infty p_n(2\beta )p_{n+u_2-v_2-1}(-{\tilde{D}}_s)e^{\sum _{i=1}^\infty \alpha _iD_{t_i}+\beta _iD_{s_i}}\tau _{(u_1,u_2-1)}\cdot \tau _{(v_1,v_2+1)}, \end{aligned}$$

where \(\{p_k\}_{k\ge 0}\) are the elementary Schur polynomials defined by (4.4) and \(D_t,\, D_s\) are the Hirota bilinear operators given by (3.12).

Therefore, by comparing the coefficients of \(\alpha _1^m\beta _1^n\) for \(m,n\ge 0\), we obtain the following integrable lattice hierarchies

$$\begin{aligned} \begin{aligned}&(-1)^{u_1+v_1} \frac{1}{n!}D_{s_1}^n \left( \sum _{k+l=m,k,l\ge 0}\frac{(-2)^k}{l!} p_{k+v_1-u_1-1}({\tilde{D}}_t)D_{t_1}^l \right) \tau _{(u_1+1,u_2)}\cdot \tau _{(v_1-1,v_2)}\\&\qquad +(-1)^{u_1+v_1} \frac{1}{n!}D_{s_1}^n \left( \sum _{k+l=m,k,l\ge 0}\frac{2^k}{l!} p_{k+u_1-v_1-1}(-{\tilde{D}}_t)D_{t_1}^l \right) \tau _{(u_1-1,u_2)}\cdot \tau _{(v_1+1,v_2)}\\&\quad =\frac{1}{m!}D_{t_1}^m\left( \sum _{k+l=n,k,l\ge 0} \frac{(-2)^k}{l!}p_{k+v_2-u_2-1}({\tilde{D}}_s)D_{s_1}^l \right) \tau _{(u_1,u_2+1)}\cdot \tau _{(v_1,v_2-1)}\\&\qquad +\frac{1}{m!}D_{t_1}^m\left( \sum _{k+l=n,k,l\ge 0} \frac{2^k}{l!}p_{k+u_2-v_2-1}(-{\tilde{D}}_s)D_{s_1}^l \right) \tau _{(u_1,u_2-1)}\cdot \tau _{(v_1,v_2+1)}. \end{aligned} \end{aligned}$$
(4.10)

The first equation in (3.11) is re-derived if \((u_1,u_2)=(v_1,v_2)\) and \((m,n)=(1,1)\), and the second equation in (3.11) is re-derived if \((u_1,u_2)=(v_1-2,v_2)\) and \((m,n)=(0,1)\).

To conclude this section, we give molecule solutions to the 2-component Pfaff lattice hierarchy.

Proposition 4.8

The 2-component Pfaff lattice hierarchy (4.9) admits the following molecule solutions

$$\begin{aligned} \tau _{(v_1,v_2)}=\text {Pf}(0^{(1)},\cdots ,v_1-1^{(1)},0^{(2)},\cdots ,v_2-1^{(2)}),\quad v_1,v_2\in {\mathbb {N}},\, v_1+v_2\in 2{\mathbb {N}} \end{aligned}$$

with \(\tau _{(0,0)}=1\). Moreover, the Pfaffian elements satisfy the following time evolutions

$$\begin{aligned}&\partial _{t_n}\text {pf}(i^{(1)},j^{(1)})=\text {pf}(i+n^{(1)},j^{(1)})+\text {pf}(i^{(1)},j+n^{(1)}),\quad \partial _{t_n}\text {pf}(i^{(1)},j^{(2)})=\text {pf}(i+n^{(1)},j^{(2)}),\\&\partial _{s_n}\text {pf}(i^{(2)},j^{(2)})=\text {pf}(i+n^{(2)},j^{(2)})+\text {pf}(i^{(2)},j+n^{(2)}),\quad \partial _{s_n}\text {pf}(i^{(1)},j^{(2)})=\text {pf}(i^{(1)},j+n^{(2)}),\\&\partial _{t_n}\text {pf}(i^{(2)},j^{(2)})=\partial _{s_n}\text {pf}(i^{(1)},j^{(1)})=0. \end{aligned}$$

In fact, the above time evolutions for the Pfaffian elements completely characterize the molecule solution of the 2-component Pfaff lattice. Thus, we give the following remark to generalize the expression of the 2-component Pfaffian \( \tau \)-function.

Remark 4.9

If we define Pfaffian elements as

$$\begin{aligned} {\text {pf}}(i^{(k)}, j^{(l)})=\int _{\gamma \times \gamma } \phi _{i}^{(k)}(x) \phi _{j}^{(l)}(y) {\mathbb {S}}(x, y) \omega _k(x) \omega _l(y) d x d y, \quad i,j\in {\mathbb {N}},\quad k,l=1,2, \end{aligned}$$
(4.11)

where \( \{\phi _{i}^{(1)}(x)\}_{i\in {\mathbb {N}}} \) and \( \{\phi _{i}^{(2)}(y)\}_{i\in {\mathbb {N}}} \) are \(\textbf{t}\)- and \(\textbf{s}\)-dependent functions, respectively, such that

$$\begin{aligned} \partial _{t_n}\phi _{i}^{(1)}(x)=\phi _{i+n}^{(1)}(x),\quad \partial _{s_n}\phi _{j}^{(2)}(y)=\phi _{j+n}^{(2)}(y), \end{aligned}$$

then the Pfaffian

$$\begin{aligned} \tau _{(v_1,v_2)}=\text {Pf}(0^{(1)},\cdots ,v_1-1^{(1)},0^{(2)},\cdots ,v_2-1^{(2)}),\quad v_1,v_2\in {\mathbb {N}},\, v_1+v_2\in 2{\mathbb {N}} \end{aligned}$$

is a solution to the 2-component Pfaff lattice hierarchy (4.9). Therefore, we refer to a Pfaffian with moments defined by (4.11) as a 2-component Pfaffian \(\tau \)-function.
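One family of functions satisfying these assumptions (an illustrative choice on our part) is \(\phi _i^{(1)}(x)=x^i e^{\xi (\textbf{t},x)}\), and analogously \(\phi _j^{(2)}(y)=y^j e^{\xi (\textbf{s},y)}\), since then \(\partial _{t_n}\phi _i^{(1)}=x^{i+n}e^{\xi (\textbf{t},x)}=\phi _{i+n}^{(1)}\). A minimal sympy check, with the time flow truncated at \(t_4\):

```python
import sympy as sp

x = sp.Symbol('x')
t = sp.symbols('t1:5')                       # truncated time flow (t1,...,t4)
xi = sum(t[n] * x**(n + 1) for n in range(4))
phi = lambda i: x**i * sp.exp(xi)            # candidate phi_i(x; t)

# check d/dt_n phi_i = phi_{i+n} for the truncated flows
ok = all(sp.simplify(sp.diff(phi(i), t[n - 1]) - phi(i + n)) == 0
         for i in range(3) for n in range(1, 4))
print(ok)                                    # True
```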

5 An Application of the 2-Component Pfaffian \(\tau \)-Function to Combinatorics

In this section, we consider a combinatorial interpretation of the 2-component Pfaffian \(\tau \)-function considered in the previous sections. In particular, we focus on the non-intersecting paths induced by Pfaffians, as discussed in [53].

Let \(D=(V,E)\) be an acyclic directed graph. For a pair of vertices (u, v), let \({\mathcal {P}}(u,v)\) denote the set of all directed paths from u to v. Moreover, given any pair of r-tuples \({\textbf{u}}=(u_1,\cdots ,u_r)\) and \({\textbf{v}}=(v_1,\cdots ,v_r)\) of vertices, let \({\mathcal {P}}({\textbf{u}},{\textbf{v}})\) denote the set of r-tuples of paths \({\textbf{P}}=(P_1,\cdots ,P_r)\) with \(P_i\in {\mathcal {P}}(u_i,v_i)\). In particular, \({\textbf{P}}\) is said to be non-intersecting if any two distinct paths \(P_i\) and \(P_j\) have no vertex in common. We denote by \({\mathcal {P}}_0({\textbf{u}},{\textbf{v}})\) the set of non-intersecting tuples of paths from \({\textbf{u}}\) to \({\textbf{v}}\). We assume that \({\textbf{u}}\) and \({\textbf{v}}\) are ordered sets and say that \({\textbf{u}}\) is D-compatible with \({\textbf{v}}\) if every path \(P\in {\mathcal {P}}(u_i,v_l)\) intersects every path \(Q\in {\mathcal {P}}(u_j,v_k)\) whenever \(i<j\) and \(k<l\).

Let \(w({\textbf{P}})\) denote the weight of an r-tuple of paths \({\textbf{P}}\), defined to be the product of the weights of its components. Moreover, one can define the corresponding generating function

$$\begin{aligned} h({\textbf{u}},{\textbf{v}})=GF({\mathcal {P}}({\textbf{u}},{\textbf{v}}))=\sum _{{\textbf{P}}\in {\mathcal {P}}({\textbf{u}},{\textbf{v}})}w({\textbf{P}}). \end{aligned}$$

In particular, if u and v are single vertices, then h(u,v) denotes the generating function of all paths from u to v. It is known from [53, Thm. 3.2] that the generating function of non-intersecting paths between certain specified vertex sets can be written as a Pfaffian.
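As a concrete illustration of the path generating functions \(h(u,v)\), they can be computed by a simple recursion over the acyclic digraph. The digraph, the edge weights, and the convention \(h(u,u)=1\) for the empty path in the sketch below are our own illustrative assumptions, not taken from [53].

```python
from functools import lru_cache

# Path generating functions h(u, v) on a small made-up acyclic digraph.
edges = {                      # edge -> weight
    ('a', 'b'): 2, ('a', 'c'): 1,
    ('b', 'c'): 3, ('b', 'd'): 1,
    ('c', 'd'): 5,
}
succ = {}
for (p, q), wt in edges.items():
    succ.setdefault(p, []).append((q, wt))

@lru_cache(maxsize=None)
def h(u, v):
    """Sum over all directed paths from u to v of the product of edge
    weights; the empty path gives h(u, u) = 1."""
    if u == v:
        return 1
    return sum(wt * h(q, v) for q, wt in succ.get(u, []))

print(h('a', 'd'))   # 2*1 + 2*3*5 + 1*5 = 37
```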

Proposition 5.1

Let \({\textbf{u}}=(u_1,\cdots ,u_r)\) and \({\textbf{v}}=(v_1,\cdots ,v_s)\) be sequences of vertices in an acyclic digraph D, and assume that \(r+s\) is even. If I is a totally ordered subset of V such that \({\textbf{u}}\) is D-compatible with \({\textbf{v}}\oplus I\), where \({\textbf{v}}\) and I are disjoint, then the generating function of non-intersecting paths from \({\textbf{u}}\) to points in \({\textbf{v}}\oplus I\) could be written as a Pfaffian

$$\begin{aligned} GF({\mathcal {P}}_0({\textbf{u}},{\textbf{v}};I))=\text {Pf}\left[ \begin{array}{cc} \left( Q_I(u_i,u_j) \right) _{1\le i,j\le r} &{} \left( h(u_i,v_{s+1-j}) \right) _{1\le i\le r,1\le j\le s}\\ -\left( h(v_{s+1-j},u_i) \right) _{1\le i\le r,1\le j\le s} &{} {\textbf{0}}_{s\times s} \end{array} \right] . \end{aligned}$$

In the above formula, \(Q_I(u_i,u_j)\) is defined by

$$\begin{aligned} Q_I(u_i,u_j)=\sum _{x<y\in I}\left( h(u_i,x)h(u_j,y)-h(u_i,y)h(u_j,x)\right) . \end{aligned}$$

In fact, the generating function of such non-intersecting paths is a very special case of the two-component Pfaffian \(\tau \)-function discussed previously. To make the connection explicit, we have the following proposition.

Proposition 5.2

By assuming that

$$\begin{aligned} \begin{aligned} {\mathbb {S}}(x,y)=\left\{ \begin{array}{ll} sgn(y-x),&{}x,y\in I,\\ S_1(x,y),&{} x\in I, y\in {\textbf{u}},\\ -S_1(y,x),&{} x\in {\textbf{u}},y\in I\\ 0,&{} \text {otherwise}, \end{array} \right. \end{aligned} \end{aligned}$$
(5.1)

where \(S_1(x,y)\) is given by the formula

$$\begin{aligned} \sum _{x\in I} h(u_i,x)S_1(x,y)=\delta (u_i-y), \end{aligned}$$

and taking \( \omega _1(x)=\sum _{x_i\in I\oplus {\textbf{v}}} \delta _{x_i},\, \omega _2(x)=\sum _{x_i\in {\textbf{u}}} \delta _{x_i}, \) we have

$$\begin{aligned}&GF({\mathcal {P}}_0({\textbf{u}},{\textbf{v}};I))=(-1)^{(s-1)s/2}\text {Pf}\left( \begin{array}{cc} M_{r,r}^{(1,1)}&{}M_{r,s}^{(1,2)}\\ M_{s,r}^{(2,1)}&{}M_{s,s}^{(2,2)} \end{array} \right) \end{aligned}$$

with

$$\begin{aligned} \begin{aligned}&M_{r,r}^{(1,1)}=\left( \int _{x,y\in V} {\mathbb {S}}(x,y)h(u_i,x)h(u_j,y)\omega _1(x)\omega _1(y)dxdy \right) _{i,j=1,\cdots ,r},\\&M_{r,s}^{(1,2)}=\left( \int _{x,y\in V}{\mathbb {S}}(x,y)h(u_i,x)h(y,v_j)\omega _1(x)\omega _2(y)dxdy \right) _{i=1,\cdots ,r,j=1,\cdots ,s},\\&M_{s,r}^{(2,1)}=\left( \int _{x,y\in V}{\mathbb {S}}(x,y)h(x,v_i)h(u_j,y)\omega _2(x)\omega _1(y)dxdy \right) _{i=1,\cdots ,s,j=1,\cdots ,r},\\&M_{s,s}^{(2,2)}=\left( \int _{x,y\in V}{\mathbb {S}}(x,y)h(x,v_i)h(y,v_j)\omega _2(x)\omega _2(y)dxdy \right) _{i,j=1,\cdots ,s}. \end{aligned} \end{aligned}$$
(5.2)

Proof

The proof is by direct verification. One verifies the claim by noting that

$$\begin{aligned}&\int _{x,y\in V} {\mathbb {S}}(x,y)h(u_i,x)h(u_j,y)\omega _1(x)\omega _1(y)dxdy\\&\quad =\sum _{x,y\in I}sgn(y-x)h(u_i,x)h(u_j,y)=Q_I(u_i,u_j), \end{aligned}$$

and that

$$\begin{aligned}&\int _{x,y\in V}{\mathbb {S}}(x,y)h(u_i,x)h(y,v_j)\omega _1(x)\omega _2(y)dxdy\\&\quad =\int _{y\in V} \left( \sum _{x\in I} S_1(x,y)h(u_i,x) \right) h(y,v_j)\omega _2(y)dy\\&\quad =\int _{y\in V} \delta (y-u_i)h(y,v_j)\omega _2(y)dy=h(u_i,v_j). \end{aligned}$$

Therefore, by rearranging rows and columns, we obtain the generating function of such non-intersecting paths. \(\square \)
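The first identity used in this proof can also be checked numerically: for an ordered set \(I\) and arbitrary values of \(h(u_i,\cdot )\) and \(h(u_j,\cdot )\) on \(I\), the double sum against \(sgn(y-x)\) reproduces \(Q_I(u_i,u_j)\). A minimal sketch with made-up random values:

```python
import numpy as np

# Check: sum_{x,y in I} sgn(y - x) h(u_i, x) h(u_j, y) = Q_I(u_i, u_j)
# for arbitrary (here: random) values of h on an ordered finite set I.
rng = np.random.default_rng(0)
m = 6                                    # |I|
hi, hj = rng.random(m), rng.random(m)    # h(u_i, x_k), h(u_j, x_k) with x_1 < ... < x_m

lhs = sum(np.sign(l - k) * hi[k] * hj[l] for k in range(m) for l in range(m))
rhs = sum(hi[k] * hj[l] - hi[l] * hj[k] for k in range(m) for l in range(k + 1, m))
print(np.isclose(lhs, rhs))              # True
```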

It should be mentioned that the moments given in (5.2) coincide with those in (4.11) if we define \(\phi _i^{(1)}(x)=h(u_i,x)\) and \(\phi _j^{(2)}(x)=h(x,v_j)\). In fact, the result of Stembridge has been generalized in [19]. There, the starting points \({\textbf{u}}\) are generalized to a configuration \(({\textbf{u}},J)\), while the ending points remain \(({\textbf{v}},I)\). The total number of vertices in \( {\textbf{u}} \) and \( {\textbf{v}} \) is still \(r+s\), which is even. Choose an ordering of the vertices in V such that for any \( u_1\in {\textbf{u}} \) (\( v_1\in {\textbf{v}} \)) and \( u_2\in J \) (\( v_2\in I \)), we have \( u_1<u_2 \) (\( v_1<v_2 \)). If there are no paths from I to J, then the generating function of such non-intersecting paths is

$$\begin{aligned} GF({\mathcal {P}}_0({\textbf{u}},J;{\textbf{v}},I))=\text {Pf}\left[ \begin{array}{cc} \left( Q_I(u_i,u_j)\right) _{1\le i,j\le r}&{} \left( h(u_i,v_{j}) \right) _{1\le i\le r,1\le j\le s}\\ -\left( h(u_{j},v_i) \right) _{1\le i\le s,1\le j\le r} &{} \left( Q^{t}_J(v_i,v_j) \right) _{1\le i,j\le s} \end{array} \right] , \end{aligned}$$

where \( Q_{J}^{t} \) is defined as

$$\begin{aligned} Q_J^{t}(v_i,v_j)=\sum _{x<y\in J}\left( h(x,v_i)h(y,v_j)-h(y,v_i)h(x,v_j)\right) . \end{aligned}$$

For this general case, we have the following proposition.

Proposition 5.3

Let \( S_1(x,y) \) be a function on \( {\textbf{v}}\times {\textbf{u}} \) that satisfies

$$\begin{aligned} \sum _{x\in {\textbf{v}}} h(u_i,x)S_1(x,y)=\delta (u_i-y). \end{aligned}$$

Then by taking

$$\begin{aligned} \begin{aligned} {\mathbb {S}}(x,y)=\left\{ \begin{array}{ll} sgn(y-x),&{} x,y\in I\text { or }x,y\in J,\\ S_1(x,y),&{} x\in {\textbf{v}},\,y\in {\textbf{u}},\\ -S_1(y,x),&{}x\in {\textbf{u}},\,y\in {\textbf{v}},\\ 0,&{}\text {otherwise}, \end{array} \right. \end{aligned} \end{aligned}$$
(5.3)

and

$$\begin{aligned} \omega _1(x)=\sum _{x_i\in I\oplus {\textbf{v}}}\delta _{x_i},\quad \omega _2(x)=\sum _{x_i\in J\oplus {\textbf{u}}}\delta _{x_i}, \end{aligned}$$
(5.4)

we have

$$\begin{aligned} GF({\mathcal {P}}_0({\textbf{u}},J;{\textbf{v}},I))=\text {Pf}\left( \begin{array}{cc} M_{r,r}^{(1,1)}&{}M_{r,s}^{(1,2)}\\ M_{s,r}^{(2,1)}&{}M_{s,s}^{(2,2)} \end{array} \right) , \end{aligned}$$

where the blocks \(M\) are given in (5.2), with the kernel and weights defined in (5.3) and (5.4), respectively.

6 Concluding Remarks

In this paper, we have developed ideas for how to properly define multiple skew-orthogonal polynomials. This concept should be appealing, as multiple orthogonal polynomials have been widely investigated in the fields of random matrices and integrable systems. As an application, we considered appropriate time deformations of multiple skew-orthogonal polynomials, which turned out to have tight connections with the Pfaff–Toda hierarchy considered earlier by Takasaki. We called the corresponding integrable hierarchy the 2-component Pfaff lattice hierarchy because it can be viewed from the perspective of multiple skew-orthogonal polynomials. As mentioned in Takasaki’s paper [54], the Pfaff lattice hierarchy and the multi-component Pfaff lattice hierarchy have many common properties. However, multiple skew-orthogonal polynomials have the compact recurrence relations shown in (3.10a)–(3.10d), which play important roles in the formulation of spectral problems for the 2-component Pfaff lattice hierarchy. The solutions to this hierarchy are given by Pfaffians of moment matrices, which are often known as Pfaffian \( \tau \)-functions. An analogue of this 2-component Pfaffian \( \tau \)-function can be found in combinatorics, in the generating functions of non-intersecting paths in a digraph, as discussed in the last section.

There are still interesting problems to pursue. One is to seek proper applications in random matrix theory. Both the Gaussian and chiral unitary models with a source are examples of determinantal point processes. Since Pfaffian point processes also arise naturally in random matrix theory, we expect to find a random matrix model characterized by these multiple skew-orthogonal polynomials. Besides, there are several 2-component BKP hierarchies [37, 51], and whether their solutions are related to these multiple skew-orthogonal polynomials is worth studying.