Abstract
The main objective of this work is to present some important results and formulas in the theory of Humbert matrix functions by using the concepts of matrix functional calculus. We define Humbert matrix functions assuming that not all the matrices involved are commuting. We show that these two variable Humbert matrix functions follow naturally as confluent cases of Appell matrix functions. We determine their regions of convergence, integral representations, transformation formulas, summation formulas, contiguous relations and matrix differential equations satisfied by them.
1 Introduction
The theory of special functions and its generalizations arises widely in physics, probability theory, engineering and Lie theory. The Humbert functions constitute a set of seven hypergeometric functions of two variables that are confluent cases of the two variable Appell hypergeometric functions and generalize Kummer's confluent hypergeometric function \(_{1}F_{1}\) of one variable. The class of classical Humbert functions has recently been studied for reduction formulas (Brychkov 2017; Brychkov et al. 2017) and summation formulas (Choi and Rathie 2015).
Special matrix functions play an important role in certain parts of mathematics and physics. The gamma and beta matrix functions (Jodar and Cortes 1998a) and the Gauss hypergeometric matrix function (Jodar and Cortes 1998b) have been studied in detail. The matrix analogues of hypergeometric functions of two and more variables have recently been obtained in Dwivedi and Sahai (2018a, b). Some of the recent work on Appell matrix functions is given in Altin et al. (2014), Batahan and Metwally (2009) and Dwivedi and Sahai (2019). As confluent cases of hypergeometric functions are useful in certain areas of physics and applied mathematics, we present here the confluent cases of the two variable Appell matrix functions, referred to as Humbert matrix functions. We remark that certain Humbert matrix functions have earlier been discussed in Rida et al. (2010), in the case when all the matrices involved were commuting. We find the systems of matrix differential equations satisfied by these Humbert matrix functions. We also determine the regions of convergence, integral representations and transformation formulas for these functions. The infinite summation formulas and contiguous relations for the Humbert matrix functions \(\psi _1\) and \(\psi _2\) are also presented. Our approach to Humbert matrix functions is an alternative to the one discussed in Mathai (1993), in which matrices were introduced as arguments in the Humbert functions. Others who have worked on Appell and Humbert matrix functions include Altin et al. (2014) and Rashwan et al. (2013, 2016).
The section-wise treatment is as follows. In Sect. 2, we give basic definitions and preliminaries that are needed in the subsequent sections. In Sect. 3, we introduce the Humbert matrix functions as limiting cases of the Appell matrix functions. We find their regions of convergence and give the matrix differential equations obeyed by them. In Sect. 4, we find various integral representations of Humbert matrix functions. In Sect. 5, we prove numerous transformation formulas satisfied by these matrix functions. Finally, in Sect. 6, we present infinite summation formulas and contiguous relations for Humbert matrix functions \(\psi _1\) and \(\psi _2\).
2 Preliminaries and Basic Definitions
Let \({\mathbb {C}}^{r \times r}\) denote the vector space of all r-square matrices with complex entries. For \(A\in {\mathbb {C}}^{r\times r}\), \(\sigma (A)\) is the spectrum of A. The spectral abscissa of A is given by \(\alpha (A) = \max \{\mathfrak {R}(z) \mid z\in \sigma (A)\}\), where \(\mathfrak {R}(z)\) denotes the real part of a complex number z. If \(\beta (A) = \min \{\mathfrak {R}(z) \mid z \in \sigma (A)\}\), then \(\beta (A) = -\alpha (-A)\). A square matrix A is said to be positive stable if \(\beta (A) >0\). The 2-norm of A denoted by \(\Vert A\Vert \) is defined by
where, for any vector x in the r-dimensional complex space \({\mathbb {C}}^{r}\), \(\Vert {x}\Vert _2 = (x^*x)^\frac{1}{2}\) is the Euclidean norm of x and \(A^*\) denotes the conjugate transpose of A. If f(z) and g(z) are holomorphic functions of the complex variable z defined on an open set \(\Omega \) of the complex plane, and A is a matrix in \({\mathbb {C}}^{r \times r}\) with \(\sigma (A)\subset \Omega \), then from the properties of the matrix functional calculus (Dunford and Schwartz 1957) it follows that
Further, if \(B\in {\mathbb {C}}^{r \times r}\) is a matrix with \(\sigma (B)\subset \Omega \) and \(AB = BA\), then
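As an illustrative aside of ours (not part of the original development), the spectral quantities \(\alpha (A)\), \(\beta (A)\), the positive stability test and the 2-norm introduced above can be sketched numerically with NumPy; the snippet below is only a minimal check of the definitions.

```python
import numpy as np

def spectral_abscissa(A):
    """alpha(A): the largest real part among the eigenvalues of A."""
    return max(np.linalg.eigvals(A).real)

def beta(A):
    """beta(A): the smallest real part among the eigenvalues of A."""
    return min(np.linalg.eigvals(A).real)

def is_positive_stable(A):
    """A is positive stable iff beta(A) > 0."""
    return beta(A) > 0

# Upper triangular example: the eigenvalues are the diagonal entries 2 and 3.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
print(np.isclose(spectral_abscissa(A), 3.0))        # True
print(np.isclose(beta(A), 2.0))                     # True
print(is_positive_stable(A))                        # True
print(np.isclose(beta(A), -spectral_abscissa(-A)))  # True, i.e. beta(A) = -alpha(-A)
# The 2-norm of A equals its largest singular value.
print(np.isclose(np.linalg.norm(A, 2),
                 np.linalg.svd(A, compute_uv=False)[0]))  # True
```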
Let \(A\in {\mathbb {C}}^{r \times r}\) be such that \(\mathfrak {R}(z)>0\) for all eigenvalues z of A. Then \(\Gamma (A)\) can be expressed as (Jodar and Cortes 1998a)
Furthermore, if \(A+nI\) is invertible for all integers \(n\ge 0\), then the reciprocal gamma function is defined as (Jodar and Cortes 1998a)
If \(A \in {\mathbb {C}}^{r\times r}\) is a positive stable matrix and \(n\ge 1\) is an integer, then the gamma matrix function can also be defined as the limit (Jodar and Cortes 1998a)
By application of the matrix functional calculus, for A in \({\mathbb {C}}^{r\times r}\), the Pochhammer symbol is given by
This gives
Using the Schur decomposition of A (Golub and Van Loan 1989; Van Loan 1977), it follows that
We shall use the notation \(\Gamma \left( \begin{array}{c} A_1, \ldots , A_p \\ B_1, \ldots , B_q \end{array}\right) \) for \(\Gamma (A_1) \cdots \Gamma (A_p) \Gamma ^{-1}(B_1) \cdots \Gamma ^{-1} (B_q)\).
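As a numerical illustration of ours, the matrix shifted factorial \((A)_0 = I\), \((A)_m = A(A+I)\cdots (A+(m-1)I)\) used throughout can be computed directly; for a \(1\times 1\) matrix it reduces to the classical scalar Pochhammer symbol.

```python
import numpy as np
from scipy.special import poch

def matrix_pochhammer(A, m):
    """Matrix shifted factorial: (A)_0 = I, (A)_m = A (A+I) ... (A+(m-1)I)."""
    r = A.shape[0]
    P = np.eye(r)
    for k in range(m):
        P = P @ (A + k * np.eye(r))
    return P

A = np.array([[1.0, 2.0],
              [0.0, 1.5]])
# (A)_2 = A (A + I) by definition.
print(np.allclose(matrix_pochhammer(A, 2), A @ (A + np.eye(2))))  # True
# The scalar (1 x 1) case agrees with the classical Pochhammer symbol.
a = np.array([[0.7]])
print(np.isclose(matrix_pochhammer(a, 4)[0, 0], poch(0.7, 4)))  # True
```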
Furthermore, if A, B and \(A+B\) are positive stable matrices in \({\mathbb {C}}^{r \times r}\) such that \(AB = BA\), then the beta matrix function is defined as (Jodar and Cortes 1998a, b)
The following result is needed to find certain integral representations of the Humbert matrix functions.
Lemma 1
Dwivedi and Sahai (2018a) Let A, B, C be commuting matrices in \({\mathbb {C}}^{r \times r}\) such that A, B, C and \(A+B+C\) are positive stable. Then
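For commuting positive stable matrices the beta and gamma matrix functions act eigenvalue-wise, so the relation \({\mathfrak {B}}(A,B) = \Gamma (A)\Gamma (B)\Gamma ^{-1}(A+B)\) reduces to the scalar identity on the spectrum. The sketch below (an illustration of ours, restricted to diagonal matrices) checks this relation and the defining Euler integral in that reduced setting.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import beta as scalar_beta, gamma as scalar_gamma

# Diagonal (hence commuting) positive stable matrices.
A = np.diag([1.5, 2.0])
B = np.diag([1.2, 3.0])

# Eigenvalue-wise beta matrix function versus Gamma(A) Gamma(B) Gamma(A+B)^{-1}.
beta_AB = np.diag(scalar_beta(np.diag(A), np.diag(B)))
gamma_rel = (np.diag(scalar_gamma(np.diag(A)))
             @ np.diag(scalar_gamma(np.diag(B)))
             @ np.linalg.inv(np.diag(scalar_gamma(np.diag(A + B)))))
print(np.allclose(beta_AB, gamma_rel))  # True

# Defining integral int_0^1 t^(a-1) (1-t)^(b-1) dt for one pair of eigenvalues.
val, _ = quad(lambda t: t**(1.5 - 1) * (1 - t)**(1.2 - 1), 0, 1)
print(np.isclose(val, scalar_beta(1.5, 1.2)))  # True
```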
3 Humbert Matrix Functions
Humbert (1920) defined seven confluent forms of the four Appell functions and denoted them by \(\phi _1\), \(\phi _2\), \(\phi _3\), \(\psi _1\), \(\psi _2\), \(\varXi _1\), \(\varXi _2\). The work of Humbert is described in detail in Srivastava and Karlsson (1985). We begin by finding the confluent case of the Gauss hypergeometric matrix function (Jodar and Cortes 1998b).
Consider the Gauss hypergeometric matrix function
where \((A)_0 = I\) and \((A)_m = A (A+I) \cdots (A+(m-1)I)\), \(m \ge 1\), for a matrix \(A \in {\mathbb {C}}^{r\times r}\). Thus \(\left( \frac{1}{\varepsilon }I\right) _m \ \varepsilon ^m = I (I+\varepsilon I) \cdots (I+(m-1)\varepsilon I)\). On letting \(\varepsilon \) tend to zero, we get
Therefore
The series on the right hand side of (15) is the matrix analogue of the confluent hypergeometric function \(_1F_1\). The matrix series \(_1F_1\) converges for all z when \(\alpha (B) - \beta (C) < 0\). Hence
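The confluence \({}_2F_1\left( \frac{1}{\varepsilon }I, B; C; \varepsilon z\right) \rightarrow {}_1F_1(B; C; z)\) described above can be observed numerically. The following sketch (ours, scalar case only) compares a truncated \({}_2F_1\) series at small \(\varepsilon \) with SciPy's \({}_1F_1\).

```python
import math
from scipy.special import poch, hyp1f1

def hyp2f1_trunc(a, b, c, z, N=40):
    """Truncated Gauss 2F1 series (scalar case; adequate for small |z|)."""
    return sum(poch(a, m) * poch(b, m) / poch(c, m) * z**m / math.factorial(m)
               for m in range(N))

b, c, z = 0.5, 1.5, 0.5
eps = 1e-5
# (1/eps)_m eps^m -> 1 as eps -> 0, so 2F1(1/eps, b; c; eps z) -> 1F1(b; c; z).
limit_val = hyp2f1_trunc(1.0 / eps, b, c, eps * z)
print(abs(limit_val - hyp1f1(b, c, z)) < 1e-3)  # True
```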
The confluent matrix function \({}_1F_1 (B; \, C; \, z)\) constitutes a first degeneration of the Gauss hypergeometric matrix function \({}_2F_1 (A, \, B, \, C; \, z)\). The properties of \({}_1F_1 (B; \, C; \, z)\) can be deduced by passing to the limit in the formulas relating to the matrix function \(_2F_1\). Consider, for example, the integral representation of the matrix function \({}_2F_1 \left( \frac{1}{\varepsilon }I, \, B;\ C; \ \varepsilon z\right) \) (Jodar and Cortes 1998b), given by
Letting \(\varepsilon \rightarrow 0\) leads to
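In the scalar case this limiting integral representation is the classical Euler formula \({}_1F_1(b; c; z) = \frac{\Gamma (c)}{\Gamma (b)\Gamma (c-b)} \int _0^1 e^{zt}\, t^{b-1} (1-t)^{c-b-1}\, dt\) for \(\mathfrak {R}(c)> \mathfrak {R}(b) > 0\) (a scalar stand-in for B, C and \(C-B\) positive stable); a quick quadrature check of ours:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, hyp1f1

b, c, z = 1.2, 2.5, 1.1  # requires c > b > 0 in this scalar check
val, _ = quad(lambda t: np.exp(z * t) * t**(b - 1) * (1 - t)**(c - b - 1), 0, 1)
val *= gamma(c) / (gamma(b) * gamma(c - b))
print(np.isclose(val, hyp1f1(b, c, z)))  # True
```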
We now extend this technique of finding the confluent form of the Gauss hypergeometric matrix function to the Appell matrix functions. Let A, \(A'\), B, \(B'\), C and \(C'\) be matrices in \({\mathbb {C}}^{r\times r}\) such that each \(C+kI\), \(C'+kI\) is invertible for all integers \(k \ge 0\). Then the Humbert matrix functions are defined by (Rida et al. 2010)
It can be verified using (14) that these matrix functions of two variables are confluent cases of Appell matrix functions. Indeed, we have
where, for matrices A, \(A'\), B, \(B'\), C, \(C'\in {\mathbb {C}}^{r\times r}\) such that \(C+kI\), \(C'+kI\) are invertible for all integers \(k \ge 0\), the Appell matrix functions \(F_1\), \(F_2\), \(F_3\) are defined by Altin et al. (2014) and Dwivedi and Sahai (2019)
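In the scalar (\(1\times 1\)) case, the confluence of the Appell functions into the Humbert functions can be checked with truncated double series. The snippet below is an illustration of ours using the classical series \(F_1(a,b,b';c;x,y)=\sum \frac{(a)_{m+n}(b)_m(b')_n}{(c)_{m+n}\, m!\, n!}x^m y^n\) and \(\phi _1(a,b;c;x,y)=\sum \frac{(a)_{m+n}(b)_m}{(c)_{m+n}\, m!\, n!}x^m y^n\).

```python
import math
from scipy.special import poch

def F1_trunc(a, b, bp, c, x, y, N=25):
    """Truncated Appell F1 double series (scalar case)."""
    return sum(poch(a, m + n) * poch(b, m) * poch(bp, n)
               / (poch(c, m + n) * math.factorial(m) * math.factorial(n))
               * x**m * y**n
               for m in range(N) for n in range(N))

def phi1_trunc(a, b, c, x, y, N=25):
    """Truncated Humbert phi_1 double series (scalar case)."""
    return sum(poch(a, m + n) * poch(b, m)
               / (poch(c, m + n) * math.factorial(m) * math.factorial(n))
               * x**m * y**n
               for m in range(N) for n in range(N))

a, b, c, x, y = 0.6, 0.9, 1.7, 0.2, 0.4
eps = 1e-6
# F1(a, b, 1/eps; c; x, eps*y) -> phi_1(a, b; c; x, y) as eps -> 0.
print(abs(F1_trunc(a, b, 1.0 / eps, c, x, eps * y)
          - phi1_trunc(a, b, c, x, y)) < 1e-4)  # True
```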
Let \(U_x\), \(U_y\), \(U_{xx}\), \(U_{xy}\), \(U_{yy}\) denote \(\frac{\partial U}{\partial x}\), \(\frac{\partial U}{\partial y}\), \(\frac{\partial ^2 U}{\partial x^2}\), \(\frac{\partial ^2 U}{\partial x \partial y}\), \(\frac{\partial ^2 U}{\partial y^2}\), respectively. Then the matrix differential equations satisfied by Humbert matrix functions under certain conditions are given below. Suppose C is a matrix in \({\mathbb {C}}^{r \times r}\) such that \(C+kI\) is invertible for all integers \(k\ge 0\) and \(BC = CB\). Then the matrix differential equations satisfied by the Humbert matrix function \(\phi _1\) are
We remark that the second equation in (35) is obtained by putting \(B'=\frac{1}{\varepsilon }I\), replacing y by \(\varepsilon y\) and taking the limit \(\varepsilon \rightarrow 0\) in the matrix differential equation satisfied by \(F_1\); see Altin et al. (2014, Theorem 1).
We now list the matrix differential equations satisfied by the remaining Humbert matrix functions. If \(B'C=CB'\), then the Humbert matrix function \(\phi _2\) satisfies the following matrix differential equations
The matrix differential equations satisfied by Humbert matrix function \(\phi _3\) are
Suppose B, C and \(C'\) are commuting matrices in \({\mathbb {C}}^{r \times r}\). Then the Humbert matrix function \(\psi _1\) obeys the following matrix differential equations
For \(CC' = C'C\), the matrix differential equations satisfied by the Humbert matrix function \(\psi _2\) are
If \(BC = CB\) and \(A'A = A A'\), then the matrix differential equations satisfied by the Humbert matrix function \(\varXi _1\) are
If B commutes with C, then the Humbert matrix function \(\varXi _2\) obeys the following matrix differential equations
The Appell matrix function \(F_1\), defined in (32), converges absolutely in the region \(\alpha (A) < \beta (C)\), \(\alpha (B) < 1\), \(\alpha (B') < 1\) and \(\vert x\vert < 1\), \(\vert y\vert < 1\). Accordingly, the convergence conditions of \(F_1\left( A, B, \frac{1}{\varepsilon }I; C; x, \varepsilon y\right) \) are \(\alpha (A)< \beta (C)\), \(\alpha (B)< 1\), \(\vert x\vert< 1\) and \(\vert \varepsilon y\vert < 1\). Since \(\varepsilon \) is sufficiently small, \(\vert \varepsilon y\vert < 1\) gives \(\vert y\vert < \frac{1}{\varepsilon }\); as \(\varepsilon \rightarrow 0\), this yields \(\vert y\vert < \infty \). We summarize this as
Theorem 1
For positive stable matrices A, B, C in \({\mathbb {C}}^{r\times r}\), the Humbert matrix function \(\phi _1\) defined in (18) converges absolutely for
We now give the convergence conditions for the remaining six Humbert matrix functions, viz. \(\phi _2\), \(\phi _3\), \(\psi _1\), \(\psi _2\), \(\varXi _1\) and \(\varXi _2\). The proofs of these theorems are similar to that of Theorem 1 and are omitted.
Theorem 2
Let B, \(B'\), C be positive stable matrices in \({\mathbb {C}}^{r\times r}\). Then the Humbert matrix function \(\phi _2\) defined in (19) converges absolutely for
Theorem 3
Let B, C be positive stable matrices in \({\mathbb {C}}^{r\times r}\). Then the Humbert matrix function \(\phi _3\) defined in (20) converges absolutely for
Theorem 4
Let A, B, C, \(C'\) be positive stable matrices in \({\mathbb {C}}^{r\times r}\). Then the Humbert matrix function \(\psi _1\) defined in (21) converges absolutely for
Theorem 5
Let A, C, \(C'\) be positive stable matrices in \({\mathbb {C}}^{r\times r}\). Then the Humbert matrix function \(\psi _2\) defined in (22) converges absolutely for
Theorem 6
Let A, \(A'\), B, C be positive stable matrices in \({\mathbb {C}}^{r\times r}\). Then the Humbert matrix function \(\varXi _1\) defined in (23) converges absolutely for
Theorem 7
Let A, B, C be positive stable matrices in \({\mathbb {C}}^{r\times r}\). Then the Humbert matrix function \(\varXi _2\) defined in (24) converges absolutely for
4 Integral Representations
Theorem 8
Let A, B, C be matrices in \({\mathbb {C}}^{r\times r}\) such that \(CA = AC\), \(CB = BC\) and A, C and \(C-A\) are positive stable. Then for \(\vert x\vert < 1\) and \(\vert y\vert < \infty \), the Humbert matrix function \(\phi _1\) defined in (18) can be represented as
Proof
For positive stable matrices A, C and \(C-A\), the integral representation of the Appell matrix function \(F_1\) (Altin et al. 2014) gives
Taking the limit \(\varepsilon \rightarrow 0\) and using (14), we get the integral representation (49). \(\square \)
Theorem 9
Let B, \(B'\), C be commuting matrices in \({\mathbb {C}}^{r\times r}\) such that B, \(B'\), C and \(C-B-B'\) are positive stable. Then for \(\vert x\vert < \infty \) and \(\vert y\vert < \infty \), the Humbert matrix function \(\phi _2\) defined in (19) can be put in the integral form as
where \(u \ge 0, v\ge 0, u+v \le 1\).
Theorem 10
Let B, C be commuting matrices in \({\mathbb {C}}^{r\times r}\) such that B, C and \(C-B\) are positive stable. Then for \(\vert x\vert < \infty \) and \(\vert y\vert < \infty \), the Humbert matrix function \(\phi _3\) defined in (20) can be put in the integral form as
Theorem 11
Let A, B, C, \(C'\) be commuting matrices in \({\mathbb {C}}^{r\times r}\) such that A, B, C, \(C'\), \(C-B\) and \(C'-A\) are positive stable. Then for \(\vert x\vert < 1\) and \(\vert y\vert < \infty \), the Humbert matrix function \(\psi _1\) defined in (21) can be put in the integral form as
Theorem 12
Let \(A'\), B, C be commuting matrices in \({\mathbb {C}}^{r\times r}\) such that \(A'\), B, and \(C-A'-B\) are positive stable. Then for \(\vert x\vert < 1\) and \(\vert y\vert < \infty \), the Humbert matrix function \(\varXi _1\) defined in (23) can be put in the integral form as
Theorem 13
Let B, C be commuting matrices in \({\mathbb {C}}^{r\times r}\) such that B, C and \(C-B\) are positive stable. Then for \(\vert x\vert < 1\) and \(\vert y\vert < \infty \), the Humbert matrix function \(\varXi _2\) defined in (24) can be put in the integral form as
Theorem 14
Let A, B and C be matrices in \({\mathbb {C}}^{r\times r}\) such that \(AC = CA\), \(BC = CB\) and A, C, \(C-A\) are positive stable. Then, for \(\vert z\vert < 1\), \(\vert w\vert <\infty \), the Humbert matrix function \(\phi _1\) can also be represented as
Proof
It is enough to substitute \(t=\frac{u}{1+u}\) in Theorem 8. \(\square \)
Theorem 15
The following integral representations of the Humbert matrix function \(\phi _{2}\)
and
hold, where A, \(B,\ C\) are commuting matrices such that A, \(B,\ C\), \(C-A\) and \(C-B\) are positive stable.
Proof
Similarly, the other integral representation of \(\phi _2\) can be established. \(\square \)
Theorem 16
Let A, B, C and D be matrices in \({\mathbb {C}}^{r\times r}\) such that \(CD = DC\), \(BC = CB\) and C, D, \(C+D\) are positive stable. Then the following integral representation holds:
Proof
Using the series expansions of the matrix functions \(_{1}F_{1}\left( A;C;zt\right) \) and \({}_{1}F_{1}\left( B;D;w(1-t)\right) \) together with (10), the right hand side of (59) yields
\(\square \)
Theorem 17
The Humbert matrix function \(\phi _{2}\) defined in (19) has the integral representations
Proof
Using the series expansion of \({}_{1}F_{1}\) and formulas (10)–(11), we can easily obtain the required results (61), (62) and (63). \(\square \)
In the next theorem, we give the integral representations of the Humbert matrix function \(\phi _3\). Since the proofs are similar to that of Theorem 16, we omit them.
Theorem 18
The Humbert matrix function \(\phi _{3}\) defined in (20) has the integral representations
Theorem 19
The Humbert matrix function \(\psi _{1}\) defined in (21) can be put in the integral form as
Proof
Since B, C and \(C-B\) are positive stable matrices and \(BC = CB\), we have
Theorem 20
The Humbert matrix function \(\psi _{1}\) defined in (21) can be put in the integral form as
Proof
The proof of (69) is similar to that of (66), and (70) can be easily obtained using the definition of the gamma matrix function given in (4). We remark that a particular case of Eq. (69) appears in Rida et al. (2010), where all the matrices involved are commuting. \(\square \)
Theorem 21
The integral representation of Humbert matrix function \(\psi _{2}\) is given by
Theorem 22
An integral representation for Humbert matrix functions \(\psi _{1}\) can be given in terms of \(\psi _{2}\) as follows:
Theorem 23
An integral representation for Appell matrix function \(F_{2}\) can be given in terms of Humbert matrix function \(\psi _{1}\) as follows:
Theorem 24
The Humbert matrix function \(\Xi _{1}\) has the following integral form:
Proof
If A, D and \(D-A\) are positive stable matrices and \(AD = DA\), we have
If B, D and \(D-B\) are positive stable matrices and \(BD = DB\), we have
This completes the proof of (74). Similarly integral representations (75)–(77) can be obtained. \(\square \)
Theorem 25
The Humbert matrix functions \(\Xi _{1}\) and \(\Xi _2\), for commuting matrices C, D, E, have the following integral representations
Proof
It can be verified using the integral representation of beta matrix function, (11), in the right hand side of (82) and (83). \(\square \)
5 Transformation Formulas
In this section, some transformations for Humbert matrix functions are derived with the help of hypergeometric matrix functions.
Theorem 26
Let A, B, C be matrices in \({\mathbb {C}}^{r\times r}\). Then the Humbert matrix functions \(\phi _1\), \(\phi _2\) and \(\phi _3\) satisfy the following transformations
Note that we require \(A+kI\), \(k\ge -1\), to be invertible in (86) and (88).
Proof
Writing the series representation of \(\;_{1}{F}_{1}\) in the right-hand side of Eq. (84), we have
This completes the proof of (84). The proof of (85) follows in a similar manner. To prove (86), we replace m by \(m-n\) in (19) to get
Using the following identities
and
we have the following equation
Using the series representation of \(\;_{2}{F}_{1}\), we have the required transformation (86). The proofs of (87)–(89) are similar to that of (86), so we omit them. \(\square \)
Note that the transformation formula of \(F_{1}\left( A,B,B^{^{\prime }},C;z;w\right) \), Abd-Elmageed et al. (2018),
gives (84) on taking \(\frac{1}{\varepsilon }I\) in place of \(B'\) and \(\varepsilon w\) in place of w, and letting \(\varepsilon \rightarrow 0\).
Theorem 27
Let A, B, C, D be matrices in \({\mathbb {C}}^{r\times r}\). Then the Humbert matrix functions \(\psi _1\) and \(\psi _2\) satisfy the following transformations
Proof
Using \((1-z)^{-A} = \sum _{n=0}^{\infty } \frac{(A)_n}{n!} z^n\) and \((A)_{m+n}(A+(m+n)I)_{k}=(A)_{m+n+k}\), we have
Since the matrices B, C in \({\mathbb {C}}^{r\times r}\) are commuting and C, \(C-B\) are positive stable, we have (Defez and Jodar 2002)
This completes the proof of (93). The proofs of (94) and (95) follow directly from the definition of Humbert matrix functions \(\psi _1\) and \(\psi _2\). \(\square \)
Corollary 1
The following transformation formula holds:
Proof
It follows directly from the Eq. (93) of Theorem 27. \(\square \)
Theorem 28
Let A, B and C be matrices in \({\mathbb {C}}^{r\times r}\) such that \(C+kI\) is invertible for all integers \(k\ge 0\). Then the following differential matrix transformation formulas are satisfied by the Humbert matrix functions \(\phi _1\), \(\phi _2\) and \(\phi _3:\)
Proof
The differential matrix transformation of \(F_{1}\left( A,B,B^{^{\prime }},C;z;w\right) \), Abd-Elmageed et al. (2018), is given by
where all the matrices were assumed to commute. If we allow the matrix A to commute with B and keep \(B'\) and C arbitrary, then the following matrix differential transformation formula holds:
On replacing \(B'\) by \(\frac{1}{\varepsilon }I\) and w by \(\varepsilon w\) in the above equation and taking \(\varepsilon \rightarrow 0\), we get the result (102). To prove (103), we differentiate Eq. (18) with respect to w to get
Proceeding similarly r-times, we get the required relation (103). Using the matrix differential formulas for Appell matrix functions \(F_1\) as given in Abd-Elmageed et al. (2018) and the way (103) is proved, we are able to prove the matrix differential formulas (104)–(115). \(\square \)
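The one-variable prototype of these differential formulas is \(\frac{d}{dz}\, {}_1F_1(b; c; z) = \frac{b}{c}\, {}_1F_1(b+1; c+1; z)\), which can be verified numerically in the scalar case (an illustrative check of ours, not a formula from the paper's matrix setting):

```python
import numpy as np
from scipy.special import hyp1f1

b, c, z, h = 0.7, 1.9, 0.5, 1e-6
# Central-difference derivative of 1F1(b; c; z) with respect to z.
numeric = (hyp1f1(b, c, z + h) - hyp1f1(b, c, z - h)) / (2 * h)
closed = (b / c) * hyp1f1(b + 1, c + 1, z)
print(np.isclose(numeric, closed, atol=1e-6))  # True
```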
Theorem 29
The following matrix differential transformation formulas for Humbert matrix functions \(\psi _{1}\) and \(\psi _2\) hold:
Proof
From (21), we have
Repeating the above process, we eventually arrive at
Proceeding in the same manner, differential formulas (119) – (130) can be easily obtained. \(\square \)
We discussed above the differential transformation formulas for \(\psi _1\) and \(\psi _2\). Analogously, differential transformation formulas for the Humbert matrix functions \(\Xi _1\) and \(\Xi _2\) can be obtained; these results are presented in the next theorem. As the proofs are similar, we omit them.
Theorem 30
The following matrix differential transformation formulas are satisfied by Humbert matrix functions \(\Xi _{1}\) and \(\Xi _{2}\):
6 Infinite Summation Formulas Associated with \(\psi _1\) and \(\psi _2\)
Theorem 31
The Humbert matrix functions \(\psi _{1}\) and \(\psi _2\) satisfy the following infinite summation formulas:
Proof
The right hand side of (146) yields
This completes the proof of (146). Proceeding similarly, the other infinite summation formulas can be established. \(\square \)
The following abbreviations will be used in the contiguous matrix relations, given in the next theorem, obeyed by the Humbert matrix functions \(\psi _1\) and \(\psi _2\):
Theorem 32
If \(AB = BA\), \(CD = DC\) and the matrices \(C-I\) and \(D-I\) are invertible, then the following contiguous matrix relations are satisfied by the Humbert matrix functions \(\psi _1\) and \(\psi _2\):
Proof
From the definition of the Humbert matrix function \(\psi _{1}\) in (21), we have
This proves (152). The other three contiguous relations can be proved similarly. We remark that a particular case of Eq. (152) appears in Rida et al. (2010), where all the matrices involved are commuting. \(\square \)
References
Abd-Elmageed, H., Abdalla, M., Abul-Ez, M., Saad, N.: Some results on the first Appell matrix function. Linear Multilinear Algebra (2018). https://doi.org/10.1080/03081087.2018.1502254
Altin, A., Cekim, B., Sahin, R.: On the matrix versions of Appell hypergeometric functions. Quaest. Math. 37(1), 31–38 (2014)
Batahan, R.S., Metwally, M.S.: Differential and integral operators on Appell’s matrix functions. Andal. Soci. Appl. Sci. 3, 7–25 (2009)
Brychkov, A.Y.: Reduction formulas for the Appell and Humbert functions. Integral Transforms Spec. Funct. 28(1), 22–38 (2017)
Brychkov, A.Y., Kim, Y.S., Rathie, A.K.: On new reduction formulas for the Humbert functions \(\psi _2\), \(\phi _2\) and \(\phi _3\). Integral Transforms Spec. Funct. 28(5), 350–360 (2017)
Choi, J., Rathie, A.K.: Certain summation formulas for Humbert’s double hypergeometric series \(\Psi _{2}\) and \(\Phi _{2}\). Commun. Korean Math. Soc. 30(4), 439–446 (2015)
Constantine, A.G., Muirhead, R.J.: Partial differential equations for hypergeometric functions of two argument matrices. J. Multivar. Anal. 2, 332–338 (1972)
Defez, E., Jodar, L.: Chebyshev matrix polynomials and second order matrix differential equations. Util. Math. 61, 107–123 (2002)
Dunford, N., Schwartz, J.: Linear Operators, Part-I. Addison-Wesley, New York (1957)
Dwivedi, R., Sahai, V.: On the hypergeometric matrix functions of two variables. Linear Multilinear Algebra 66(9), 1819–1837 (2018)
Dwivedi, R., Sahai, V.: On the hypergeometric matrix functions of several variables. J. Math. Phys. 59(2), 023505 (2018)
Dwivedi, R., Sahai, V.: A note on the Appell matrix functions. Quaest. Math. (2019). https://doi.org/10.2989/16073606.2019.1577309
Golub, G.H., Van Loan, C.F.: Matrix Computations. Johns Hopkins Univ. Press, Baltimore (1989)
Humbert, P.: The confluent hypergeometric functions of two variables. Proc. R. Soc. Edinburgh 41, 73–96 (1920)
James, A.T.: Special functions of matrix and single argument in statistics. Theory and application of special functions (Proc. Advanced Sem., Math. Res. Center, Univ. Wisconsin, Madison, Wis., 1975), pp. 497–520. Math. Res. Center, Univ. Wisconsin, Publ. No. 35, Academic Press, New York (1975)
Jodar, L., Cortes, J.C.: Some properties of gamma and beta matrix functions. Appl. Math. Lett. 11(1), 89–93 (1998a)
Jodar, L., Cortes, J.C.: On the hypergeometric matrix function. Proceedings of the VIIIth Symposium on orthogonal polynomials and their applications (Seville, 1997). J. Comput. Appl. Math. 99(1–2), 205–217 (1998b)
Jodar, L., Cortes, J.C.: Closed form general solution of the hypergeometric matrix differential equation. (2000)
Mathai, A.M.: Some results on functions of matrix argument. Math. Nachr. 84, 171–177 (1978)
Mathai, A.M.: Appell's and Humbert's functions of matrix arguments. Linear Algebra Appl. 183, 201–221 (1993)
Metwally, M.S., Mohamed, M.T., Shehata, A.: On Horn matrix function \(H_{2}\) of two complex variables under differential operator. Advances in Linear Algebra and Matrix Theory (ALAMT) 8(2), 96–110 (2018)
Mohamed, M.T., Shehata, A.: A study of Appell’s matrix functions of two complex variables and some properties. Adv. Appl. Math. Sci. 9(1), 23–33 (2011)
Rashwan, R.A., Metwally, M.S., Mohamed, M.T., Shehata, A.: Certain Kummer’s matrix function of two complex variables under certain differential and integral operators. Thai J. Math. 11(3), 725–740 (2013)
Rashwan, R.A., Metwally, M.S., Mohamed, M.T., Shehata, A.: On composite l(m, n)-Kummer’s matrix functions of two complex variables. Thai J. Math. 14(1), 69–81 (2016)
Rida, S.Z., Abul-Dahab, M., Saleem, M.A., Mohamed, M.T.: On Humbert matrix function \(\Psi _{1}(A, B; C, C^{\prime };z, w)\) of two complex variables under differential operator. Int. J. Ind. Math. 32, 167–179 (2010)
Shehata, A.: On \(p\) and \(q\)-Horn’s matrix function of two complex variables. Appl. Math. 2(12), 1437–1442 (2011)
Shehata, A.: Certain \(pl(m, n)\)-Kummer matrix function of two complex variables under differential operator. Appl. Math. 4(1), 91–96 (2013)
Shehata, A.: New kinds of hypergeometric matrix functions. Br. J. Math. Comput. Sci. 5(1), 92–103 (2015)
Srivastava, H.M., Karlsson, P.W.: Multiple Gaussian Hypergeometric Series. Ellis Horwood Limited, Chichester (1985)
Van Loan, C.: The sensitivity of the matrix exponential. SIAM J. Numer. Anal. 14(6), 971–981 (1977)
Acknowledgements
The authors thank the referee for valuable suggestions that led to a better presentation of the paper. The financial assistance provided to the second author in the form of a Senior Research Fellowship from Council of Scientific and Industrial Research, India is gratefully acknowledged.
Çekim, B., Dwivedi, R., Sahai, V. et al. Certain Integral Representations, Transformation Formulas and Summation Formulas Related to Humbert Matrix Functions. Bull Braz Math Soc, New Series 52, 213–239 (2021). https://doi.org/10.1007/s00574-020-00198-6