Abstract
A generalized inverse of an m × n matrix A over a pseudoring is an n × m matrix G satisfying AGA = A. In this paper we give a characterization of matrices having generalized inverses. We also introduce and study a space decomposition of a matrix, and prove that a matrix is decomposable if and only if it has a generalized inverse. Finally, we establish necessary and sufficient conditions for a matrix to possess various types of g-inverses, including the Moore–Penrose inverse.
1 Introduction
Recall from [12] that a pseudoring is a triple (D,+,⋅), where D is a completely ordered set with a minimal element 0, and + and ⋅ are two binary operations on D satisfying:
(D∗,⋅) is a group with identity element 1 (where D∗ = D ∖0);
“ ⋅ ” is distributive with respect to “+”.
The following algebras are all pseudorings:
(1) \((\mathbb {R}\cup \{-\infty \},\max \limits ,+)\) (tropical algebra);

(2) \((\mathbb {Z}\cup \{-\infty \},\max \limits ,+)\) (extended integers);

(3) \((\mathbb {Q}\cup \{-\infty \},\max \limits ,+)\) (extended rationals);

(4) \((\mathbb {R}_{+}\cup \{0\},\max \limits ,\cdot )\) (max algebra);

(5) \((\mathbb {B},+,\cdot )\) (2-element Boolean algebra).
Matrix theory over the above pseudorings has broad applications in combinatorial optimization, control theory, automata theory and many other areas of science (see [2,3,4, 6, 8]). As usual, the set of all m × n matrices over a pseudoring D is denoted by Mm×n(D). In particular, we will use Mn(D) instead of Mn×n(D). The operations “ + ” and “ ⋅ ” on D induce corresponding operations on matrices in the obvious way. It is easy to see that (Mn(D),⋅) is a semigroup. For brevity, we will write AC in place of A ⋅ C. Unless otherwise stated, “matrix” always means a matrix over a pseudoring in the remainder of this paper.
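As a concrete illustration (helper names are ours, not the paper's), the induced matrix operations can be sketched over the tropical pseudoring \((\mathbb {R}\cup \{-\infty \},\max ,+)\), where matrix "addition" is entrywise max and the matrix product replaces sum-of-products by max-of-sums:

```python
# A minimal sketch (our own helper names) of matrix arithmetic over the
# tropical pseudoring (R ∪ {-inf}, max, +): "+" is entrywise max, and in
# the matrix product, sum-of-products becomes max-of-sums.
NEG_INF = float("-inf")  # the minimal element 0 of this pseudoring

def mat_add(A, B):
    # entrywise pseudoring addition (here: max)
    return [[max(a, b) for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_mul(A, B):
    # (AB)_ij = max_k (a_ik + b_kj)
    return [[max(A[i][k] + B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[0, 1], [NEG_INF, 0]]
I2 = [[0, NEG_INF], [NEG_INF, 0]]  # tropical identity: 1 = 0, 0 = -inf
print(mat_mul(A, I2) == A)   # True: I2 acts as the identity
print(mat_add(A, A) == A)    # True: "+" (max) is idempotent
```

Note that the tropical identity matrix carries 0 (the multiplicative identity of max-plus) on the diagonal and −∞ (the additive identity) elsewhere.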
For a matrix A in Mm×n(D), suppose that there exists a matrix G in Mn×m(D) such that AGA = A. Then A is called regular, and G is called a generalized inverse of A. The structure of regular matrices over various algebraic systems has been considered in a series of papers. In 1975, Rao and Rao [11] studied the generalized inverses of Boolean matrices. In 1981, Hall and Katz [7] characterized the generalized inverses of nonnegative integer matrices. In 1998, Bapat [1] obtained the generalized inverses of nonnegative real matrices. And in 2015, Kang and Song [10] determined the general form of matrices over the max algebra having generalized inverses.
The main purpose of this paper is to study regular matrices and their generalized inverses over a pseudoring. The forms of regular matrices obtained in this paper extend the corresponding results in [10] and [11]. The paper is divided into five sections. In Section 2 we introduce some preliminary notions and notation. A normal form of an idempotent matrix is given in Section 3. After characterizing idempotent matrices, we obtain the general form of matrices which have generalized inverses. In Section 4 we define a space decomposition of a matrix and prove that a matrix has a g-inverse if and only if it has a space decomposition. In Section 5 we establish necessary and sufficient conditions for the existence of the Moore–Penrose inverse and other types of g-inverses of a matrix.
2 Preliminaries
The following notion of linear dependence can be found in [12]. We will be interested in the space Dn consisting of n-tuples x with entries in D. We write xi for the i-th component of x. The set Dn admits an addition and a scaling action of D given by (x + y)i = xi + yi and (λx)i = λxi, respectively. These operations give Dn the structure of a D-pseudomodule. Each element of this pseudomodule is called a vector. A vector α in Dn is called a linear combination of a subset \(\{\alpha _{1}, \dots , \alpha _{k}\}\) of Dn, if there exist \(d_{1},\dots ,d_{k}\in D\) such that \(\alpha = d_{1}\alpha _{1}+\cdots +d_{k}\alpha _{k}\).
For a subset S of Dn, let span(S) denote

$$ \text{span}(S)=\left\{\sum\limits_{i=1}^{k} d_{i}\alpha_{i} ~\Big|~ d_{i}\in D,~ \alpha_{i}\in S,~ k\in\mathbb{N}\right\}, $$
where \(\mathbb N\) denotes the set of all natural numbers. The set S is called linearly dependent if there exists a vector α ∈ S such that α is a linear combination of elements in S ∖{α}. Otherwise, S is called linearly independent. A subset {αi | i ∈ I} of a subpseudomodule \(\mathcal {V}\) of Dn is called a basis of \(\mathcal {V}\) if \(\text {span}(\{\alpha _{i} ~|~ i\in I\})=\mathcal {V}\) and {αi | i ∈ I} is linearly independent.
In the sequel, the following notions and notation are needed for us.
- In denotes the identity matrix, i.e., the diagonal entries are all 1 and the other entries are all 0.
- An n × n matrix A is said to be invertible if there exists an n × n matrix B such that AB = BA = In. In this case, B is called an inverse of A and is denoted by A− 1.
- An n × n matrix is called a monomial matrix if it has exactly one nonzero entry in each row and each column.
- An n × n matrix is called a permutation matrix if it is obtained from the identity matrix by reordering its rows and/or columns.
- O denotes the zero matrix, i.e., the matrix whose entries are all 0.
Proposition 2.1
A matrix A is invertible if and only if A is a monomial matrix.
Proof
Suppose that A = (aij)n×n is an invertible matrix. Then there exists a matrix B = (bij)n×n such that
Thus
It follows that there exists 1 ≤ k ≤ n, such that aikbki = 1, and so
Since for any j≠i,
we have
Since for any k≠j,
then
By (2.1), (2.2) and (2.3), we obtain that A is a monomial matrix.
It is easy to see that the converse is true. □
Every permutation matrix is a monomial matrix, and the inverse of a permutation matrix is its transpose.
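Proposition 2.1 can be checked concretely over the tropical pseudoring: there a monomial matrix has exactly one finite entry in each row and each column, and its inverse is obtained by transposing and negating the finite entries. A hedged sketch (the helper names and example matrix are ours):

```python
# Proposition 2.1 in the tropical pseudoring (R ∪ {-inf}, max, +): a
# monomial matrix (one finite entry per row and per column) is invertible,
# and its inverse transposes and negates the finite entries. Sketch only.
NEG_INF = float("-inf")

def trop_mul(A, B):
    return [[max(A[i][k] + B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def monomial_inverse(A):
    # G_ji = -a_ij wherever a_ij is finite; NEG_INF (tropical zero) elsewhere
    n = len(A)
    G = [[NEG_INF] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if A[i][j] != NEG_INF:
                G[j][i] = -A[i][j]
    return G

A = [[NEG_INF, 3], [5, NEG_INF]]          # monomial matrix
I2 = [[0, NEG_INF], [NEG_INF, 0]]         # tropical identity
G = monomial_inverse(A)
print(trop_mul(A, G) == I2 and trop_mul(G, A) == I2)  # True
```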
The column space (row space, resp.) of an m × n matrix A is the subpseudomodule of Dm (Dn, resp.) spanned by its columns (rows, resp.) and is denoted by Col(A) (Row(A), resp.). By Corollary 4.7 in [12], every finitely generated pseudomodule has a basis. For an m × n matrix A, let ai∗ and a∗j denote the i-th row and the j-th column of A, respectively. As a consequence, by Theorem 5 in [12] we immediately have
Lemma 2.2
Let \(\{\mathbf {a}_{\ast i_{1}},\dots , \mathbf {a}_{\ast i_{r}}\}\) and \(\{\mathbf {a}_{\ast j_{1}}, \dots , \mathbf {a}_{\ast j_{r}}\}\) be any two bases of Col(A). Then there exists an r × r monomial matrix M such that
Lemma 2.2 tells us that the cardinalities of any two bases for Col(A) are the same. This cardinality is called the column rank of A, denoted by c(A). Dually, we can define the row rank of A, denoted by r(A). The column rank and the row rank of a matrix need not be equal in general. Let A be an m × n matrix. If c(A) = r(A) = r, then r is called the rank of A. If c(A) = n and r(A) = m, then A is called nonsingular, and singular otherwise.
Let A = (aij) be an n × n matrix. The submatrix
is called a basis submatrix of A, if \(\{\mathbf {a}_{\ast i_{1}}, \dots , \mathbf {a}_{\ast i_{r}}\}\) is a basis of Col(A) and \(\{\mathbf {a}_{j_{1}\ast }, \dots , \mathbf {a}_{j_{s}\ast }\}\) is a basis of Row(A). If A1 and A2 are both basis submatrices of A, then there exist monomial matrices M1 and M2 such that A1 = M1A2M2. We denote the basis submatrix by \(\bar A\). It is easy to see that \(c(A)=c(\bar A)=\) the number of columns of \(\bar A\), and that \(r(A)=r(\bar A)=\) the number of rows of \(\bar A\). We will also need the Rao “normal form” of a matrix.
Lemma 2.3
([5] Lemma 101 Rao “normal form”) Let A be an m × n matrix. Then there exists an m × m permutation matrix Q and an n × n permutation matrix P such that
where U is an r × (n − r) matrix and V is an (m − s) × s matrix.
In the following we will introduce and study the equivalences \({\mathscr{R}}^{\ast }\) and \({\mathscr{L}}^{\ast }\) on the set M(D) (\(= \cup _{m=1}^{\infty } \cup _{n=1}^{\infty } M_{m \times n}(D)\)) of all matrices. The equivalences \({\mathscr{R}}^{\ast }\) and \({\mathscr{L}}^{\ast }\) are, respectively, defined by: \(A\,{\mathscr{R}}^{\ast }\,B\) if and only if A = BX and B = AY for some matrices X and Y; and \(A\,{\mathscr{L}}^{\ast }\,B\) if and only if A = XB and B = YA for some matrices X and Y.
It is easy to see that if \(A{\mathscr{R}}^{\ast } B\) (\(A {\mathscr{L}}^{\ast } B\), resp.), then the number of rows (columns, resp.) of A is equal to that of B. In particular, the restrictions of \({\mathscr{R}}^{\ast }\) and \({\mathscr{L}}^{\ast }\) to Mn(D) coincide with Green’s relations \({\mathscr{R}}\) and \({\mathscr{L}}\) on the semigroup (Mn(D),⋅), respectively, which play a key role in the algebraic theory of semigroups.
Proposition 2.4
Let A and B be matrices in M(D). Then \(A{\mathscr{R}}^{\ast } B\) if and only if Col(A) = Col(B), and \(A{\mathscr{L}}^{\ast } B\) if and only if Row(A) = Row(B).
Proof
Suppose that \(A{\mathscr{R}}^{\ast } B\). If A is an m × n matrix and B is an m × q matrix, then by the definition of \({\mathscr{R}}^{\ast }\), there exists a q × n matrix X such that BX = A. Now, it follows that \(\text {Col}(BX)=\text {Col}(A)\subseteq \text {Col}(B)\), since the columns of BX are contained in Col(B). Dually, \(\text {Col}(B)\subseteq \text {Col}(A)\). Thus Col(A) = Col(B).
Conversely, suppose that Col(A) = Col(B). If A is an m × n matrix and B is an m × q matrix, then each column of A is in Col(A) and so is in Col(B), since the pseudomodule D has a multiplicative identity 1. This implies that each column of A can be written as a linear combination of the columns of B. Thus there exists a q × n matrix X such that BX = A. Similarly, we can prove that there exists an n × q matrix Y such that AY = B. Hence, we have shown that \(A{\mathscr{R}}^{\ast } B\).
Dually, we can show that \(A{\mathscr{L}}^{\ast } B\) if and only if Row(A) = Row(B). □
We now immediately deduce the following result.
Corollary 2.5
Let A and B be matrices in \(M(\mathbb B)\). Then
For an m × n matrix A, we will introduce various types of inverses of A. Consider an n × m matrix G in the following equations:
- (G-1) AGA = A;
- (G-2) GAG = G;
- (G-3) (AG)T = AG;
- (G-4) (GA)T = GA.
A matrix G satisfying (G-1) is called a generalized inverse (g-inverse for short) of A. If G satisfies (G-1) and (G-3) ((G-1) and (G-4), resp.), then it is called a {1,3}-g-inverse ({1,4}-g-inverse, resp.) of A. Finally, if G satisfies all from (G-1) to (G-4), then it is called a Moore–Penrose inverse of A.
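For a quick sanity check of (G-1) over the 2-element Boolean pseudoring, note that any idempotent matrix is its own g-inverse, since A·A·A = A²·A = A. A toy verification (the matrices below are our own illustrative choices):

```python
# A toy check of (G-1) over the 2-element Boolean pseudoring (B, or, and).
# The matrix A is idempotent (A·A = A), so G = A satisfies AGA = A and
# A is regular. Example matrices are our own choices, not from the paper.
def bool_mul(A, B):
    # Boolean matrix product: (AB)_ij = OR_k (a_ik AND b_kj)
    return [[int(any(A[i][k] and B[k][j] for k in range(len(B))))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 1], [0, 1]]
assert bool_mul(A, A) == A   # A is idempotent
G = A
print(bool_mul(bool_mul(A, G), A) == A)  # True: AGA = A, so G is a g-inverse
```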
We note that if G1 and G2 are any two g-inverses of A, then G1 + G2 is also a g-inverse of A, since \(A(G_{1}+G_{2})A=AG_{1}A+AG_{2}A=A+A=A\).
Also, it is well known that G is a {1,3}-g-inverse of A if and only if GT is a {1,4}-g-inverse of AT.
Proposition 2.6
Let A and G be an m × n matrix and an n × m matrix, respectively. Then the following statements are equivalent:
- (i) AGA = A;
- (ii) (AG)2 = AG and \(A{\mathscr{R}}^{\ast } AG\);
- (iii) (GA)2 = GA and \(A{\mathscr{L}}^{\ast } GA\).
Proof
(i) ⇒ (ii). Assume that (i) holds. Then (AG)2 = (AGA)G = AG. Also, it is clear that \(A{\mathscr{R}}^{\ast } AG\).
(ii) ⇒ (i). Assume that (ii) holds. Then A = AGX for some m × n matrix X. This implies that AGA = AGAGX = AGX = A since (AG)2 = AG. Thus (i) holds.
Similarly, we can show that (i) ⇔ (iii). □
3 Normal Form of an Idempotent Matrix
In this section, we will give the Rao normal form of an idempotent matrix, which plays a very important role in studying the maximal subgroups of the semigroup Mn(D). The following results are inspired by Kang and Song [10], who studied idempotent matrices over the \(\max \limits \)-algebra.
Define the partial order ≤ on Mm×n(D) by A ≤ B if and only if A + B = B.
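Over the Boolean pseudoring the condition A + B = B reduces to entrywise comparison on {0,1}; a small illustrative check (example matrices are ours):

```python
# The order A ≤ B ⟺ A + B = B, checked over the Boolean pseudoring,
# where "+" is entrywise OR. Illustrative example matrices.
def bool_add(A, B):
    return [[int(a or b) for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[1, 0], [0, 0]]
B = [[1, 1], [0, 1]]
print(bool_add(A, B) == B)  # True: A ≤ B
print(bool_add(B, A) == B)  # True: "+" is commutative, so the check is symmetric
```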
Lemma 3.1
Let A, B, C be m × n matrices, let X1, Y1 be n × p matrices and let X2, Y2 be p × m matrices. Then the following statements hold.
- (i) If A + B = C, then A ≤ C and B ≤ C.
- (ii) If X1 ≤ Y1, then AX1 ≤ AY1. If X2 ≤ Y2, then X2A ≤ Y2A.
Proof
(i) Suppose that A + B = C. Then we have
Hence A ≤ C. Similarly, B ≤ C.
(ii) Suppose that X1 ≤ Y1. Then X1 + Y1 = Y1. Thus it follows that
and so AX1 ≤ AY1. Similarly, if X2 ≤ Y2, then X2A ≤ Y2A. □
Lemma 3.2
Let E = (eij) be an n × n idempotent matrix. Then eii ≤ 1 for all 1 ≤ i ≤ n.
Proof
Let E = (eij)n×n be an idempotent matrix. Then for any 1 ≤ i ≤ n, \(e_{ii}={\sum }_{k=1}^{n}e_{ik}e_{ki}\geq e_{ii}e_{ii}\).
This implies that \(e_{ii}\leq e_{ii}{e^{-1}_{ii}}=\mathbf {1}\). □
Lemma 3.3
Let E = (eij) be an n × n idempotent matrix. If eii < 1 for some i ∈{1,2,…,n}, then the i-th column (row, resp.) of E is a linear combination of the remaining columns (rows, resp.). Furthermore, the matrix obtained from E by deleting the i-th column and the i-th row is an (n − 1) × (n − 1) idempotent matrix.
Proof
Let E = (eij)n×n be an idempotent matrix. Suppose that eii < 1 for some 1 ≤ i ≤ n. Without loss of generality, we assume that e11 < 1. Partition E as \(\left [ \begin {array}{ccc} e_{11} & E_{12}\\ E_{21} & E_{22} \end {array} \right ]\). Then we have
This implies that
since e11 < 1. Thus it follows that
The equation (3.1) ((3.2), resp.) tells us that the first column (the first row, resp.) of E is a linear combination of the remaining columns (rows, resp.). By Lemma 3.1 and (3.3), we have
Thus it follows by (3.4) and Lemma 3.1 that \(E_{21}E_{12}=E_{21}E_{12}E_{22}\leq E_{22}^{2}\), since E12E22 = E12. We therefore have
by Lemma 3.1. Thus (3.4) and (3.5) tell us that \(E_{22}^{2}=E_{22}\). □
The above lemma tells us that if E = (eij) is an n × n idempotent matrix and eii < 1 for some 1 ≤ i ≤ n, then c(E) < n and r(E) < n. Thus by Lemmas 3.2 and 3.3, we immediately have the following result.
Corollary 3.4
All main diagonal entries of a nonsingular idempotent matrix are 1.
Lemma 3.5
Let E be an n × n idempotent matrix whose main diagonal entries are all 1. Then the i-th row of E is a linear combination of the remaining rows if and only if the i-th column of E is a linear combination of the remaining columns. Furthermore, the matrix obtained from E by deleting the i-th column and the i-th row is an (n − 1) × (n − 1) idempotent matrix.
Proof
Let E = (eij) be an n × n idempotent matrix with eii = 1 for all 1 ≤ i ≤ n.
Suppose that the i-th row of E is a linear combination of the remaining rows. Without loss of generality, we assume that the first row of E is a linear combination of the remaining rows. Partition E as \( \left [ \begin {array}{ccc} \mathbf {1} & E_{12}\\ E_{21} & E_{22} \end {array} \right ]\). Then we have
Thus by Lemma 3.1 we have
On the other hand, it is easy to see that
since eii = 1 for all 2 ≤ i ≤ n. Thus by Lemma 3.1, we can show that
Hence by summing (3.6) and (3.7), we have
Since \(\left [\begin {array}{ccc} \mathbf {1} & E_{12} \end {array} \right ]\) is a linear combination of the rows of \(\left [\begin {array}{ccc} E_{21}& E_{22} \end {array} \right ]\), there exists a row vector X such that \(\left [\begin {array}{ccc} \mathbf {1}& E_{12} \end {array} \right ]=X \left [\begin {array}{ccc} E_{21}& E_{22} \end {array} \right ]\). That is to say,
Thus it follows from (3.8) and (3.9) that 1 = XE21 = XE22E21 = E12E21. Therefore,
This shows that the first column of E is a linear combination of the remaining columns.
Dually, we can show that the converse is true. This completes our proof. □
By Lemmas 3.3 and 3.5 we immediately have
Corollary 3.6
If E is an idempotent matrix, then c(E) = r(E).
Lemma 3.7
Let A and B be n × n matrices. If all main diagonal entries of A are 1 and ABA ≤ A, then B ≤ A.
Proof
Suppose that A = (aij) and B = (bij) are n × n matrices. Assume that ABA = (cij). If all main diagonal entries of A are 1 and ABA ≤ A, then for any 1 ≤ i, j ≤ n,
That is to say, B ≤ A. □
The following gives the normal form of an idempotent matrix.
Theorem 3.8
Let E be an n × n matrix. Then E is an idempotent matrix of rank r if and only if there exists an n × n permutation matrix P such that
where \(\bar E\) is a basis submatrix of E and is an r × r nonsingular idempotent matrix, U is an r × (n − r) matrix and V is an (n − r) × r matrix such that \(UV\leq \bar E\).
Proof
Suppose that E is an n × n matrix. If E is an idempotent matrix of rank r (r < n), then by Lemmas 3.3 and 3.5 we have that the i-th row of E is a linear combination of the remaining rows if and only if the i-th column of E is a linear combination of the remaining columns. Thus by carrying out the same row permutations and column permutations of E, we can find a matrix \(E^{\prime }\) such that the first r columns and the first r rows of \(E^{\prime }\) are linearly independent. That is to say, there exists an n × n permutation matrix P such that
where \(\bar E\) is a basis submatrix of E, and X, Y and Z are matrices of appropriate sizes. Also, by Lemmas 3.3 and 3.5 we can show that \(\bar E\) is an r × r nonsingular idempotent matrix. We have \(\left [\begin {array}{ccc} X \\ Z \end {array} \right ]=\left [\begin {array}{ccc} \bar E \\ Y \end {array} \right ]U\) for some r × (n − r) matrix U, since each column of \(\left [\begin {array}{ccc} X \\ Z \end {array} \right ]\) is a linear combination of the columns of \(\left [\begin {array}{ccc} \bar E \\ Y \end {array} \right ]\). Dually, we can show that \(\left [\begin {array}{ccc} Y & YU \end {array} \right ]=V\left [\begin {array}{ccc} \bar E & \bar EU \end {array}\right ]\) for some (n − r) × r matrix V. Hence, we have
This is a Rao “normal form” of E. Since E2 = E, we have that \(E^{\prime } = E^{\prime 2}\). That is to say,
Hence, \(\bar E=\bar E^{2}+ \bar EUV\bar E\). Thus by Lemma 3.1, we can show that \(\bar EUV\bar E\leq \bar E\). By Corollary 3.4 and Lemma 3.7, we have that \(UV\leq \bar E\).
The converse is easily verified. □
Corollary 3.9
Let E be an n × n matrix. Then E is a symmetric idempotent matrix of rank r if and only if there exists an n × n permutation matrix P such that
where \(\bar E\) is a basis submatrix of E and is an r × r symmetric nonsingular idempotent matrix, U is an r × (n − r) matrix and V is an (n − r) × r matrix such that \(UV\leq \bar E\).
4 Generalized Inverses of Regular Matrices
In this section we will give a characterization of regular matrices. Also, we will define a space decomposition of a matrix and prove that a matrix A is regular if and only if A is space decomposable.
Theorem 4.1
Let A be an m × n matrix. Then A is regular if and only if there exists an m × m permutation matrix P and an n × n permutation matrix Q such that
where F is a nonsingular idempotent matrix, M is a diagonal monomial matrix and C, V are matrices of appropriate sizes.
Proof
Suppose that A is an m × n matrix. If A is regular, then there exists an n × m matrix G such that G is a g-inverse of A. Thus AG is idempotent and \(A{\mathscr{R}}^{\ast } AG\) by Proposition 2.6. Let the rank of AG be r. Now, by Theorem 3.8, there exists an m × m permutation matrix P such that \(AG=P\left [\begin {array}{ccc} F & FU\\ VF & VFU \end {array} \right ]P^{T}\), where F is an r × r nonsingular idempotent matrix, U and V are matrices of appropriate sizes, and so
Notice that the set of the first r columns of AGP is a basis of Col(AGP). Since \(A{\mathscr{R}}^{\ast } AG{\mathscr{R}}^{\ast }(AGP)\), it follows from (4.1), Lemma 2.2 and Proposition 2.4 that there exists a permutation matrix Q such that
where M is an r × r monomial matrix and C = X + UY.
Conversely, it is easy to verify that \(Q^{T}\left [\begin {array}{ccc} M^{-1}& O\\ O& O \end {array}\right ]P^{T}\) is a g-inverse of A. □
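The converse direction of Theorem 4.1 can be checked on a toy Boolean instance (all specific matrices below are our own illustrative choices): taking F = M = V = C = (1) and P = Q = I gives A = [[1,1],[1,1]], and the candidate g-inverse \(Q^{T}\left [\begin {array}{cc} M^{-1}& O\\ O& O \end {array}\right ]P^{T}\) becomes [[1,0],[0,0]].

```python
# A toy Boolean instance of the converse of Theorem 4.1: with
# F = M = V = C = (1) and P = Q = I, the theorem's block form gives
# A = [[1,1],[1,1]], and [[1,0],[0,0]] is a g-inverse. Illustrative only.
def bool_mul(A, B):
    return [[int(any(A[i][k] and B[k][j] for k in range(len(B))))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 1], [1, 1]]
G = [[1, 0], [0, 0]]
print(bool_mul(bool_mul(A, G), A) == A)  # True: AGA = A
```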
As a consequence, we have
Corollary 4.2
If A is a regular matrix, then c(A) = r(A).
Notice that for a matrix A, A is regular if and only if AT is regular, since G is a g-inverse of A if and only if GT is a g-inverse of AT.
A nonzero matrix A is said to be space decomposable if there exist matrices L and R such that

$$ A=LR,\qquad A{\mathscr{R}}^{\ast} L,\qquad A{\mathscr{L}}^{\ast} R. \tag{4.2} $$
The decomposition LR will be called a space decomposition of A. Thus Col(A) = Col(L) and Row(A) = Row(R). In other words, A factors into two matrices, one of which preserves its column space and the other its row space.
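For intuition, here is a small Boolean space decomposition (our own toy example): A = LR, where the single column of L spans Col(A) and the single row of R spans Row(A).

```python
# A small Boolean space decomposition A = L·R (toy example): the single
# column of L spans Col(A) and the single row of R spans Row(A), so
# Col(A) = Col(L) and Row(A) = Row(R).
def bool_mul(A, B):
    return [[int(any(A[i][k] and B[k][j] for k in range(len(B))))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 1], [1, 1]]
L = [[1], [1]]   # 2x1
R = [[1, 1]]     # 1x2
print(bool_mul(L, R) == A)            # True: A = LR
print(bool_mul(A, [[1], [0]]) == L)   # True: L = AX, witnessing Col(L) ⊆ Col(A)
```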
Proposition 4.3
A nonzero matrix is regular if and only if it is space decomposable.
Proof
Suppose that A is a nonzero m × n matrix.
If A is a regular matrix of rank r, then by Theorem 4.1, A is of the form
where P and Q are permutation matrices and M is a monomial matrix. Let
Then
This implies that \(A{\mathscr{R}}^{\ast } L_{A}\) and \(A{\mathscr{L}}^{\ast } R_{A}\). Thus LA and RA satisfy the condition (4.2) and so A is space decomposable.
Conversely, assume that A is space decomposable. Then it follows from (4.2) that there exist matrices L and R such that A = LR, \(A{\mathscr{R}}^{\ast } L\) and \(A{\mathscr{L}}^{\ast } R\). This implies that L = AX and R = Y A for some matrices X and Y. Hence \(A=LR=(AX)(YA)=A(XY)A\),
and so A is regular. □
Notice that a matrix of rank less than 3 is space decomposable.
Corollary 4.4
If the rank of a matrix A is less than 3, then A is regular.
In [9], Johnson and Kambites proved that all 2 × 2 tropical matrices are regular. Corollary 4.4 extends this result.
Lemma 4.5
Let A be a nonsingular idempotent matrix. If G is a g-inverse of A, then AG = GA = A.
Proof
Suppose that A = (aij) is an n × n nonsingular idempotent matrix and that G is a g-inverse of A. Then AGA = A. It follows from Proposition 2.6 that AG is idempotent and \(A{\mathscr{R}}^{\ast } AG\). Since A is nonsingular, we can show by Corollary 2.5 that AG is also nonsingular. Thus by Corollary 3.4 it follows that the main diagonal entries of A and AG are all 1. This implies by Lemma 3.7 that G ≤ A, since AGA = A. Hence
by Lemma 3.1(ii). Since (AG)A(AG) = AG, by Lemma 3.7 we have
Equation (4.4) and (4.5) tell us that AG = A. Notice that AT is idempotent and ATGTAT = AT. Similarly, we have that ATGT = AT, and hence GA = A. □
Proposition 4.6
Let A be of the form in Theorem 4.1 and let LARA be the space decomposition of A in (4.3). Then for any m × s matrix L and any s × n matrix R, LR is a space decomposition of A if and only if L = LAM1, R = M2RA and FM1M2 = M1M2F = F for some r × s matrix M1 and some s × r matrix M2.
Proof
If A is an m × n regular matrix of rank r, then by Theorem 4.1, A is of the form
where P and Q are permutation matrices, M is a monomial matrix and F is a nonsingular idempotent matrix. Let LARA be the space decomposition of A in (4.3). Then it follows from (4.2) that A = LARA, \(A{\mathscr{R}}^{\ast } L_{A}\) and \(A{\mathscr{L}}^{\ast } R_{A}\).
For any m × s matrix L and any s × n matrix R, if LR is a space decomposition of A, then A = LR, \(L{\mathscr{R}}^{\ast } A\) and \(R{\mathscr{L}}^{\ast } A\). This implies that \(L{\mathscr{R}}^{\ast } L_{A}\) and \(R{\mathscr{L}}^{\ast } R_{A}\) and so L = LAM1 and R = M2RA for some r × s matrix M1 and some s × r matrix M2. Now we have
Since P and Q are permutation matrices, FM = FM1M2FM. It follows that F = FM1M2F, since M is a monomial matrix. Notice that F is a nonsingular idempotent matrix. By Lemma 4.5 we have that FM1M2 = M1M2F = F.
Conversely, assume that there exists an r × s matrix M1 and an s × r matrix M2 such that L = LAM1, R = M2RA and FM1M2 = M1M2F = F. Then we have
and
This implies that
and so LR is a space decomposition of A. □
Corollary 4.7
If LR is a space decomposition of a regular matrix A, then both L and R are regular.
Proof
Let A be an m × n regular matrix of rank r. Suppose that LR is a space decomposition of A. By Proposition 4.6 it follows that there exist matrices M1 and M2 such that \(L=P\left [\begin {array}{ccc} F \\ VF \end {array}\right ]M_{1}\), \(R=M_{2}\left [\begin {array}{ccc} FM & FC \end {array} \right ]Q\) and FM1M2 = M1M2F = F, where P and Q are permutation matrices, M is a monomial matrix. Thus we have
are g-inverses of L and R, respectively, and so both L and R are regular. □
5 Other Types of g-Inverses
Notice that G is a {1,3}-g-inverse of A if and only if AGA = A and (AG)T = AG.
Theorem 5.1
Let A be an m × n matrix. The following statements are equivalent:
- (i) A has a {1,3}-g-inverse.
- (ii) There exists an m × m permutation matrix P and an n × n permutation matrix Q such that
$$ A=P\left[\begin{array}{ccc} SM & SZ\\ VSM & VSZ \end{array} \right]Q, $$
where S is a symmetric nonsingular idempotent matrix, M is a diagonal monomial matrix and V, Z are matrices such that VTV ≤ S.
- (iii) \(A{\mathscr{L}}^{\ast } A^{T}A\).
Proof
(i) ⇒ (ii). Let an n × m matrix G be a {1,3}-g-inverse of A. Then \(A{\mathscr{R}}^{\ast } AG\) and AG is a symmetric idempotent matrix by Proposition 2.6. Let the rank of AG be r. It follows from Corollary 3.9 that there exists an m × m permutation matrix P such that \(AG=P\left [\begin {array}{ccc} S & SV^{T}\\ VS & VSV^{T} \end {array} \right ]P^{T}\), and so
where S is a symmetric nonsingular idempotent matrix, V is a matrix such that VTV ≤ S. Notice that the set of the first r columns of AGP is a basis of Col(AGP). Since \(A{\mathscr{R}}^{\ast } AG{\mathscr{R}}^{\ast } AGP\), it follows from Lemma 2.2 and Proposition 2.4 that there exists an n × n permutation matrix Q such that \(A=AGP\left [ \begin {array}{ccc} M & X\\ O & Y \end {array} \right ]Q\), where M is a diagonal monomial matrix. Thus
where Z = X + VTY.
(ii) ⇒ (iii). Now assume that (ii) holds. Since P is a permutation matrix, S is a symmetric idempotent matrix, M is a diagonal monomial matrix and V is a matrix such that VTV ≤ S, it follows that
Hence, since Q is a permutation matrix and M is a monomial matrix,
Thus we have that \(A{\mathscr{L}}^{\ast } A^{T}A\).
(iii) ⇒ (i). If \(A{\mathscr{L}}^{\ast } A^{T}A\), then there exists an m × n matrix G, such that A = GATA. This implies that
We also have
Therefore, GT is a {1,3}-g-inverse of A. □
In Theorem 5.1(ii), we can easily check that \(Q^{T}\left [\begin {array}{ccc} M^{-1}& M^{-1}V^{T}\\ O& O \end {array}\right ]P^{T}\) is a {1,3}-g-inverse of A.
Similarly, we have the following result.
Proposition 5.2
Let A be an m × n matrix. The following statements are equivalent:
- (i) A has a {1,4}-g-inverse.
- (ii) There exists an m × m permutation matrix P and an n × n permutation matrix Q such that
$$ A=P\left[\begin{array}{ccc} SM & SMU\\ WS & WSU \end{array}\right]Q, $$
where S is a symmetric nonsingular idempotent matrix, M is a diagonal monomial matrix and U, W are matrices such that UUT ≤ S.
- (iii) \(A{\mathscr{R}}^{\ast } AA^{T}\).
In Proposition 5.2(ii), we can easily check that \(Q^{T}\left [\begin {array}{ccc} M^{-1}& O\\ U^{T}M^{-1}& O \end {array}\right ]P^{T}\) is a {1,4}-g-inverse of A.
In the following result, we characterize matrices having Moore–Penrose inverses. The proof follows from Theorem 5.1 and Proposition 5.2 and is omitted.
Corollary 5.3
Let A be an m × n matrix. The following statements are equivalent:
- (i) A has a Moore–Penrose inverse.
- (ii) There exists an m × m permutation matrix P and an n × n permutation matrix Q such that
$$ A=P\left[\begin{array}{ccc} SM & SMU\\ VSM & VSMU \end{array}\right]Q, $$
where S is a symmetric nonsingular idempotent matrix, M is a diagonal monomial matrix and V, U are matrices such that VTV ≤ S and UUT ≤ S.
- (iii) \(A{\mathscr{L}}^{\ast } A^{T}A\) and \(A{\mathscr{R}}^{\ast } AA^{T}\).
In Corollary 5.3(ii), we can easily check that \(Q^{T}\left [\begin {array}{ccc} SM^{-1}& SM^{-1}V^{T}\\ U^{T}SM^{-1}& U^{T}SM^{-1}V^{T} \end {array}\right ]P^{T}\) is a Moore–Penrose inverse of A.
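As a closing sanity check (a toy Boolean instance of Corollary 5.3 with S = M = U = V = (1) and P = Q = I, all our own illustrative choices), the formula above returns G = A = [[1,1],[1,1]], and all four Penrose conditions hold:

```python
# A Boolean sanity check of Corollary 5.3: with S = M = U = V = (1) and
# P = Q = I, the Moore-Penrose formula gives G = [[1,1],[1,1]] for
# A = [[1,1],[1,1]], and (G-1)-(G-4) all hold. Toy instance only.
def bool_mul(A, B):
    return [[int(any(A[i][k] and B[k][j] for k in range(len(B))))
             for j in range(len(B[0]))] for i in range(len(A))]

def T(A):  # transpose
    return [list(r) for r in zip(*A)]

A = [[1, 1], [1, 1]]
G = [[1, 1], [1, 1]]
AG, GA = bool_mul(A, G), bool_mul(G, A)
checks = [bool_mul(AG, A) == A,      # (G-1) AGA = A
          bool_mul(GA, G) == G,      # (G-2) GAG = G
          T(AG) == AG,               # (G-3) AG symmetric
          T(GA) == GA]               # (G-4) GA symmetric
print(all(checks))  # True
```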
References
Bapat, R.B.: Structure of a nonnegative regular matrix and its generalized inverse. Linear Algebra Appl. 268, 31–39 (1998)
Butkovič, P.: Max-algebra: the linear algebra of combinatorics? Linear Algebra Appl. 367, 313–335 (2003)
Butkovič, P.: Max-Linear Systems: Theory and Algorithms. Springer, London (2010)
Cuninghame-Green, R.: Minimax Algebra. Lecture Notes in Economics and Mathematical Systems, vol. 166. Springer, Berlin (1979)
Gaubert, S.: Two lectures on max-plus algebra. http://amadeus.inria.fr/gaubert (1998)
Golan, J.S.: Semirings and Their Applications. Kluwer Academic, Dordrecht (1999)
Hall, F.J., Katz, I.J.: Nonnegative integral generalized inverses. Linear Algebra Appl. 39, 23–39 (1981)
Hebisch, U., Weinert, H.J.: Semirings. Algebraic Theory and Applications in Computer Science Series in Algebra, vol. 5. World Scientific, Singapore (1998)
Johnson, M., Kambites, M.: Multiplicative structure of 2 × 2 tropical matrices. Linear Algebra Appl. 435, 1612–1625 (2011)
Kang, K.-T., Song, S.-Z.: Regular matrices and their generalized inverses over the max algebra. Linear Multilinear Algebra 63, 1649–1663 (2015)
Rao, P.S.S.N.V.P., Rao, K.P.S.B.: On generalized inverses of Boolean matrices. Linear Algebra Appl. 11, 135–153 (1975)
Wagneur, E.: Moduloïds and pseudomodule 1. Dimension theory. Discrete Math. 98, 57–73 (1991)
Acknowledgements
This work is supported by the National Natural Science Foundation of China (Grant No.11861045) and the Hongliu Foundation of First-class Disciplines of Lanzhou University of Technology, China.
Yang, L., Yang, SL. On Generalized Inverses of m × n Matrices Over a Pseudoring. Vietnam J. Math. 50, 261–274 (2022). https://doi.org/10.1007/s10013-021-00502-x