1 Introduction

Recall from [12] that a pseudoring is a triple (D,+,⋅), where D is a completely ordered set with a minimal element 0, and + and ⋅ are two binary operations on D satisfying:

$$ a+b=\max\{a, b\}, $$

\((D^{\ast },\cdot )\) is a group with identity element 1 (where \(D^{\ast }=D\setminus \{\mathbf {0}\}\));

$$ (\forall a\in D)\quad\mathbf{0}\cdot a=a\cdot \mathbf{0}=\mathbf{0}; $$

“⋅” is distributive with respect to “+”.

The following algebras are all pseudorings:

  1. \((\mathbb {R}\cup \{-\infty \},\max \limits ,+)\) (tropical algebra);

  2. \((\mathbb {Z}\cup \{-\infty \},\max \limits ,+)\) (extended integers);

  3. \((\mathbb {Q}\cup \{-\infty \},\max \limits ,+)\) (extended rationals);

  4. \((\mathbb {R}_{+}\cup \{0\},\max \limits ,\cdot )\) (max algebra);

  5. \((\mathbb {B},+,\cdot )\) (2-element Boolean algebra).

Matrix theory over the above pseudorings has broad applications in combinatorial optimization, control theory, automata theory and many other areas of science (see [2,3,4, 6, 8]). As usual, the set of all m × n matrices over a pseudoring D is denoted by Mm×n(D). In particular, we will use Mn(D) instead of Mn×n(D). The operations + and ⋅ on D induce corresponding operations on matrices in the obvious way. It is easy to see that (Mn(D),⋅) is a semigroup. For brevity, we will write AC in place of A ⋅ C. Unless otherwise stated, a matrix always means a matrix over a pseudoring in the remainder of this paper.
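For readers who wish to experiment, the following is a minimal computational sketch of the induced matrix operations over the tropical pseudoring \((\mathbb {R}\cup \{-\infty \},\max ,+)\) of example (1); the helper names (mp_add, mp_mul, mp_eye) are ours, not from the literature.

```python
import numpy as np

NEG_INF = -np.inf   # the zero element 0 of this pseudoring
UNIT = 0.0          # the identity element 1 (tropical multiplication is +)

def mp_add(A, B):
    """Induced matrix '+': entrywise a + b = max{a, b}."""
    return np.maximum(A, B)

def mp_mul(A, B):
    """Induced matrix product: (AB)_ij = max_k (a_ik + b_kj)."""
    m, p = A.shape[0], B.shape[1]
    C = np.full((m, p), NEG_INF)
    for i in range(m):
        for j in range(p):
            C[i, j] = np.max(A[i, :] + B[:, j])
    return C

def mp_eye(n):
    """The identity matrix I_n: 1 (= 0.0) on the diagonal, 0 (= -inf) elsewhere."""
    return np.where(np.eye(n, dtype=bool), UNIT, NEG_INF)

A = np.array([[UNIT, 3.0], [NEG_INF, UNIT]])
assert np.array_equal(mp_mul(A, mp_eye(2)), A)   # I_2 acts as a right identity
```

The remaining sketches in this paper reuse these helpers.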

For a matrix A in Mm×n(D), suppose that there exists a matrix G in Mn×m(D) such that AGA = A. Then A is called regular, and G is called a generalized inverse of A. A series of papers in the literature considers the structure of regular matrices over various algebraic systems. In 1975, Rao and Rao [11] studied the generalized inverses of Boolean matrices. In 1981, Hall and Katz [7] characterized the generalized inverses of nonnegative integer matrices. In 1998, Bapat [1] obtained the generalized inverses of nonnegative real matrices. And in 2015, Kang and Song [10] determined the general form of matrices over the max algebra having generalized inverses.

The main purpose of this paper is to study regular matrices and their generalized inverses over a pseudoring. The forms of regular matrices obtained in this paper extend the corresponding results in [10] and [11]. The paper is divided into five sections. In Section 2 we introduce some preliminary notions and notation. A normal form of an idempotent matrix is given in Section 3. After characterizing idempotent matrices, we obtain the general form of matrices which have generalized inverses. In Section 4 we define a space decomposition of a matrix and prove that a matrix has a g-inverse if and only if it has a space decomposition. In Section 5 we establish necessary and sufficient conditions for the existence of the Moore–Penrose inverse and other types of g-inverses of a matrix.

2 Preliminaries

The following notion of linear dependence can be found in [12]. We will be interested in the space Dn consisting of n-tuples x with entries in D. We write xi for the i-th component of x. The set Dn admits an addition and a scaling action of D given by (x + y)i = xi + yi and (λx)i = λxi, respectively. These operations give Dn the structure of a D-pseudomodule. Each element of this pseudomodule is called a vector. A vector α in Dn is called a linear combination of a subset \(\{\alpha _{1}, \dots , \alpha _{k}\}\) of Dn if there exist \(d_{1},\dots ,d_{k}\in D\) such that

$$ \alpha=d_{1}\alpha_{1}+{\cdots} + d_{k}\alpha_{k}. $$

For a subset S of Dn, let span(S) denote

$$ \left\{{\sum}_{i=1}^{k}d_{i}\alpha_{i} \mid k \in \mathbb{N}, \alpha_{i} \in S, d_{i} \in D, i=1,2,\ldots,k\right\}, $$

where \(\mathbb N\) denotes the set of all natural numbers. The set S is called linearly dependent if there exists a vector α ∈ S such that α is a linear combination of elements of S ∖{α}. Otherwise, S is called linearly independent. A subset {αi | i ∈ I} of a subpseudomodule \(\mathcal {V}\) of Dn is called a basis of \(\mathcal {V}\) if \(\text {span}(\{\alpha _{i} ~|~ i\in I\})=\mathcal {V}\) and {αi | i ∈ I} is linearly independent.

In the sequel, we will need the following notions and notation.

  • In denotes the identity matrix, i.e., the diagonal entries are all 1 and the other entries are all 0.

  • An n × n matrix A is said to be invertible if there exists an n × n matrix B such that AB = BA = In. In this case, B is called an inverse of A and is denoted by A−1.

  • An n × n matrix is called a monomial matrix if it has exactly one entry not equal to 0 in each row and each column.

  • An n × n matrix is called a permutation matrix if it is formed from the identity matrix by reordering its columns and/or rows.

  • O denotes the zero matrix, i.e., the matrix whose entries are all 0.

Proposition 2.1

A matrix A is invertible if and only if A is a monomial matrix.

Proof

Suppose that A = (aij)n×n is an invertible matrix. Then there exists a matrix B = (bij)n×n such that

$$ AB=BA=(c_{ij})_{n\times n}=I_{n}. $$

Thus

$$ a_{i1}b_{1i}+a_{i2}b_{2i}+\cdots+a_{in}b_{ni}=c_{ii}=\mathbf{1}. $$

It follows that there exists k with 1 ≤ k ≤ n such that aikbki = 1, and so

$$ a_{ik}=(b_{ki})^{-1}\neq \mathbf{0}. $$
(2.1)

Since, for any j ≠ i,

$$ \begin{array}{@{}rcl@{}} c_{ij}&=&a_{i1}b_{1j}+a_{i2}b_{2j}+\cdots+a_{ik}b_{kj}+\cdots+a_{in}b_{nj}=\mathbf{0},\\ c_{ji}&=&a_{j1}b_{1i}+a_{j2}b_{2i}+\cdots+a_{jk}b_{ki}+\cdots+a_{jn}b_{ni}=\mathbf{0}, \end{array} $$

we have

$$ b_{kj}=\mathbf{0}\quad \text{ and }\quad a_{jk}=\mathbf{0}. $$
(2.2)

Since, for any j ≠ k,

$$ \begin{array}{@{}rcl@{}} c_{kj}&=&b_{k1}a_{1j}+b_{k2}a_{2j}+\cdots+b_{ki}a_{ij}+\cdots+b_{kn}a_{nj}=\mathbf{0},\\ c_{jk}&=&b_{j1}a_{1k}+b_{j2}a_{2k}+\cdots+b_{ji}a_{ik}+\cdots+b_{jn}a_{nk}=\mathbf{0}, \end{array} $$

we obtain

$$ a_{ij}=\mathbf{0}\quad \text{ and }\quad b_{ji}=\mathbf{0}. $$
(2.3)

By (2.1), (2.2) and (2.3), we obtain that A is a monomial matrix.

It is easy to see that the converse is true. □

Every permutation matrix is a monomial matrix, and the inverse of a permutation matrix is its transpose.
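As a hedged illustration of Proposition 2.1 in the tropical setting (reusing the helpers above; the matrix values are hypothetical): a monomial matrix is inverted by transposing it and replacing each non-0 entry by its multiplicative inverse, which in \((\mathbb {R}\cup \{-\infty \},\max ,+)\) means negating it.

```python
# M has exactly one non-0 (i.e. finite) entry in each row and column.
M = np.array([[NEG_INF, 3.0], [-1.0, NEG_INF]])
# Its inverse: transpose and invert each finite entry (negation here).
M_inv = np.where(M.T == NEG_INF, NEG_INF, -M.T)
assert np.array_equal(mp_mul(M, M_inv), mp_eye(2))
assert np.array_equal(mp_mul(M_inv, M), mp_eye(2))
```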

The column space (row space, resp.) of an m × n matrix A is the subpseudomodule of Dm (Dn, resp.) spanned by its columns (rows, resp.) and is denoted by Col(A) (Row(A), resp.). By Corollary 4.7 in [12], every finitely generated pseudomodule has a basis. For an m × n matrix A, let \(\mathbf {a}_{i\ast }\) and \(\mathbf {a}_{\ast j}\) denote the i-th row and the j-th column of A, respectively. As a consequence, by Theorem 5 in [12] we immediately have

Lemma 2.2

Let \(\{\mathbf {a}_{\ast i_{1}},\dots , \mathbf {a}_{\ast i_{r}}\}\) and \(\{\mathbf {a}_{\ast j_{1}}, \dots , \mathbf {a}_{\ast j_{r}}\}\) be any two bases of Col(A). Then there exists an r × r monomial matrix M such that

$$ \left[ \begin{array}{ccc} \mathbf{a}_{\ast i_{1}}& \cdots& \mathbf{a}_{\ast i_{r}} \end{array} \right]=\left[ \begin{array}{ccc} \mathbf{a}_{\ast j_{1}}& {\cdots} &\mathbf{a}_{\ast j_{r}} \end{array} \right]M. $$

Lemma 2.2 tells us that the cardinalities of any two bases of Col(A) are the same. This cardinality is called the column rank of A, denoted by c(A). Dually, we can define the row rank of A, denoted by r(A). The column rank and the row rank of a matrix need not be equal in general. Let A be an m × n matrix. If c(A) = r(A) = r, then r is called the rank of A. If c(A) = n and r(A) = m, then A is called nonsingular, and singular otherwise.

Let A = (aij) be an m × n matrix. The submatrix

$$ \left[ \begin{array}{cccc} a_{j_{1} i_{1}}& a_{j_{1} i_{2}}& \cdots& a_{j_{1} i_{r}} \\ a_{j_{2} i_{1}}& a_{j_{2} i_{2}}& \cdots& a_{j_{2} i_{r}} \\ {\vdots} & {\vdots} & & {\vdots} \\ a_{j_{s} i_{1}}& a_{j_{s} i_{2}}& \cdots& a_{j_{s} i_{r}} \end{array} \right] $$

is called a basis submatrix of A if \(\{\mathbf {a}_{\ast i_{1}}, \dots , \mathbf {a}_{\ast i_{r}}\}\) is a basis of Col(A) and \(\{\mathbf {a}_{j_{1}\ast }, \dots , \mathbf {a}_{j_{s}\ast }\}\) is a basis of Row(A). If A1 and A2 are both basis submatrices of A, then there exist monomial matrices M1 and M2 such that A1 = M1A2M2. We denote a basis submatrix of A by \(\bar A\). It is easy to see that \(c(A)=c(\bar A)=\) the number of columns of \(\bar A\), and that \(r(A)=r(\bar A)=\) the number of rows of \(\bar A\). We will need the Rao “normal form” of a matrix.

Lemma 2.3

([5, Lemma 101], Rao “normal form”) Let A be an m × n matrix. Then there exist an m × m permutation matrix Q and an n × n permutation matrix P such that

$$ A=Q\left[ \begin{array}{ccc} \bar A& \bar A U \\ V\bar A & V\bar AU \end{array} \right]P, $$

where U is an r × (n − r) matrix and V is an (m − s) × s matrix, with r = c(A) and s = r(A).

In the following we will introduce and study the equivalences \({\mathscr{R}}^{\ast }\) and \({\mathscr{L}}^{\ast }\) on the set M(D) (\(= \cup _{m=1}^{\infty } \cup _{n=1}^{\infty } M_{m \times n}(D)\)) of all matrices. The equivalences \({\mathscr{R}}^{\ast }\) and \({\mathscr{L}}^{\ast }\) are, respectively, defined by

$$ \begin{array}{@{}rcl@{}} (\forall A,~B \in M(D))~~ A\mathscr{R}^{\ast} B ~&\Longleftrightarrow& ~ (\exists X,~Y\in M(D))~~ A=BX~\text{ and }~ B=AY; \\ (\forall A,~B \in M(D))~~A\mathscr{L}^{\ast} B ~&\Longleftrightarrow& ~ (\exists X,~Y\in M(D))~~ A=XB~\text{ and }~ B=YA. \end{array} $$

It is easy to see that if \(A{\mathscr{R}}^{\ast } B\) (\(A {\mathscr{L}}^{\ast } B\), resp.), then the number of rows (columns, resp.) of A is equal to that of B. In particular, the restrictions of \({\mathscr{R}}^{\ast }\) and \({\mathscr{L}}^{\ast }\) to Mn(D) coincide with Green’s relations \({\mathscr{R}}\) and \({\mathscr{L}}\) on the semigroup (Mn(D),⋅), respectively, which play a key role in the algebraic theory of semigroups.

Proposition 2.4

Let A and B be matrices in M(D). Then

$$ A\mathscr{R}^{\ast} B~ \Longleftrightarrow~ \text{Col}(A)=\text{Col}(B)\quad\text{ and }\quad A\mathscr{L}^{\ast} B~\Longleftrightarrow~\text{Row}(A)=\text{Row}(B). $$

Proof

Suppose that \(A{\mathscr{R}}^{\ast } B\). If A is an m × n matrix and B is an m × q matrix, then by the definition of \({\mathscr{R}}^{\ast }\), there exists a q × n matrix X such that BX = A. Now, it follows that \(\text {Col}(BX)=\text {Col}(A)\subseteq \text {Col}(B)\), since the columns of BX are contained in Col(B). Dually, \(\text {Col}(B)\subseteq \text {Col}(A)\). Thus Col(A) = Col(B).

Conversely, suppose that Col(A) = Col(B). If A is an m × n matrix and B is an m × q matrix, then each column of A is in Col(A) and so is in Col(B), since the pseudomodule D has a multiplicative identity 1. This implies that each column of A can be written as a linear combination of the columns of B. Thus there exists a q × n matrix X such that BX = A. Similarly, we can prove that there exists an n × q matrix Y such that AY = B. Hence, we have shown that \(A{\mathscr{R}}^{\ast } B\).

Dually, we can show that \(A{\mathscr{L}}^{\ast } B\) if and only if Row(A) = Row(B). □

We now immediately deduce the following result.

Corollary 2.5

Let A and B be matrices in M(D). Then

$$ A\mathscr{R}^{\ast} B~\Longrightarrow~c(A)=c(B)\quad\text{ and }\quad A\mathscr{L}^{\ast} B~\Longrightarrow~r(A)=r(B). $$

For an m × n matrix A, we will introduce various types of inverses of A. Consider an n × m matrix G in the following equations:

  1. (G-1) AGA = A;

  2. (G-2) GAG = G;

  3. (G-3) (AG)T = AG;

  4. (G-4) (GA)T = GA.

A matrix G satisfying (G-1) is called a generalized inverse (g-inverse for short) of A. If G satisfies (G-1) and (G-3) ((G-1) and (G-4), resp.), then it is called a {1,3}-g-inverse ({1,4}-g-inverse, resp.) of A. Finally, if G satisfies all of (G-1)–(G-4), then it is called a Moore–Penrose inverse of A.
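The four conditions are easy to test mechanically. The sketch below (reusing the tropical helpers from Section 1; penrose_conditions is our own hypothetical name) reports which of (G-1)–(G-4) a candidate G satisfies; ordinary transposition plays the role of T.

```python
def penrose_conditions(A, G):
    """Report which of (G-1)-(G-4) hold for the pair (A, G)."""
    AG, GA = mp_mul(A, G), mp_mul(G, A)
    return {"G-1": np.array_equal(mp_mul(AG, A), A),
            "G-2": np.array_equal(mp_mul(GA, G), G),
            "G-3": np.array_equal(AG.T, AG),
            "G-4": np.array_equal(GA.T, GA)}

# For the monomial matrix M of Section 2, its inverse satisfies all four,
# so M_inv is a Moore-Penrose inverse of M.
print(penrose_conditions(M, M_inv))   # every entry is True
```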

We note that if G1 and G2 are any two g-inverses of A, then G1 + G2 is also a g-inverse of A, since

$$ A(G_{1}+ G_{2})A=AG_{1}A+ AG_{2}A=A+ A=A. $$

Also, it is well known that G is a {1,3}-g-inverse of A if and only if GT is a {1,4}-g-inverse of AT.

Proposition 2.6

Let A and G be an m × n matrix and an n × m matrix, respectively. Then the following statements are equivalent:

  1. (i) AGA = A;

  2. (ii) (AG)2 = AG and \(A{\mathscr{R}}^{\ast } AG\);

  3. (iii) (GA)2 = GA and \(A{\mathscr{L}}^{\ast } GA\).

Proof

(i) ⇒ (ii). Assume that (i) holds. Then (AG)2 = (AGA)G = AG. Also, it is clear that \(A{\mathscr{R}}^{\ast } AG\).

(ii) ⇒ (i). Assume that (ii) holds. Then A = AGX for some m × n matrix X. This implies that AGA = AGAGX = AGX = A since (AG)2 = AG. Thus (i) holds.

Similarly, we can show that (i) ⇔ (iii). □

3 Normal Form of an Idempotent Matrix

In this section, we will give the Rao normal form of an idempotent matrix, which plays a very important role in studying the maximal subgroups of the semigroup Mn(D). The following results are inspired by Kang and Song [10], who studied idempotent matrices over the max algebra.

Define the partial order ≤ on Mm×n(D) by

$$ A\leq B~\Longleftrightarrow~A+B=B. $$
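In the tropical sketches above, this order is simply the entrywise order, since A + B = B means max{aij, bij} = bij for all i, j. A hypothetical one-line helper:

```python
def mp_le(A, B):
    """A <= B in the sense of Section 3, i.e. A + B = B."""
    return np.array_equal(mp_add(A, B), B)
```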

Lemma 3.1

Let A, B, C be m × n matrices, let X1, Y1 be n × p matrices and let X2, Y2 be p × m matrices. Then the following statements hold.

  1. (i) If A + B = C, then A ≤ C and B ≤ C.

  2. (ii) If X1 ≤ Y1, then AX1 ≤ AY1. If X2 ≤ Y2, then X2A ≤ Y2A.

Proof

(i) Suppose that A + B = C. Then we have

$$ A+ C =A+(A+ B)=A+ B=C. $$

Hence A ≤ C. Similarly, B ≤ C.

(ii) Suppose that X1 ≤ Y1. Then X1 + Y1 = Y1. Thus it follows that

$$ AX_{1}+ AY_{1}=A(X_{1}+ Y_{1})=AY_{1}, $$

and so AX1 ≤ AY1. Similarly, if X2 ≤ Y2, then X2A ≤ Y2A. □

Lemma 3.2

If E = (eij) is an n × n idempotent matrix, then eii ≤ 1 for all 1 ≤ i ≤ n.

Proof

Let E = (eij)n×n be an idempotent matrix. Then for any 1 ≤ in,

$$ e_{ii}\cdot e_{ii} \leq (e_{i1}\cdot e_{1i})+ {\cdots} + (e_{ii}\cdot e_{ii})+ {\cdots} + (e_{in}\cdot e_{ni}) =e_{ii}. $$

This implies that eii ≤ 1: if eii = 0, this is trivial; otherwise eii is invertible in \((D^{\ast },\cdot )\) and \(e_{ii}=(e_{ii}\cdot e_{ii})\cdot {e^{-1}_{ii}}\leq e_{ii}\cdot {e^{-1}_{ii}}=\mathbf {1}\). □

Lemma 3.3

Let E = (eij) be an n × n idempotent matrix. If eii < 1 for some i ∈{1,2,…,n}, then the i-th column (row, resp.) of E is a linear combination of the remaining columns (rows, resp.). Furthermore, the matrix obtained from E by deleting the i-th column and the i-th row is an (n − 1) × (n − 1) idempotent matrix.

Proof

Let E = (eij)n×n be an idempotent matrix. Suppose that eii < 1 for some 1 ≤ in. Without loss of generality, we assume that e11 < 1. Partition E as \(\left [ \begin {array}{ccc} e_{11} & E_{12}\\ E_{21} & E_{22} \end {array} \right ]\). Then we have

$$ E^{2}=\left[ \begin{array}{ccc} e_{11}\cdot e_{11} + E_{12}E_{21} & e_{11}E_{12}+ E_{12}E_{22} \\ E_{21}e_{11}+ E_{22}E_{21} & E_{21}E_{12}+ E_{22}^{2} \end{array} \right]=\left[ \begin{array}{ccc} e_{11} & E_{12}\\ E_{21} & E_{22} \end{array} \right]. $$

This implies that

$$ \left[ \begin{array}{ccc} E_{12}E_{21} & E_{12}E_{22} \\ E_{22}E_{21} & E_{21}E_{12}+ E_{22}^{2} \end{array} \right] =\left[ \begin{array}{ccc} e_{11} & E_{12}\\ E_{21} & E_{22} \end{array} \right], $$

since e11 < 1. Thus it follows that

$$ \begin{array}{@{}rcl@{}} &&\left[ \begin{array}{ccc} e_{11} \\ E_{21} \end{array} \right]=\left[ \begin{array}{ccc} E_{12}E_{21}\\ E_{22}E_{21} \end{array} \right]=\left[ \begin{array}{ccc} E_{12}\\ E_{22} \end{array} \right]E_{21}, \end{array} $$
(3.1)
$$ \begin{array}{@{}rcl@{}} && \left[ \begin{array}{ccc} e_{11}& E_{12} \end{array} \right]=\left[ \begin{array}{ccc} E_{12}E_{21} & E_{12}E_{22} \end{array} \right]= E_{12}\left[ \begin{array}{ccc} E_{21} & E_{22} \end{array} \right], \end{array} $$
(3.2)
$$ \begin{array}{@{}rcl@{}} &&E_{21}E_{12}+ E_{22}^{2}=E_{22}. \end{array} $$
(3.3)

Equation (3.1) ((3.2), resp.) tells us that the first column (the first row, resp.) of E is a linear combination of the remaining columns (rows, resp.). By Lemma 3.1 and (3.3), we have

$$ E_{22}^{2}\leq E_{22}\quad\text{ and }\quad E_{21}E_{12}\leq E_{22}. $$
(3.4)

Thus it follows by (3.4) and Lemma 3.1 that \(E_{21}E_{12}=E_{21}E_{12}E_{22}\leq E_{22}^{2}\), since E12E22 = E12. We therefore have

$$ E_{22}=E_{21}E_{12}+ E_{22}^{2} \leq E_{22}^{2}+ E_{22}^{2}=E_{22}^{2}, $$
(3.5)

by Lemma 3.1. Thus (3.4) and (3.5) tell us that \(E_{22}^{2}=E_{22}\). □

The above lemma tells us that if E = (eij) is an n × n idempotent matrix and eii < 1 for some 1 ≤ i ≤ n, then c(E) < n and r(E) < n. Thus by Lemmas 3.2 and 3.3, we immediately have the following result.

Corollary 3.4

All main diagonal entries of a nonsingular idempotent matrix are 1.

Lemma 3.5

Let E be an n × n idempotent matrix whose main diagonal entries are all 1. Then the i-th row of E is a linear combination of the remaining rows if and only if the i-th column of E is a linear combination of the remaining columns. Furthermore, the matrix obtained from E by deleting the i-th column and the i-th row is an (n − 1) × (n − 1) idempotent matrix.

Proof

Let E = (eij) be an n × n idempotent matrix with eii = 1 for all 1 ≤ i ≤ n.

Suppose that the i-th row of E is a linear combination of the remaining rows. Without loss of generality, we assume that the first row of E is a linear combination of the remaining rows. Partition E as \( \left [ \begin {array}{ccc} \mathbf {1} & E_{12}\\ E_{21} & E_{22} \end {array} \right ]\). Then we have

$$ E^{2}=\left[ \begin{array}{ccc} \mathbf{1}+ E_{12}E_{21} & E_{12}+ E_{12}E_{22} \\ E_{21}+ E_{22}E_{21} & E_{21}E_{12}+ E_{22}^{2} \end{array} \right]=\left[ \begin{array}{ccc} \mathbf{1} & E_{12}\\ E_{21} & E_{22} \end{array} \right]. $$

Thus by Lemma 3.1 we have

$$ E_{22}E_{21}\leq E_{21} \quad\text{ and }\quad E_{22}^{2}\leq E_{22}. $$
(3.6)

On the other hand, it is easy to see that

$$ I_{n-1}\leq E_{22}, $$

since eii = 1 for all 2 ≤ i ≤ n. Thus by Lemma 3.1, we can show that

$$ E_{21}\leq E_{22}E_{21} \quad\text{ and }\quad E_{22}\leq E_{22}^{2}. $$
(3.7)

Hence, combining (3.6) and (3.7), we have

$$ E_{21}=E_{22}E_{21} \quad\text{ and }\quad E_{22}=E_{22}^{2}. $$
(3.8)

Since \(\left [\begin {array}{ccc} \mathbf {1} & E_{12} \end {array} \right ]\) is a linear combination of the rows of \(\left [\begin {array}{ccc} E_{21}& E_{22} \end {array} \right ]\), there exists a row vector X such that \(\left [\begin {array}{ccc} \mathbf {1}& E_{12} \end {array} \right ]=X \left [\begin {array}{ccc} E_{21}& E_{22} \end {array} \right ]\). That is to say,

$$ \mathbf{1}=XE_{21} \quad\text{ and }\quad E_{12}=XE_{22}. $$
(3.9)

Thus it follows from (3.8) and (3.9) that 1 = XE21 = XE22E21 = E12E21. Therefore,

$$ \left[\begin{array}{ccc} \mathbf{1} \\ E_{21} \end{array} \right]=\left[\begin{array}{ccc} E_{12}E_{21}\\ E_{22}E_{21} \end{array} \right]=\left[\begin{array}{ccc} E_{12}\\ E_{22} \end{array} \right]E_{21}. $$

This shows that the first column of E is a linear combination of the remaining columns.

Dually, we can show that the converse is true. This completes our proof. □

By Lemmas 3.3 and 3.5 we immediately have

Corollary 3.6

If E is an idempotent matrix, then c(E) = r(E).

Lemma 3.7

Let A and B be n × n matrices. If all main diagonal entries of A are 1 and ABA ≤ A, then B ≤ A.

Proof

Suppose that A = (aij) and B = (bij) are n × n matrices, and write ABA = (cij). If all main diagonal entries of A are 1 and ABA ≤ A, then for any 1 ≤ i, j ≤ n,

$$ b_{ij}=\mathbf{1}\cdot b_{ij}\cdot \mathbf{1}=a_{ii}\cdot b_{ij}\cdot a_{jj}\leq {\sum}_{k=1}^{n}{\sum}_{l=1}^{n}(a_{ik}\cdot b_{kl}\cdot a_{lj}) = c_{ij} \leq a_{ij}. $$

That is to say, B ≤ A. □

The following gives the normal form of an idempotent matrix.

Theorem 3.8

Let E be an n × n matrix. Then E is an idempotent matrix of rank r if and only if there exists an n × n permutation matrix P such that

$$ E=P\left[\begin{array}{ccc} \bar E & \bar EU\\ V\bar E& V\bar EU \end{array} \right]P^{T}, $$

where \(\bar E\) is a basis submatrix of E and is an r × r nonsingular idempotent matrix, U is an r × (nr) matrix and V is an (nr) × r matrix such that \(UV\leq \bar E\).

Proof

Suppose that E is an n × n matrix. If E is an idempotent matrix of rank r (r < n), then by Lemmas 3.3 and 3.5 we have that the i-th row of E is a linear combination of the remaining rows if and only if the i-th column of E is a linear combination of the remaining columns. Thus by carrying out the same row permutations and column permutations of E, we can find a matrix \(E^{\prime }\) such that the first r columns and the first r rows of \(E^{\prime }\) are linearly independent. That is to say, there exists an n × n permutation matrix P such that

$$ P^{T}EP=E^{\prime}=\left[ \begin{array}{ccc} \bar E & X\\ Y & Z \end{array} \right], $$

where \(\bar E\) is a basis submatrix of E, and X, Y and Z are matrices of appropriate sizes. Also, by Lemmas 3.3 and 3.5 we can show that \(\bar E\) is an r × r nonsingular idempotent matrix. Moreover, \(\left [\begin {array}{ccc} X \\ Z \end {array} \right ]=\left [\begin {array}{ccc} \bar E \\ Y \end {array} \right ]U\) for some r × (n − r) matrix U, since each column of \(\left [\begin {array}{ccc} X \\ Z \end {array} \right ]\) is a linear combination of the columns of \(\left [\begin {array}{ccc} \bar E \\ Y \end {array} \right ]\). Dually, we can show that \(\left [\begin {array}{ccc} Y & YU \end {array} \right ]=V\left [\begin {array}{ccc} \bar E & \bar EU \end {array}\right ]\) for some (n − r) × r matrix V. Hence, we have

$$ E=P\left[ \begin{array}{ccc} \bar E & \bar EU\\ V\bar E & V\bar EU \end{array} \right]P^{T}. $$

This is a Rao “normal form” of E. Since E2 = E, we have that \(E^{\prime } = E^{\prime 2}\). That is to say,

$$ \left[\begin{array}{ccc} \bar E & \bar EU\\ V\bar E & V\bar EU \end{array} \right]=\left[\begin{array}{ccc} \bar E^{2} + \bar EUV\bar E & \bar E^{2}U + \bar EUV\bar EU\\ V\bar E^{2} + V\bar EUV\bar E & V\bar EU + V\bar EUV\bar EU \end{array}\right]. $$

Hence, \(\bar E=\bar E^{2}+ \bar EUV\bar E\). Thus by Lemma 3.1, we can show that \(\bar EUV\bar E\leq \bar E\). By Corollary 3.4 and Lemma 3.7, we have that \(UV\leq \bar E\).

The converse is easily verified. □
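A small hand-built instance of Theorem 3.8 in the tropical setting (hypothetical values, continuing the earlier sketches): take P = I2, \(\bar E=[\mathbf {1}]\) (the 1 × 1 matrix [0.0]), U = [u] and V = [v] with u + v ≤ 0, so that \(UV\leq \bar E\).

```python
u, v = 2.0, -5.0                         # u + v = -3 <= 0, so UV <= Ebar
E = np.array([[0.0, u],
              [v, v + u]])               # the normal form with P = I
assert np.array_equal(mp_mul(E, E), E)   # E is idempotent of rank 1
assert mp_le(np.array([[u + v]]), np.array([[0.0]]))   # UV <= Ebar
```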

Corollary 3.9

Let E be an n × n matrix. Then E is a symmetric idempotent matrix of rank r if and only if there exists an n × n permutation matrix P such that

$$ E=P\left[\begin{array}{ccc} \bar E & \bar EV^{T}\\ V\bar E& V\bar EV^{T} \end{array} \right]P^{T}, $$

where \(\bar E\) is a basis submatrix of E and is an r × r symmetric nonsingular idempotent matrix, and V is an (n − r) × r matrix such that \(V^{T}V\leq \bar E\) (that is, Theorem 3.8 holds with U = VT).

4 Generalized Inverses of Regular Matrices

In this section we will give a characterization of regular matrices. Also, we will define a space decomposition of a matrix and prove that a matrix A is regular if and only if A is space decomposable.

Theorem 4.1

Let A be an m × n matrix. Then A is regular if and only if there exist an m × m permutation matrix P and an n × n permutation matrix Q such that

$$ A=P\left[\begin{array}{ccc} FM & FC\\ VFM & VFC \end{array} \right]Q, $$

where F is a nonsingular idempotent matrix, M is a diagonal monomial matrix and C, V are matrices of appropriate sizes.

Proof

Suppose that A is an m × n matrix. If A is regular, then there exists an n × m matrix G such that G is a g-inverse of A. Thus AG is idempotent and \(A{\mathscr{R}}^{\ast } AG\) by Proposition 2.6. Let the rank of AG be r. Now, by Theorem 3.8, there exists an m × m permutation matrix P such that \(AG=P\left [\begin {array}{ccc} F & FU\\ VF & VFU \end {array} \right ]P^{T}\), where F is an r × r nonsingular idempotent matrix, U and V are matrices of appropriate sizes, and so

$$ AGP=P\left[\begin{array}{ccc} F & FU\\ VF & VFU \end{array} \right]. $$
(4.1)

Notice that the set of the first r columns of AGP is a basis of Col(AGP). Since \(A{\mathscr{R}}^{\ast } AG{\mathscr{R}}^{\ast }(AGP)\), it follows from (4.1), Lemma 2.2 and Proposition 2.4 that there exists a permutation matrix Q such that

$$ A=AGP\left[\begin{array}{ccc} M & X\\ O & Y \end{array} \right]Q = P\left[\begin{array}{ccc} F & FU\\ VF & VFU \end{array}\right]\left[\begin{array}{ccc} M & X\\ O & Y \end{array}\right]Q =P\left[\begin{array}{ccc} FM & FC\\ VFM & VFC \end{array}\right]Q, $$

where M is an r × r monomial matrix and C = X + UY.

Conversely, it is easy to verify that \(Q^{T}\left [\begin {array}{ccc} M^{-1}& O\\ O& O \end {array}\right ]P^{T}\) is a g-inverse of A. □

As a consequence, we have

Corollary 4.2

If A is a regular matrix, then c(A) = r(A).

Notice that for a matrix A, A is regular if and only if AT is regular, since G is a g-inverse of A if and only if GT is a g-inverse of AT.

A nonzero matrix A is said to be space decomposable if there exist matrices L and R such that

$$ A=LR,\quad A\mathscr{R}^{\ast} L \quad\text{and}\quad A\mathscr{L}^{\ast} R. $$
(4.2)

The decomposition LR will be called a space decomposition of A. Thus Col(A) = Col(L) and Row(A) = Row(R); that is, A is decomposed into a product of two matrices, one of which preserves its column space and the other its row space.
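For the rank-1 idempotent E built after Theorem 3.8, a space decomposition can be read off directly (a hypothetical continuation of the earlier sketches): the first column of E spans Col(E) and the first row spans Row(E).

```python
L = np.array([[0.0], [v]])    # first column of E: spans Col(E)
R = np.array([[0.0, u]])      # first row of E: spans Row(E)
assert np.array_equal(mp_mul(L, R), E)   # E = LR, a space decomposition
```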

Proposition 4.3

A nonzero matrix is regular if and only if it is space decomposable.

Proof

Suppose that A is a nonzero m × n matrix.

If A is a regular matrix of rank r, then by Theorem 4.1, A is of the form

$$ P\left[\begin{array}{ccc} FM & FC\\ VFM & VFC \end{array}\right]Q, $$

where P and Q are permutation matrices and M is a monomial matrix. Let

$$ L_{A}=P\left[\begin{array}{ccc} F \\ VF \end{array}\right]\quad\text{ and }\quad R_{A}=\left[\begin{array}{ccc} FM & FC \end{array}\right]Q. $$
(4.3)

Then

$$ A=L_{A}R_{A},\quad L_{A}=AQ^{T} \left[\begin{array}{ccc} M^{-1}\\ O \end{array} \right]\quad\text{ and }\quad R_{A}=\left[\begin{array}{ccc} I_{r} & O \end{array}\right]P^{T}A. $$

This implies that \(A{\mathscr{R}}^{\ast } L_{A}\) and \(A{\mathscr{L}}^{\ast } R_{A}\). Thus LA and RA satisfy the condition (4.2) and so A is space decomposable.

Conversely, assume that A is space decomposable. Then it follows from (4.2) that there exist matrices L and R such that A = LR, \(A{\mathscr{R}}^{\ast } L\) and \(A{\mathscr{L}}^{\ast } R\). This implies that L = AX and R = Y A for some matrices X and Y. Hence

$$ A=LR=A(XY)A, $$

and so A is regular. □

Notice that a matrix of rank less than 3 is space decomposable.

Corollary 4.4

If the rank of a matrix A is less than 3, then A is regular.

In [9], Johnson and Kambites proved that all 2 × 2 tropical matrices are regular. Corollary 4.4 extends this result.

Lemma 4.5

Let A be a nonsingular idempotent matrix. If G is a g-inverse of A, then AG = GA = A.

Proof

Suppose that A = (aij) is an n × n nonsingular idempotent matrix and that G is a g-inverse of A. Then AGA = A. It follows from Proposition 2.6 that AG is idempotent and \(A{\mathscr{R}}^{\ast } AG\). Since A is nonsingular, we can show by Corollary 2.5 that AG is also nonsingular. Thus by Corollary 3.4 it follows that the main diagonal entries of A and AG are all 1. Since AGA = A ≤ A, Lemma 3.7 implies that G ≤ A. Hence

$$ AG\leq A^{2}=A, $$
(4.4)

by Lemma 3.1(ii). Since (AG)A(AG) = AG, by Lemma 3.7 we have

$$ A\leq AG. $$
(4.5)

Equations (4.4) and (4.5) tell us that AG = A. Notice that AT is idempotent and ATGTAT = AT. Similarly, we have that ATGT = AT, and hence GA = A. □

Proposition 4.6

Let A be of the form in Theorem 4.1 and let LARA be the space decomposition of A in (4.3). Then for any m × s matrix L and any s × n matrix R, LR is a space decomposition of A if and only if L = LAM1, R = M2RA and FM1M2 = M1M2F = F for some r × s matrix M1 and some s × r matrix M2.

Proof

If A is an m × n regular matrix of rank r, then by Theorem 4.1, A is of the form

$$ P\left[\begin{array}{ccc} FM & FC\\ VFM & VFC \end{array} \right]Q, $$

where P and Q are permutation matrices, M is a monomial matrix and F is a nonsingular idempotent matrix. Let LARA be the space decomposition of A in (4.3). Then it follows from (4.2) that A = LARA, \(A{\mathscr{R}}^{\ast } L_{A}\) and \(A{\mathscr{L}}^{\ast } R_{A}\).

For any m × s matrix L and any s × n matrix R, if LR is a space decomposition of A, then A = LR, \(L{\mathscr{R}}^{\ast } A\) and \(R{\mathscr{L}}^{\ast } A\). This implies that \(L{\mathscr{R}}^{\ast } L_{A}\) and \(R{\mathscr{L}}^{\ast } R_{A}\) and so L = LAM1 and R = M2RA for some r × s matrix M1 and some s × r matrix M2. Now we have

$$ A=LR=P\left[\begin{array}{ccc} F \\ VF \end{array} \right]M_{1}M_{2}\left[ \begin{array}{ccc} FM & FC \end{array} \right]Q =P\left[ \begin{array}{ccc} FM_{1}M_{2}FM & FM_{1}M_{2}FC\\ VFM_{1}M_{2}FM & VFM_{1}M_{2}FC \end{array}\right]Q. $$

Since P and Q are permutation matrices, FM = FM1M2FM. It follows that F = FM1M2F, since M is a monomial matrix. Notice that F is a nonsingular idempotent matrix. By Lemma 4.5 we have that FM1M2 = M1M2F = F.

Conversely, assume that there exists an r × s matrix M1 and an s × r matrix M2 such that L = LAM1, R = M2RA and FM1M2 = M1M2F = F. Then we have

$$ \begin{array}{@{}rcl@{}} LM_{2}&=&L_{A}M_{1}M_{2}=P\left[\begin{array}{ccc} F \\ VF \end{array} \right]M_{1}M_{2}=P\left[ \begin{array}{ccc} F \\ VF \end{array} \right]=L_{A},\\ M_{1}R&=&M_{1}M_{2}R_{A}=M_{1}M_{2}\left[\begin{array}{ccc} FM & FC \end{array} \right]Q=\left[\begin{array}{ccc} FM & FC \end{array} \right]Q= R_{A}, \end{array} $$

and

$$ LR=L_{A}M_{1}M_{2}R_{A}=L_{A}R_{A}=A. $$

This implies that

$$ A=LR,\quad A\mathscr{R}^{\ast} L_{A}\mathscr{R}^{\ast} L \quad\text{ and }\quad A\mathscr{L}^{\ast} R_{A}\mathscr{L}^{\ast} R, $$

and so LR is a space decomposition of A. □

Corollary 4.7

If LR is a space decomposition of a regular matrix A, then both L and R are regular.

Proof

Let A be an m × n regular matrix of rank r. Suppose that LR is a space decomposition of A. By Proposition 4.6 it follows that there exist matrices M1 and M2 such that \(L=P\left [\begin {array}{ccc} F \\ VF \end {array}\right ]M_{1}\), \(R=M_{2}\left [\begin {array}{ccc} FM & FC \end {array} \right ]Q\) and FM1M2 = M1M2F = F, where P and Q are permutation matrices and M is a monomial matrix. Thus we have

$$ L_{G}=M_{2}\left[\begin{array}{ccc} I_{r}& O \end{array}\right]P^{T}\quad\text{ and }\quad R_{G}=Q^{T}\left[\begin{array}{ccc} M^{-1}\\ O \end{array}\right]M_{1} $$

are g-inverses of L and R, respectively, and so both L and R are regular. □

5 Other Types of g-Inverses

Notice that G is a {1,3}-g-inverse of A if and only if AGA = A and (AG)T = AG.

Theorem 5.1

Let A be an m × n matrix. The following statements are equivalent:

  1. (i) A has a {1,3}-g-inverse.

  2. (ii) There exist an m × m permutation matrix P and an n × n permutation matrix Q such that

    $$ A=P\left[\begin{array}{ccc} SM & SZ\\ VSM & VSZ \end{array} \right]Q, $$

    where S is a symmetric nonsingular idempotent matrix, M is a diagonal monomial matrix and V, Z are matrices such that \(V^{T}V\leq S\).

  3. (iii) \(A{\mathscr{L}}^{\ast } A^{T}A\).

Proof

(i) ⇒ (ii). Let an n × m matrix G be a {1,3}-g-inverse of A. Then \(A{\mathscr{R}}^{\ast } AG\), and AG is a symmetric idempotent matrix by Proposition 2.6 and condition (G-3). Let the rank of AG be r. It follows from Corollary 3.9 that there exists an m × m permutation matrix P such that \(AG=P\left [\begin {array}{ccc} S & SV^{T}\\ VS & VSV^{T} \end {array} \right ]P^{T}\), and so

$$ AGP=P\left[\begin{array}{ccc} S & SV^{T}\\ VS & VSV^{T} \end{array} \right], $$

where S is a symmetric nonsingular idempotent matrix and V is a matrix such that \(V^{T}V\leq S\). Notice that the set of the first r columns of AGP is a basis of Col(AGP). Since \(A{\mathscr{R}}^{\ast } AG{\mathscr{R}}^{\ast } AGP\), it follows from Lemma 2.2 and Proposition 2.4 that there exists an n × n permutation matrix Q such that \(A=AGP\left [ \begin {array}{ccc} M & X\\ O & Y \end {array} \right ]Q\), where M is a diagonal monomial matrix. Thus

$$ A=P\left[\begin{array}{ccc} S & SV^{T}\\ VS & VSV^{T} \end{array}\right]\left[\begin{array}{ccc} M & X\\ O & Y \end{array}\right]Q = P\left[\begin{array}{ccc} SM & SZ\\ VSM & VSZ \end{array}\right]Q, $$

where Z = X + VTY.

(ii) ⇒ (iii). Now assume that (ii) holds. Since P is a permutation matrix, S is a symmetric idempotent matrix, M is a diagonal monomial matrix and V is a matrix such that \(V^{T}V\leq S\), it follows that

$$ A^{T}A=Q^{T}\left[\begin{array}{ccc} MSM & MSZ\\ Z^{T}SM & Z^{T}SZ \end{array}\right]Q. $$

Hence, since Q is a permutation matrix and M is a monomial matrix,

$$ A=P\left[\begin{array}{ccc} M^{-1} & O\\ VM^{-1} & O \end{array}\right]QA^{T}A. $$

Thus we have that \(A{\mathscr{L}}^{\ast } A^{T}A\).

(iii) ⇒ (i). If \(A{\mathscr{L}}^{\ast } A^{T}A\), then there exists an m × n matrix G, such that A = GATA. This implies that

$$ AG^{T}A=(GA^{T}A)G^{T}A=G(A^{T}AG^{T})A=G(GA^{T}A)^{T}A=GA^{T}A=A. $$

We also have

$$ (AG^{T})^{T}=(GA^{T}AG^{T})^{T}=GA^{T}AG^{T}=AG^{T}. $$

Therefore, GT is a {1,3}-g-inverse of A. □

In Theorem 5.1(ii), we can easily check that \(Q^{T}\left [\begin {array}{ccc} M^{-1}& M^{-1}V^{T}\\ O& O \end {array}\right ]P^{T}\) is a {1,3}-g-inverse of A.

Similarly, we have the following result.

Proposition 5.2

Let A be an m × n matrix. The following statements are equivalent:

  1. (i) A has a {1,4}-g-inverse.

  2. (ii) There exist an m × m permutation matrix P and an n × n permutation matrix Q such that

    $$ A=P\left[\begin{array}{ccc} SM & SMU\\ WS & WSU \end{array}\right]Q, $$

    where S is a symmetric nonsingular idempotent matrix, M is a diagonal monomial matrix and U, W are matrices such that \(UU^{T}\leq S\).

  3. (iii) \(A{\mathscr{R}}^{\ast } AA^{T}\).

In Proposition 5.2(ii), we can easily check that \(Q^{T}\left [\begin {array}{ccc} M^{-1}& O\\ U^{T}M^{-1}& O \end {array}\right ]P^{T}\) is a {1,4}-g-inverse of A.

In the following result, we characterize matrices having Moore–Penrose inverses. The proof depends on Theorem 5.1 and Proposition 5.2 and is omitted.

Corollary 5.3

Let A be an m × n matrix. The following statements are equivalent:

  1. (i) A has a Moore–Penrose inverse.

  2. (ii) There exist an m × m permutation matrix P and an n × n permutation matrix Q such that

    $$ A=P\left[\begin{array}{ccc} SM & SMU\\ VSM & VSMU \end{array}\right]Q, $$

    where S is a symmetric nonsingular idempotent matrix, M is a diagonal monomial matrix and V, U are matrices such that \(V^{T}V\leq S\) and \(UU^{T}\leq S\).

  3. (iii) \(A{\mathscr{L}}^{\ast } A^{T}A\) and \(A{\mathscr{R}}^{\ast } AA^{T}\).

In Corollary 5.3(ii), we can easily check that \(Q^{T}\left [\begin {array}{ccc} SM^{-1}& SM^{-1}V^{T}\\ U^{T}SM^{-1}& U^{T}SM^{-1}V^{T} \end {array}\right ]P^{T}\) is a Moore–Penrose inverse of A.
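As a closing sanity check (hypothetical values, continuing the tropical sketches above): a symmetric idempotent matrix is its own Moore–Penrose inverse, since all four conditions then collapse to E2 = E together with symmetry.

```python
E_sym = np.array([[0.0, -1.0],
                  [-1.0, -2.0]])         # Theorem 3.8 instance with u = v = -1
assert np.array_equal(mp_mul(E_sym, E_sym), E_sym)
print(penrose_conditions(E_sym, E_sym))  # all four conditions hold
```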