1 Introduction

In [10], Mary introduced a new generalized inverse defined via Green’s preorders and relations [6], called the inverse along an element, which unifies the classical generalized inverses. Later, Zhu et al. introduced the one-sided inverse along an element. Recently, characterizations and representations of the inverse along an element have been studied by many authors. For example, Mary [10] proved an interesting characterization of the inverse along an element in terms of group inverses in a semigroup, from which existence criteria for the classical generalized inverses follow directly. In addition, Mary and Patrício [12] used the ring structure to characterize the existence of this inverse by means of a unit in a ring. Also, Benítez and Boasso [1] obtained a new characterization of the inverse along an element via the invertibility of a certain element of a corner ring, which yields a full description of the set of all elements invertible along a fixed element. Zhu et al. [16] gave equivalent conditions for the existence of the one-sided inverse along an element. More results on the inverse along an element can be found in [2, 8, 9, 11, 17].

In this article, we mainly give new existence criteria for the (inner) inverse along an element in semigroups and rings. In Sect. 2, we recall the definitions of some generalized inverses and fix the related notation. In Sect. 3, several new characterizations of the inverse along an element are given. In Sect. 4, necessary and sufficient conditions for the existence of the inner inverse along an element are obtained; moreover, equivalent conditions for the existence of the group inverse and the Moore–Penrose inverse can be derived directly from this result. In Sect. 5, we give Cline’s formula for the inverse along an element, which recovers the well-known Cline’s formula for the Drazin inverse. Finally, in Sect. 6, we investigate the inverse of a product along an element and the reverse order law for this inverse. In addition, the commuting inverse along an element is characterized in a semigroup, which generalizes Theorems 7.1 and 7.3 in [1].

2 Preliminary Definitions and Notations

Throughout this paper, S is a semigroup and R is a ring with unity 1. \(S^{1}\) denotes the monoid obtained from S by adjoining an identity element if necessary (\(R^{1}=R\)). An element a of S is (von Neumann) regular if there exists \(x\in S\) such that \(axa=a\). Such an x is called an inner inverse of a and is denoted by \(a^{-}\). The set of all inner inverses of a is denoted by \(a\{1\}\). An involution \(*\): S \(\rightarrow \) S is an involutory anti-automorphism: \((a^{*})^{*}=a\) and \((ab)^{*}=b^{*}a^{*}\) for all \(a,b\in S\). In a ring, an involution \(*\): R \(\rightarrow \) R additionally satisfies \((a+b)^{*}=a^{*}+b^{*}\) for all \(a,b \in R\). We call S (resp. R) a \({*}\)-semigroup (resp. \({*}\)-ring) if there exists an involution on S (resp. R).
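The ring \(M_{2}(\mathbb {R})\) of real \(2\times 2\) matrices is a concrete model of these notions. The following sketch (our illustration, not part of the paper) checks numerically that the Moore–Penrose inverse of a singular matrix is one of its inner inverses:

```python
import numpy as np

# A singular matrix: not invertible, but still (von Neumann) regular.
a = np.array([[1.0, 1.0],
              [0.0, 0.0]])

# The Moore-Penrose inverse is one concrete choice of inner inverse a^-.
x = np.linalg.pinv(a)

# axa = a holds, so x lies in a{1}.
```

In \(M_{2}(\mathbb {R})\) every matrix is regular, so \(a\{1\}\) is never empty; over a general ring regularity can fail.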

Following Green [6], for \(a, b\in S\), Green’s preorders \(\le _{\mathcal {L}}\), \(\le _{\mathcal {R}}\), and \(\le _{\mathcal {H}}\) are defined by

$$\begin{aligned} a\le _{\mathcal {L}}b\Longleftrightarrow & {} S^{1}a \subset S^{1}b \Longleftrightarrow \exists \ x \in S^{1}, a=xb.\\ a\le _{\mathcal {R}}b\Longleftrightarrow & {} aS^{1} \subset bS^{1} \Longleftrightarrow \exists \ x \in S^{1}, a=bx.\\ a\le _{\mathcal {H}}b\Longleftrightarrow & {} a\le _{\mathcal {L}}b\ \mathrm {and}\ a\le _{\mathcal {R}}b. \end{aligned}$$

In addition, Green’s relations \(\mathcal {L}\), \(\mathcal {R}\), \(\mathcal {H}\) are defined by

$$\begin{aligned} a \mathcal {L} b\Longleftrightarrow & {} S^{1}a = S^{1}b \Longleftrightarrow \exists \ x, y \in S^{1}, a=xb\ \mathrm {and}\ b=ya.\\ a \mathcal {R} b\Longleftrightarrow & {} aS^{1}= bS^{1} \Longleftrightarrow \exists \ x, y \in S^{1}, a=bx\ \mathrm {and}\ b=ay.\\ a \mathcal {H} b\Longleftrightarrow & {} a \mathcal {L} b\ \mathrm {and}\ a \mathcal {R} b. \end{aligned}$$

These are equivalence relations on S. We denote the \(\mathcal {L}\)-class (resp. \(\mathcal {R}\)-class, \(\mathcal {H}\)-class) of a by \(\mathcal {L}_{a}\) (resp. \(\mathcal {R}_{a}\), \(\mathcal {H}_{a}\)).

For the readers’ convenience, we first recall the definitions of some generalized inverses. An element \(a\in S\) is said to be Moore–Penrose invertible with respect to the involution \(*\) if the equations

$$\begin{aligned} axa=a,\ \ xax=x,\ \ (ax)^{*}=ax\ \ \mathrm {and}\ \ (xa)^{*}=xa \end{aligned}$$

have a common solution [14]. Such a solution is unique if it exists, and is usually denoted by \(a^{\dagger }\). The set of all Moore–Penrose invertible elements of S will be denoted by \(S^{\dagger }\).

The Drazin inverse [4] of \(a\in S\) is the element \(x\in S\) which satisfies

$$\begin{aligned} a^{k}=a^{k+1}x, \ \ xax=x\ \ \mathrm {and} \ \ ax=xa, \ \ \text{ for } \text{ some } \ \ k\ge 1. \end{aligned}$$

The element x above is unique if it exists and is denoted by \(a^{D}\). The least such k is called the index of a and is denoted by \(\mathrm {ind}(a)\). In particular, when \(\mathrm {ind}(a)=1\), the Drazin inverse \(a^{D}\) is called the group inverse of a and is denoted by \(a^{\#}\). The set of all Drazin (resp. group) invertible elements of S will be denoted by \(S^{D}\) (resp. \(S^{\#}\)).
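The defining equations of the group inverse can be verified numerically. In the sketch below (our illustration, with real matrices standing in for semigroup elements), the group inverse is obtained from the known matrix representation \(a^{\#}=a(a^{3})^{+}a\), valid when a has index 1, where \((\cdot )^{+}\) denotes the Moore–Penrose inverse:

```python
import numpy as np

# An index-1 matrix: a^2 = 2a, so rank(a) = rank(a^2).
a = np.array([[2.0, 0.0],
              [1.0, 0.0]])

# Representation of the group inverse for index-1 matrices: a# = a (a^3)^+ a.
x = a @ np.linalg.pinv(a @ a @ a) @ a
```

One checks directly that x satisfies \(a=a^{2}x\), \(xax=x\), and \(ax=xa\), so \(\mathrm {ind}(a)=1\) and \(x=a^{\#}=a^{D}\).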

In [10], the element \(a\in S\) is said to be invertible along \(d\in S\) if there exists \(b\in S\) such that

$$\begin{aligned} bad=d=dab \ \ \mathrm {and} \ \ b\le _{\mathcal {H}}d. \end{aligned}$$

If such a b exists, then it is unique; it is called the inverse of a along d and is denoted by \(a^{\Vert d}\). This inverse generalizes the concept of an invertible element, as well as the classical generalized inverses such as the group inverse, the Drazin inverse, and the Moore–Penrose inverse. Moreover, if the inverse b of a along d satisfies \(aba=a\), we say that b is an inner inverse of a along d.

Finally, an element \(a\in S\) is left (resp. right) invertible along \(d\in S\) [16] if there exists \(b\in S\) such that

$$\begin{aligned} bad=d \ (\text{ resp. } \ dab=d)\ \ \mathrm {and} \ \ b\le _{\mathcal {L}}d \ ( \text{ resp. }\ b\le _{\mathcal {R}}d). \end{aligned}$$

Given \(a\in R\), the following notations will be used:

$$\begin{aligned} a^{\circ }=\{x\in R: ax=0\}\ \mathrm {and}\ ^{\circ }a=\{x\in R: xa=0\}. \end{aligned}$$

3 Existence of the Inverse Along an Element

In this section, we give several characterizations of the (one-sided) inverse along an element. In what follows, \(\mathbb {N}\) denotes the set of positive integers. For \(a\in R\), we denote by \(a_{l}^{-1}\) and \(a_{r}^{-1}\) a left inverse and a right inverse of a, respectively.

First, we give some useful lemmas.

Lemma 3.1

  1. (1)

    [7, Theorem 1] Let \(a\in S\). Then \(a\in S^{\#}\) if and only if \(a \mathcal {H} a^{2}\).

  2. (2)

    [12, Corollary 3.4] Let \(a\in R\). Then \(a\in R^{\#}\) if and only if 1 is invertible along a.

Lemma 3.2

  1. (1)

    [12, page 1132] Let \(a, d\in S\). If a is invertible along d, then d is regular.

  2. (2)

    [10, Lemma 3] Let \(a, d\in S\). If a is invertible along d, then \(a^{\Vert d}aa^{\Vert d}=a^{\Vert d}\).

Lemma 3.3

Let \(a, d\in S\). Then

  1. (1)

    [16, Theorem 2.3] a is left invertible along d if and only if \(d\le _{\mathcal {L}} dad\).

  2. (2)

    [16, Theorem 2.4] a is right invertible along d if and only if \(d\le _{\mathcal {R}}dad\).

  3. (3)

    [12, Theorem 2.2] a is invertible along d if and only if \(d\le _{\mathcal {H}} dad\).

  4. (4)

    a is invertible along d with inverse y if and only if a is right invertible along d with a right inverse x and a is left invertible along d with a left inverse z. In this case, \(y=x=z\).

Proof

(4) We only need to prove \(y=x=z\). Suppose a is invertible along d with inverse y; then \(yad=d\) and \(y\le _{\mathcal {L}} d\). From \(y\le _{\mathcal {L}} d\), it follows that there exists \(t_{1}\in S^{1}\) such that \(y=t_{1}d\). Since x is a right inverse of a along d, we get \(dax=d\) and \(x\le _{\mathcal {R}}d\), which implies \(x=dt_{2}\) for some \(t_{2}\in S^{1}\). Hence \(y=t_{1}d=t_{1}dax=yax\) and \(x=dt_{2}=yadt_{2}=yax\). So \(y=x\). Similarly, we have \(y=z\). \(\square \)

Lemma 3.4

Let \(a, d\in R\) with d regular. Then

  1. (1)

    [16, Corollary 3.3] a is left invertible along d if and only if \(u=da+1-dd^{-}\) is left invertible if and only if \(v=ad+1-d^{-}d\) is left invertible. In this case, \(u_{l}^{-1}d\) is a left inverse of a along d.

  2. (2)

    [16, Corollary 3.5] a is right invertible along d if and only if \(u=da+1-dd^{-}\) is right invertible if and only if \(v=ad+1-d^{-}d\) is right invertible. In this case, \(dv_{r}^{-1}\) is a right inverse of a along d.

  3. (3)

    [12, Theorem 3.2] a is invertible along d if and only if \(u=da+1-dd^{-}\) is invertible if and only if \(v=ad+1-d^{-}d\) is invertible. In this case, \(a^{\Vert d}=u^{-1}d=dv^{-1}\).
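Lemma 3.4(3) can be checked numerically in \(M_{2}(\mathbb {R})\). In this sketch (the matrices are our illustrative choice, not from the paper), d is idempotent, so d itself serves as an inner inverse \(d^{-}\):

```python
import numpy as np

a = np.array([[2.0, 1.0], [1.0, 1.0]])
d = np.array([[1.0, 0.0], [0.0, 0.0]])
d_inner = d                      # d d^- d = d holds since d^2 = d
I = np.eye(2)

u = d @ a + I - d @ d_inner      # u = da + 1 - dd^-
v = a @ d + I - d_inner @ d      # v = ad + 1 - d^-d

b1 = np.linalg.inv(u) @ d        # a^{||d} = u^{-1} d
b2 = d @ np.linalg.inv(v)        # a^{||d} = d v^{-1}
```

Both u and v are invertible here, and \(b_{1}=b_{2}\) satisfies \(b_{1}ad=d=dab_{1}\), as the lemma predicts.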

Lemma 3.5

[16, Theorem 2.16] Let S be a \(*\)-semigroup and let \(a\in S\). Then \(a\in S^{\dagger }\) if and only if a is left invertible along \(a^{*}\) if and only if a is right invertible along \(a^{*}\).

Lemma 3.6

[4, Theorem 1] Let \(a, d\in S\) be such that a is Drazin invertible. If \(da=ad\), then \(da^{D}=a^{D}d\).

Next, we present an existence criterion of the left inverse along an element and give the expression for this inverse in a semigroup.

Lemma 3.7

Let \(a, d\in S\) and \(k\in \mathbb {N}\). Then the following are equivalent:

  1. (1)

    a is left invertible along d.

  2. (2)

    \(d\le _{\mathcal {L}} (ad)^{k}\) and \((ad)^{k}\le _{\mathcal {L}} (ad)^{2k}\).

  3. (3)

    There exists \(n\in \mathbb {N}\) such that \(d\le _{\mathcal {L}} (ad)^{kn}\) and \((ad)^{k}\le _{\mathcal {L}} (ad)^{k(n+1)}\).

In this case, \(w(ad)^{k(n-1)}u(ad)^{k(n+1)-1}\) is a left inverse of a along d, where \(d=w(ad)^{kn}\) and \((ad)^{k}= u(ad)^{k(n+1)}\).

Proof

(1) \(\Rightarrow \) (2) Since a is left invertible along d, there exists b such that \(bad=d\) and \(b \le _{\mathcal {L}} d\). Note that \(b=td\) for some \(t\in S^{1}\). Then we obtain

$$\begin{aligned} \begin{array}{ccl} d&{}=&{}bad=tdad=tbadad=tb(ad)^{2}=ttd(ad)^{2}=t^{2}bad(ad)^{2}\\ &{}=&{}t^{2}b(ad)^{3}=\cdots =t^{k-1}b(ad)^{k}=t^{k}b(ad)^{k+1}, \end{array} \end{aligned}$$

which implies \(d\le _{\mathcal {L}} (ad)^{k}\) and \(ad=(at^{k}b)(ad)^{k+1}\). Therefore

$$\begin{aligned} (ad)^{k}=ad(ad)^{k-1}=(at^{k}b)(ad)^{k+1}(ad)^{k-1}=(at^{k}b)(ad)^{2k}, \end{aligned}$$

then \((ad)^{k}\le _{\mathcal {L}} (ad)^{2k}\).

(2) \(\Rightarrow \) (3) Take \(n=1\).

(3) \(\Rightarrow \) (1) From the conditions \(d\le _{\mathcal {L}} (ad)^{kn}\) and \((ad)^{k}\le _{\mathcal {L}} (ad)^{k(n+1)}\), it follows that \(d=w(ad)^{kn}\) and \((ad)^{k}= u(ad)^{k(n+1)}\) for suitable \(w,u \in S^{1}\). Let \(b=w(ad)^{k(n-1)}u(ad)^{k(n+1)-1}\), then we have

$$\begin{aligned} bad=w(ad)^{k(n-1)}u(ad)^{k(n+1)-1}ad=w(ad)^{kn}=d\ \mathrm {and}\ b \le _{\mathcal {L}} d, \end{aligned}$$

which yield that b is a left inverse of a along d. \(\square \)

Dually, we have

Lemma 3.8

Let \(a, d\in S\) and \(k\in \mathbb {N}\). Then the following are equivalent:

  1. (1)

    a is right invertible along d.

  2. (2)

    \(d\le _{\mathcal {R}} (da)^{k}\) and \((da)^{k}\le _{\mathcal {R}} (da)^{2k}\).

  3. (3)

    There exists \(n\in \mathbb {N}\) such that \(d\le _{\mathcal {R}} (da)^{kn}\) and \((da)^{k}\le _{\mathcal {R}} (da)^{k(n+1)}\). In this case, \((da)^{k(n+1)-1}y(da)^{k(n-1)}x\) is a right inverse of a along d, where \(d=(da)^{kn}x\) and \((da)^{k}= (da)^{k(n+1)}y\).

Applying Lemmas 3.7 and 3.8, we give a new characterization of the inverse along an element in terms of the Drazin inverse in a semigroup.

Theorem 3.9

Let \(a, d\in S\) and \(k\in \mathbb {N}\). Then the following are equivalent:

  1. (1)

    a is invertible along d.

  2. (2)

    \(d\le _{\mathcal {L}} (ad)^{k}\), \((ad)^{k}\le _{\mathcal {L}} (ad)^{2k}\) and \((da)^{k}\le _{\mathcal {R}} (da)^{2k}\).

  3. (3)

    \(d\le _{\mathcal {R}} (da)^{k}\), \((ad)^{k}\le _{\mathcal {L}} (ad)^{2k}\) and \((da)^{k}\le _{\mathcal {R}} (da)^{2k}\).

  4. (4)

    \(d\le _{\mathcal {L}} (ad)^{k}\) and \((ad)^{k} \in S^{\#}\).

  5. (5)

    \(d\le _{\mathcal {R}} (da)^{k}\) and \((da)^{k} \in S^{\#}\).

  6. (6)

    \(d\le _{\mathcal {L}} (ad)^{kn}\) and \((ad)^{k} \in S^{D}\), where \(n\ge \mathrm {ind}((ad)^{k})\).

  7. (7)

    \(d\le _{\mathcal {R}} (da)^{kn}\) and \((da)^{k} \in S^{D}\), where \(n\ge \mathrm {ind}((da)^{k})\). In this case,

    $$\begin{aligned} \begin{array}{ccl} a^{\Vert d}&{}=&{}d(ad)^{k-1}((ad)^{k})^{\#}=((da)^{k})^{\#}(da)^{k-1}d,\\ ((ad)^{k})^{\#}&{}=&{}a(a^{\Vert d}d^{-})^{k}a^{\Vert d}\ \mathrm {and}\ ((da)^{k})^{\#}=a^{\Vert d}(d^{-}a^{\Vert d})^{k}a. \end{array} \end{aligned}$$

Proof

(1) \(\Rightarrow \) (2) By Lemmas 3.3(4), 3.7 and 3.8.

(2) \(\Rightarrow \) (4) According to Lemma 3.1(1), we only need to prove \((ad)^{k} \le _{\mathcal {R}} (ad)^{2k}\). From the conditions \(d\le _{\mathcal {L}} (ad)^{k}\), \((ad)^{k}\le _{\mathcal {L}} (ad)^{2k}\), and \((da)^{k}\le _{\mathcal {R}} (da)^{2k}\), it follows that \(d=x(ad)^{k}\), \((ad)^{k}=y(ad)^{2k}\), and \((da)^{k}=(da)^{2k}z\), for suitable x, y, and z \(\in S^{1}\). Applying previous equalities, we obtain

$$\begin{aligned} \begin{array}{ccl} d&{}=&{}x(ad)^{k}=xy(ad)^{2k}=xya(da)^{k-1}(da)^{k}d=xya(da)^{k-1}(da)^{2k}zd\\ &{}=&{}xy(ad)^{2k}(ad)^{k-1}azd=x(ad)^{k}(ad)^{k-1}azd=d(ad)^{k-1}azd=(da)^{k}zd. \end{array} \end{aligned}$$

Then \(ad=a(da)^{k}zd=a(da)^{2k}z^{2}d=(ad)^{2k}az^{2}d,\) which implies

$$\begin{aligned} (ad)^{k}=(ad)^{k-1}ad=(ad)^{k-1}(ad)^{2k}az^{2}d=(ad)^{2k}(ad)^{k-1}az^{2}d. \end{aligned}$$

Thus \((ad)^{k} \le _{\mathcal {R}} (ad)^{2k}\).

(4) \(\Rightarrow \) (6) Since \(d\le _{\mathcal {L}} (ad)^{k}\), there exists \(u\in S^{1}\) such that \(d=u(ad)^{k}\). Since \((ad)^{k} \in S^{\#}\), we have \(d=u(ad)^{k}=u(((ad)^{k})^{\#})^{n-1}(ad)^{kn}\), which implies \(d\le _{\mathcal {L}} (ad)^{kn}\) for every \(n\ge 1\).

(6) \(\Rightarrow \) (1) Let \(b=d(ad)^{k-1}((ad)^{k})^{D}\). We next prove \(a^{\Vert d}=b\). Since \(d\le _{\mathcal {L}} (ad)^{kn}\), we get \(d=s(ad)^{kn}\) for some \(s\in S^{1}\). By Lemma 3.6, we have \(ad((ad)^{k})^{D}=((ad)^{k})^{D}ad\). From the assumption \(n\ge \mathrm {ind}((ad)^{k})\) and the definition of the Drazin inverse, it follows that

$$\begin{aligned} bad=d(ad)^{k-1}((ad)^{k})^{D}ad=s(ad)^{kn}(ad)^{k}((ad)^{k})^{D}=s(ad)^{kn}=d. \end{aligned}$$

Similarly, we can get \(dab=d\). In addition, note that \(b=d(ad)^{k-1}(((ad)^{k})^{D})^{2}(ad)^{k}\), which yields \(b \le _{\mathcal {H}} d\). Therefore, \(a^{\Vert d}=b\).

(1) \(\Rightarrow \) (3) \(\Rightarrow \) (5) \(\Rightarrow \) (7) \(\Rightarrow \) (1) It is analogous to the previous proof. Also, we can get \(a^{\Vert d}=((da)^{k})^{D}(da)^{k-1}d\).

Next, we give the expressions of \(((ad)^{k})^{\#}\) and \(((da)^{k})^{\#}\). Since a is invertible along d, d is regular and \(a^{\Vert d}aa^{\Vert d}=a^{\Vert d}\) by Lemma 3.2. Note that \(a^{\Vert d}ad=d\) and \(a^{\Vert d}=(a^{\Vert d}d^{-})d\) by the definition of \(a^{\Vert d}\). Using the same method as in the proof of (1) \(\Rightarrow \) (2) in Lemma 3.7, we get \(d=(a^{\Vert d}d^{-})^{k-1}a^{\Vert d}(ad)^{k}\). Then we have

$$\begin{aligned} ((ad)^{k})^{\#}= & {} (ad)^{k}\big (\big ((ad)^{k}\big )^{\#}\big )^{2}=a\big (d(ad)^{k-1}((ad)^{k})^{\#}\big )\big ((ad)^{k}\big )^{\#}\\= & {} aa^{\Vert d}\big ((ad)^{k}\big )^{\#}=aa^{\Vert d}d^{-}d\big ((ad)^{k}\big )^{\#}\\= & {} aa^{\Vert d}d^{-}\big (a^{\Vert d}d^{-}\big )^{k-1}a^{\Vert d}(ad)^{k}\big ((ad)^{k}\big )^{\#}\\= & {} aa^{\Vert d}d^{-}\big (a^{\Vert d}d^{-}\big )^{k-1}a^{\Vert d}a\big (d(ad)^{k-1}((ad)^{k})^{\#}\big )\\= & {} a\big (a^{\Vert d}d^{-}\big )^{k}a^{\Vert d}aa^{\Vert d}\\= & {} a\big (a^{\Vert d}d^{-}\big )^{k}a^{\Vert d}. \end{aligned}$$

Similarly, we can obtain \(((da)^{k})^{\#}=a^{\Vert d}(d^{-}a^{\Vert d})^{k}a\). \(\square \)

Letting \(k=1\) in Theorem 3.9, we obtain

Corollary 3.10

Let \(a, d\in S\). Then the following are equivalent:

  1. (1)

    a is invertible along d.

  2. (2)

    \(d\le _{\mathcal {L}} ad\), \(ad\le _{\mathcal {L}} (ad)^{2}\) and \(da\le _{\mathcal {R}} (da)^{2}\).

  3. (3)

    \(d\le _{\mathcal {R}} da\), \(ad\le _{\mathcal {L}} (ad)^{2}\) and \(da\le _{\mathcal {R}} (da)^{2}\).

  4. (4)

    \(d\le _{\mathcal {L}} ad\) and \(ad \in S^{\#}\).

  5. (5)

    \(d\le _{\mathcal {R}} da\) and \(da \in S^{\#}\).

  6. (6)

    \(d\le _{\mathcal {L}} (ad)^{n}\) and \(ad \in S^{D}\), where \(n\ge \mathrm {ind}(ad)\).

  7. (7)

    \(d\le _{\mathcal {R}} (da)^{n}\) and \(da \in S^{D}\), where \(n\ge \mathrm {ind}(da)\). In this case,

    $$\begin{aligned} a^{\Vert d}= & {} d(ad)^{\#}=(da)^{\#}d,\\ (ad)^{\#}= & {} aa^{\Vert d}d^{-}a^{\Vert d}\ \mathrm {and}\ (da)^{\#}=a^{\Vert d}d^{-}a^{\Vert d}a. \end{aligned}$$
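The formulas \(a^{\Vert d}=d(ad)^{\#}=(da)^{\#}d\) can be tested numerically. In the sketch below (our illustration), the group inverses are computed via the known representation \(x^{\#}=x(x^{3})^{+}x\) for index-1 real matrices:

```python
import numpy as np

def sharp(x):
    """Group inverse of an index-1 real matrix via x# = x (x^3)^+ x."""
    return x @ np.linalg.pinv(x @ x @ x) @ x

a = np.array([[2.0, 1.0], [1.0, 1.0]])
d = np.array([[1.0, 0.0], [0.0, 0.0]])   # both ad and da have index 1 here

b1 = d @ sharp(a @ d)    # a^{||d} = d (ad)^#
b2 = sharp(d @ a) @ d    # a^{||d} = (da)^# d
```

The two expressions agree and satisfy \(b\,a\,d=d=d\,a\,b\), so b is indeed \(a^{\Vert d}\) for this pair.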

In order to obtain the equivalent conditions for the existence of the inverse along an element by using one-sided annihilator ideals in a ring, we give the following lemmas.

Lemma 3.11

Let \(a, d\in R\) and \(k\in \mathbb {N}\). Then the following are equivalent:

  1. (1)

    a is left invertible along d.

  2. (2)

    \(R=R(da)^{k}+^{\circ }d\).

Proof

(1) \(\Rightarrow \) (2) Since a is left invertible along d, by Lemma 3.3(1), we have \(d=xdad\) for some \(x \in R\), which implies \(d=x(xdad)ad=x^{2}(da)^{2}d=\cdots =x^{k}(da)^{k}d\). Thus \((1-x^{k}(da)^{k})d=0\), which yields \(1-x^{k}(da)^{k}\in \) \(^{\circ }d\). From the equality \(1=x^{k}(da)^{k}+(1-x^{k}(da)^{k})\), it follows that \(R=R(da)^{k}\)+\(^{\circ }d\).

(2) \(\Rightarrow \) (1) Suppose \(R=R(da)^{k}\)+\(^{\circ }d\); then \(1=u(da)^{k}+v\), where \(u\in R\) and \(v \in \) \(^{\circ }d\). Multiplying this equality on the right by d, we obtain \(d=u(da)^{k}d=u(da)^{k-1}dad\). Applying Lemma 3.3(1) again, we get that a is left invertible along d.

\(\square \)

Remark 3.12

In Lemma 3.11, “\(+\)” cannot be replaced by “\(\oplus \)” in general. For example, denote by \(R=CFM_{\mathbb {N}}(\mathbb {R})\) the ring of column-finite \(\mathbb {N}\times \mathbb {N}\) matrices over the field \(\mathbb {R}\) of real numbers. Take \(k=1\), \(a=\sum \nolimits _{i=1}^{\infty }e_{i+1,i}\) and \(d=\sum \nolimits _{i=2}^{\infty }e_{i,i}\), where \(e_{i,j}\) denotes the element of R whose (i, j)-entry is 1 and whose other entries are 0. By an elementary calculation, we have \(dad=\sum \nolimits _{i=2}^{\infty }e_{i+1,i}\) and \(d=(\sum \nolimits _{i=2}^{\infty }e_{i,i+1})dad \in Rdad\). Thus, a is left invertible along d by Lemma 3.3(1). However, observe that \(e_{11}=e_{12}da\) and \(e_{11}d=0\), which imply \(Rda\cap ^{\circ }d \ne \{0\}\).

Dually, we get

Lemma 3.13

Let \(a, d\in R\) and \(k\in \mathbb {N}\). Then the following are equivalent:

  1. (1)

    a is right invertible along d.

  2. (2)

    \(R=(ad)^{k}R+d^{\circ }\).

Remark 3.14

In Lemmas 3.11 and 3.13, setting \(d=a^{*}\), we obtain by Lemma 3.5 that \(a\in R^{\dagger }\) if and only if \(R=R(a^{*}a)^{k}+^{\circ }(a^{*})\) if and only if \(R=(aa^{*})^{k}R+(a^{*})^{\circ }\), where \(k\in \mathbb {N}\).

According to Lemmas 3.11 and 3.13, the following result can be derived.

Theorem 3.15

Let \(a, d\in R\) and \(k\in \mathbb {N}\). Then the following are equivalent:

  1. (1)

    a is invertible along d.

  2. (2)

    \(R=R(da)^{k}\oplus ^{\circ }d\) and \(R=(ad)^{k}R\oplus d^{\circ }\).

  3. (3)

    \(R=R(da)^{k}+^{\circ }d\) and \(R=(ad)^{k}R+d^{\circ }\).

Proof

(1) \(\Rightarrow \) (2) Suppose a is invertible along d. By Lemma 3.11, we have \(R=R(da)^{k}+^{\circ }d\). Let \(y\in R(da)^{k}\cap ^{\circ }d\); then \(y=r(da)^{k}\) for some \(r\in R\) and \(yd=0\). Note that \(d=dads\) for some \(s\in R\) by Lemma 3.3(3), so \(d=d(ad)^{k}s^{k}\). Then we have

$$\begin{aligned} y=rda(da)^{k-1}=rd(ad)^{k}s^{k}a(da)^{k-1}=r(da)^{k}ds^{k}a(da)^{k-1}=yds^{k}a(da)^{k-1}=0, \end{aligned}$$

which gives \(R=R(da)^{k}\oplus ^{\circ }d\). Similarly, \(R=(ad)^{k}R\oplus d^{\circ }\) holds.

(2) \(\Rightarrow \) (3) It is obvious.

(3) \(\Rightarrow \) (1) By Lemmas 3.3(4), 3.11 and 3.13. \(\square \)

Remark 3.16

In Theorem 3.15, \(R=R(da)^{k}\oplus ^{\circ }d\) is not equivalent to \(R=(ad)^{k}R\oplus d^{\circ }\) in general. For example, let R be the infinite matrix ring of Remark 3.12. Take \(k=1\), \(a=\sum \nolimits _{i=1}^{\infty }e_{i+1,i}\) and \(d=1\in R\). Note that \((\sum \nolimits _{i=1}^{\infty }e_{i,i+1})a=1\) and \(^{\circ }d=\{0\}\). Then \(R=Rda\oplus ^{\circ }d\). However, there is no \(r\in R\) such that \((ad)r=1\). Thus \(R\ne adR\oplus d^{\circ }\).

It is well known that the inverse along an element recovers the group inverse, so we can obtain the following existence criterion for group inverse by Theorem 3.15.

Theorem 3.17

Let \(a\in R\) and \(k\in \mathbb {N}\). Then the following are equivalent:

  1. (1)

    \(a \in R^{\#}\).

  2. (2)

    \(R=Ra^{k}\oplus ^{\circ }a\).

  3. (3)

    \(R=a^{k}R\oplus a^{\circ }\).

  4. (4)

    \(R=Ra^{k}+^{\circ }a\) and \(R=a^{k}R+a^{\circ }\).

Proof

(1) \(\Leftrightarrow \) (4) and (1) \(\Rightarrow \) (2) Applying Theorem 3.15 and Lemma 3.1(2).

(2) \(\Rightarrow \) (3) Suppose \(R=Ra^{k}\oplus ^{\circ }a\), then \(1=r_{1}a^{k}+r_{2}\), where \(r_{1}\in R\) and \(r_{2}\in \) \(^{\circ }a\). Hence \(a=r_{1}a^{k+1}\), which implies \((a-ar_{1}a^{k})a=0\), so \(a-ar_{1}a^{k}\in \) \(^{\circ }a\). Note that \(a-ar_{1}a^{k}=r_{1}a^{k+1}-ar_{1}a^{k}\in Ra^{k}\). Thus \(a-ar_{1}a^{k}=0\), i.e., \(a=ar_{1}a^{k}\).

Next we prove \(^{\circ }a\) \(\subseteq \) \(^{\circ }(r_{1}a^{k})\). Let \(x\in \) \(^{\circ }a\); then \(xa=0\). Since \(a=r_{1}a^{k+1}\), this gives \(xr_{1}a^{k+1}=xa=0\), and hence \(xr_{1}a^{k}\) \(\in \) \(^{\circ }a\). Observe that \(xr_{1}a^{k}\) \(\in Ra^{k}\); thus \(xr_{1}a^{k}=0\). Hence \(^{\circ }a\) \(\subseteq \) \(^{\circ }(r_{1}a^{k})\).

From the equality \((a-a^{2}r_{1}a^{k-1})a=0\), it follows that \((a-a^{2}r_{1}a^{k-1})\in \) \(^{\circ }a\) \(\subseteq \) \(^{\circ }(r_{1}a^{k})\). Then \((a-a^{2}r_{1}a^{k-1})(r_{1}a^{k})=0\), i.e., \(a=ar_{1}a^{k}=a^{2}r_{1}a^{k-1}r_{1}a^{k}\). Let \(t=r_{1}a^{k-1}r_{1}a^{k}\), then \(a=a^{2}t\), which yields \(a=aa^{2}tt=a^{3}t^{2}=\cdots =a^{k+1}t^{k}\), i.e., \(a(1-a^{k}t^{k})=0\). Therefore, we have \(1-a^{k}t^{k}\in a^{\circ }\), which implies \(R=a^{k}R+a^{\circ }\).

To show \(a^{k}R\cap a^{\circ }=\{0\}\), let \(h\in a^{k}R\cap a^{\circ }\); then \(h=a^{k}r_{3}\) for some \(r_{3}\in R\) and \(ah=0\). We deduce that \(h=a^{k-1}ar_{3}=a^{k-1}r_{1}a^{k+1}r_{3}=a^{k-1}r_{1}ah=0.\) Hence, \(R=a^{k}R\oplus a^{\circ }\).

(3) \(\Rightarrow \) (4) Assume \(R=a^{k}R\oplus a^{\circ }\). By an argument similar to the proof of (2) \(\Rightarrow \) (3), we can prove \(R=Ra^{k}\oplus ^{\circ }a\). Therefore, (4) holds. \(\square \)
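For \(R=M_{n}(\mathbb {R})\), Theorem 3.17 can be probed computationally: the decompositions hold precisely for matrices of index at most 1, i.e., when \(\mathrm {rank}(a)=\mathrm {rank}(a^{2})\). This rank reformulation and the representation \(a^{\#}=a(a^{3})^{+}a\) used below are standard matrix facts, not statements from the paper:

```python
import numpy as np

def group_inverse(a):
    """Return a# for a real matrix of index <= 1, or None when it does not exist.
    Existence test: rank(a) == rank(a^2); formula: a# = a (a^3)^+ a."""
    if np.linalg.matrix_rank(a) != np.linalg.matrix_rank(a @ a):
        return None
    return a @ np.linalg.pinv(a @ a @ a) @ a

a = np.array([[2.0, 0.0], [1.0, 0.0]])   # rank(a) = rank(a^2) = 1: group invertible
n = np.array([[0.0, 1.0], [0.0, 0.0]])   # nonzero nilpotent: no group inverse

x = group_inverse(a)
```

For the nilpotent matrix n the rank drops from \(n\) to \(n^{2}\), so `group_inverse` correctly reports that \(n\notin R^{\#}\).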

Before proving our main result, we need the following proposition.

Proposition 3.18

Let \(a, d, p, p^{\prime }, q, q^{\prime }\in S\) and \(k\in \mathbb {N}\). If \(p^{\prime }pd=d=dqq^{\prime }\), then the following are equivalent:

  1. (1)

    a is left (resp. right) invertible along pdq.

  2. (2)

    \((qapd)^{k-1}qap\) is left (resp. right) invertible along d.

Proof

(1) \(\Rightarrow \) (2) Suppose a is left invertible along pdq; then there exists \(t\in S^{1}\) such that \(pdq=tpdqapdq\) by Lemma 3.3(1). Multiplying this equality on the left by \(p^{\prime }\) and on the right by \(q^{\prime }\), we have \(p^{\prime }pdqq^{\prime }=p^{\prime }tpdqapdqq^{\prime }\). Since \(p^{\prime }pd=d=dqq^{\prime }\), we get

$$\begin{aligned} d= & {} p^{\prime }tpdqapd=p^{\prime }tpp^{\prime }tpdqapdqapd=(p^{\prime }tp)^{2}d(qapd)^{2}=\cdots \\= & {} (p^{\prime }tp)^{k}d(qapd)^{k}, \end{aligned}$$

which implies \(d\le _{\mathcal {L}} d((qapd)^{k-1}qap)d\). By Lemma 3.3(1) again, we have that \((qapd)^{k-1}qap\) is left invertible along d.

(2) \(\Rightarrow \) (1) Since \((qapd)^{k-1}qap \) is left invertible along d, using Lemma 3.3(1), we have \(d=rd(qapd)^{k}\) for some \(r\in S^{1}\), which yields

$$\begin{aligned} pdq= & {} prd(qapd)^{k}q=prd(qapd)^{k-1}(qapd)q=pr(dqap)^{k-1}d(qapd)q\\= & {} pr(dqap)^{k-1}p^{\prime }(pdq)a(pdq). \end{aligned}$$

Thus, \(pdq \le _{\mathcal {L}} (pdq)a(pdq)\), i.e., a is left invertible along pdq.

The proof of the “right” case is similar to that of the “left” case. \(\square \)

Next, we present an existence criterion for the left inverse along the product by means of the idempotent and the one-sided invertibility of an element in a ring.

Theorem 3.19

Let \(a, d, p, p^{\prime }, q, q^{\prime }\in R\) with d regular and \(k\in \mathbb {N}\). If \(p^{\prime }pd=d=dqq^{\prime }\), then the following are equivalent:

  1. (1)

    a is left invertible along pdq.

  2. (2)

    There exists an idempotent \(e\in R\) such that \(ed=0\) and \(u=(dqap)^{k}+e\) is left invertible.

  3. (3)

    There exists \(g \in R\) such that \(gd=0\) and \(v=(dqap)^{k}+g\) is left invertible. In this case, \(pv_{l}^{-1}(dqap)^{k-1}dq\) is a left inverse of a along pdq.

Proof

(1) \(\Rightarrow \) (2) Suppose a is left invertible along pdq; then \((qapd)^{k-1}qap\) is left invertible along d by Proposition 3.18. According to Lemma 3.4(1), \((dqap)^{k}+1-dd^{-}\) is left invertible. Let \(e=1-dd^{-}\); then \(e^{2}=e\), \(ed=0\), and \(u=(dqap)^{k}+e\) is left invertible.

(2) \(\Rightarrow \) (3) It is obvious.

(3) \(\Rightarrow \) (1) Suppose that \(v=(dqap)^{k}+g\) is left invertible. Then there exists \(s\in R\) such that \(s((dqap)^{k}+g)=1\), which, since \(gd=0\), implies \(d=s((dqap)^{k}+g)d=s(dqap)^{k}d.\) Let \(x=ps(dqap)^{k-1}dq\); then \(xa(pdq)=ps(dqap)^{k-1}dqapdq=ps(dqap)^{k}dq=pdq\). Note that \(x=ps(dqap)^{k-1}p^{\prime }(pdq)\), which gives \(x\le _{\mathcal {L}} pdq\). Thus, x is a left inverse of a along pdq. \(\square \)
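A numerical sketch of Theorem 3.19 with \(k=1\) (all matrices below are our illustrative choices; p is its own inverse, so \(p^{\prime }=p\), and \(q^{\prime }=q^{-1}\), so the hypothesis \(p^{\prime }pd=d=dqq^{\prime }\) holds):

```python
import numpy as np

a = np.array([[2.0, 1.0], [1.0, 1.0]])
d = np.array([[1.0, 0.0], [0.0, 0.0]])
p = np.array([[0.0, 1.0], [1.0, 0.0]])   # p^2 = 1, so p' = p
q = np.array([[1.0, 0.0], [1.0, 1.0]])   # invertible, so q' = q^{-1}

g = np.array([[0.0, 0.0], [0.0, 1.0]])   # g d = 0
v = d @ q @ a @ p + g                    # v = (dqap)^k + g with k = 1; invertible here

x = p @ np.linalg.inv(v) @ d @ q         # claimed left inverse of a along pdq
pdq = p @ d @ q
```

One verifies \(xa(pdq)=pdq\), i.e., x is a left inverse of a along pdq, as the theorem asserts.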

Letting \(k=1\) and \(p=q=p^{\prime }=q^{\prime }=1\) in Theorem 3.19, we get

Corollary 3.20

Let \(a, d\in R\) with d regular. Then the following are equivalent:

  1. (1)

    a is left invertible along d.

  2. (2)

    There exists an idempotent \(e\in R\) such that \(ed=0\) and \(u=da+e\) is left invertible.

  3. (3)

    There exists \(g \in R\) such that \(gd=0\) and \(v=da+g\) is left invertible.

In this case, \(v_{l}^{-1}d\) is a left inverse of a along d.

Dually, we have the following result.

Theorem 3.21

Let \(a, d, p, p^{\prime }, q, q^{\prime }\in R\) with d regular and \(k\in \mathbb {N}\). If \(p^{\prime }pd=d=dqq^{\prime }\), then the following are equivalent:

  1. (1)

    a is right invertible along pdq.

  2. (2)

    There exists an idempotent \(f\in R\) such that \(df=0\) and \(u=(qapd)^{k}+f\) is right invertible.

  3. (3)

    There exists \(h \in R\) such that \(dh=0\) and \(v=(qapd)^{k}+h\) is right invertible.

In this case, \(pd(qapd)^{k-1}v_{r}^{-1}q\) is a right inverse of a along pdq.

Corollary 3.22

Let \(a, d\in R\) with d regular. Then the following are equivalent:

  1. (1)

    a is right invertible along d.

  2. (2)

    There exists an idempotent \(f\in R\) such that \(df=0\) and \(u=ad+f\) is right invertible.

  3. (3)

    There exists \(h \in R\) such that \(dh=0\) and \(v=ad+h\) is right invertible.

    In this case, \(dv_{r}^{-1}\) is a right inverse of a along d.

Remark 3.23

Note that the proofs of (3) \(\Rightarrow \) (1) in Theorems 3.19 and 3.21 do not require d to be regular.

In a \(*\)-ring, it is well known that if \(a=aa^{*}y=xa^{*}a\), then \(a^{\dagger }=y^{*}ax^{*}\). By Lemma 3.3, we can easily see that \(a\in R\) is left (resp. right) invertible along \(a^{*}\) if and only if \(a^{*}\) is right (resp. left) invertible along a. Thus, by Lemma 3.5, \(a\in R^{\dagger }\) if and only if \(a^{*}\) is left invertible along a if and only if \(a^{*}\) is right invertible along a. Applying Corollaries 3.20 and 3.22, we deduce the following.

Corollary 3.24

Let R be a \(*\)-ring and let \(a\in R\). Then the following are equivalent:

  1. (1)

    \(a\in R^{\dagger }\).

  2. (2)

    There exists an idempotent \(e\in R\) such that \(ea=0\) and \(u=aa^{*}+e\) is left invertible.

  3. (3)

    There exists an idempotent \(f\in R\) such that \(af=0\) and \(v=a^{*}a+f\) is right invertible.

In this case, \(a^{\dagger }=(u_{l}^{-1}a)^{*}=(av_{r}^{-1})^{*}\).

Proof

We only need to prove the expressions of \(a^{\dagger }\). Since \(a=u_{l}^{-1}(aa^{*}+e)a=(u_{l}^{-1}a)a^{*}a\) and \(a=a(a^{*}a+f)v_{r}^{-1}=aa^{*}(av_{r}^{-1})\), we have

$$\begin{aligned} a^{\dagger }= & {} (av_{r}^{-1})^{*}a(u_{l}^{-1}a)^{*}=(av_{r}^{-1})^{*}aa^{*}(u_{l}^{-1})^{*}= (aa^{*}av_{r}^{-1})^{*}(u_{l}^{-1})^{*}\\= & {} a^{*}(u_{l}^{-1})^{*}=(u_{l}^{-1}a)^{*}. \end{aligned}$$

Similarly, we can obtain \(a^{\dagger }=(av_{r}^{-1})^{*}\). \(\square \)
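Corollary 3.24 can be illustrated for real matrices, where the involution is transposition. The choice \(e=1-aa^{\dagger }\) below is ours (any idempotent with \(ea=0\) making u left invertible would do):

```python
import numpy as np

a = np.array([[1.0, 1.0], [0.0, 0.0]])
I = np.eye(2)

e = I - a @ np.linalg.pinv(a)      # idempotent with ea = 0
u = a @ a.T + e                    # u = aa^* + e; invertible for this choice

a_dag = (np.linalg.inv(u) @ a).T   # corollary: a^dagger = (u^{-1} a)^*
```

The result coincides with the Moore–Penrose inverse computed directly by `np.linalg.pinv`.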

Combining Theorems 3.19 and 3.21, we get the following result, which is a new characterization of the inverse along an element on the basis of idempotents and invertibility of certain elements in a ring.

Theorem 3.25

Let \(a, d, p, p^{\prime }, q, q^{\prime }\in R\) and \(k\in \mathbb {N}\). If \(p^{\prime }pd=d=dqq^{\prime }\), then the following are equivalent:

  1. (1)

    a is invertible along pdq.

  2. (2)

    There exist idempotents \(e, f\in R\) such that \(ed=df=0\), \(u=(dqap)^{k}+e\) and \(v=(qapd)^{k}+f\) are both invertible.

  3. (3)

    There exist \(b, c\in R\) such that \(bd=dc=0\), \(s=(dqap)^{k}+b\) and \(t=(qapd)^{k}+c\) are both invertible.

  4. (4)

    There exist idempotents \(e, f\in R\) such that \(ed=df=0\), \(m=(dqap)^{k}+e\) is left invertible and \(n=(qapd)^{k}+f\) is right invertible.

  5. (5)

    There exist \(g, h \in R\) such that \(gd=dh=0\), \(w=(dqap)^{k}+g\) is left invertible and \(z=(qapd)^{k}+h\) is right invertible.

    In this case, \(a^{\Vert pdq}=pw_{l}^{-1}(dqap)^{k-1}dq=pd(qapd)^{k-1}z_{r}^{-1}q\).

Proof

(1) \(\Rightarrow \) (2) Suppose that a is invertible along pdq. According to Proposition 3.18, \((qapd)^{k-1}qap\) is invertible along d, which implies that d is regular by Lemma 3.2(1). Applying Lemma 3.4(3), we can verify that \((dqap)^{k}+1-dd^{-}\) and \((qapd)^{k}+1-d^{-}d\) are both invertible. Let \(e=1-dd^{-}\) and \(f=1-d^{-}d\). Then item (2) holds.

(2) \(\Rightarrow \) (3) \(\Rightarrow \) (5) and (2) \(\Rightarrow \) (4) \(\Rightarrow \) (5) It is obvious.

(5) \(\Rightarrow \) (1) Applying Theorems 3.19 and 3.21.

Finally, the expressions of \(a^{\Vert pdq}\) can be obtained by Theorems 3.19, 3.21, and Lemma 3.3(4). \(\square \)

Letting \(k=1\) and \(p=q=p^{\prime }=q^{\prime }=1\) in Theorem 3.25, we have

Corollary 3.26

Let \(a, d\in R\). Then the following are equivalent:

  1. (1)

    a is invertible along d.

  2. (2)

    There exist idempotents \(e, f\in R\) such that \(ed=df=0\), \(u=da+e\), and \(v=ad+f\) are both invertible.

  3. (3)

    There exist \(b, c\in R\) such that \(bd=dc=0\), \(s=da+b\) and \(t=ad+c\) are both invertible.

  4. (4)

    There exist idempotents \(e, f\in R\) such that \(ed=df=0\), \(m=da+e\) is left invertible and \(n=ad+f\) is right invertible.

  5. (5)

    There exist \(g, h \in R\) such that \(gd=dh=0\), \(w=da+g\) is left invertible and \(z=ad+h\) is right invertible. In this case, \(a^{\Vert d}=w_{l}^{-1}d=dz_{r}^{-1}\).

4 Existence of the Inner Inverse Along an Element

Mary [10], as well as Benítez and Boasso [1], studied equivalent conditions for the existence of the inner inverse along an element. In this section, we continue to characterize this inverse by means of idempotents, one-sided principal ideals, and one-sided annihilator ideals.

First, let us recall the concept of the trace product [13]: for \(a,b\in S\), we say that ab is a trace product if \(ab\in \mathcal {R}_{a}\cap \mathcal {L}_{b}\).

The following lemmas will be very useful.

Lemma 4.1

[10, Theorem 11] Let \(a\in S\). (S is a \(*\)-semigroup in (3).) Then

  1. (1)

    \(a^{\#}=a^{\Vert a}\).

  2. (2)

    \(a^{D}=a^{\Vert a^{m}}\) for some integer m.

  3. (3)

    \(a^{\dagger }=a^{\Vert a^{*}}\).

Note that, according to Lemma 4.1 and the definition of the inner inverse along an element, we have that \(a^{\#}\) (resp. \(a^{\dagger }\)) is equal to the inner inverse of a along a (resp. \(a^{*}\)).

Lemma 4.2

[10, Corollary 9] Let \(a,d \in S\). Then a is inner invertible along d if and only if ad and da are trace products.

Remark 4.3

From Lemma 4.2, we can see that a is inner invertible along d if and only if d is inner invertible along a. But in general, the invertibility of a along d is not equivalent to the invertibility of d along a. For example, let \(S=\mathbb {Z}_{4}\), \(a=\overline{2}\) and \(d=\overline{0}\). Then \(a^{\Vert d}=\overline{0}\). However, d is not invertible along a.
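The asymmetry in Remark 4.3 can be confirmed by brute force over \(\mathbb {Z}_{4}\): x is the inverse of a along d precisely when \(xad=d=dax\) and \(x\le _{\mathcal {H}}d\), and since \(1\in \mathbb {Z}_{4}\) the condition \(x\le _{\mathcal {H}}d\) reduces to \(x\in d\mathbb {Z}_{4}\cap \mathbb {Z}_{4}d\). The helper name `inverses_along` is ours.

```python
# Brute-force check of Remark 4.3 in the multiplicative semigroup Z_4.
Z4 = range(4)

def inverses_along(a, d):
    """All x in Z_4 with x*a*d == d == d*a*x (mod 4) and x <=_H d."""
    hits = []
    for x in Z4:
        # x <=_H d: x lies in d*Z_4 and in Z_4*d (1 is in Z_4, so S^1 = S)
        below_d = any(x == (d * u) % 4 for u in Z4) and any(x == (v * d) % 4 for v in Z4)
        if below_d and (x * a * d) % 4 == d == (d * a * x) % 4:
            hits.append(x)
    return hits

a, d = 2, 0
assert inverses_along(a, d) == [0]   # a^{||d} = 0-bar exists
assert inverses_along(d, a) == []    # but d is not invertible along a
```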

Lemma 4.4

[15, Lemma 2.9] If \(q_{1}\) and \(q_{2}\) are idempotents such that \(Rq_{1}\subseteq Rq_{2}\) and \(q_{2}R \subseteq q_{1}R\), then \(q_{1}=q_{2}\).

Lemma 4.5

[15, Lemma 2.5, Lemma 2.6] Let \(a, b\in R\).

  1. (1)

    If \(aR\subseteq bR\), then \(^{\circ }b\) \(\subseteq \) \(^{\circ }a\).

  2. (2)

    If b is regular and \(^{\circ }b\) \(\subseteq \) \(^{\circ }a\), then \(aR\subseteq bR\).

  3. (3)

    If \(Ra\subseteq Rb\), then \(b^{\circ }\subseteq a^{\circ }\).

  4. (4)

    If b is regular and \(b^{\circ }\subseteq a^{\circ }\), then \(Ra\subseteq Rb\).

Lemma 4.6

Let \(a, b\in R\).

  1. (1)

    If a is regular and aR=bR, then b is regular.

  2. (2)

    If a is regular and Ra=Rb, then b is regular.

Proof

(1) Since \(aR=bR\), we have \(a=bt_{1}\) and \(b=at_{2}\) for some \(t_{1}, t_{2}\in R\). Thus, \(b(t_{1}a^{-})b=aa^{-}at_{2}=at_{2}=b,\) which implies that b is regular.

(2) The proof is similar to that of (1). \(\square \)

Next, we present the main result in this section, which recovers Theorems 2.11 and 2.12 in [15].

Theorem 4.7

Let \(a, d\in R\). Then the following are equivalent:

  1. (1)

    a is inner invertible along d.

  2. (2)

    There exist idempotents \(e_{1}\), \(e_{2}\) \(\in R\) such that \(dR=e_{1}R\), \(Ra=Re_{1}\), \(Rd=Re_{2}\), and \(aR=e_{2}R\).

  3. (3)

    a and d are both regular, and there exist idempotents \(e_{1}\), \(e_{2}\) \(\in R\) such that \(^{\circ }d=^{\circ }e_{1}\), \(a^{\circ }=e_{1}^{\circ }\), \(d^{\circ }=e_{2}^{\circ }\), and \(^{\circ }a=^{\circ }e_{2}\).

    In this case, the pair of idempotents \(e_{1}\) and \(e_{2}\) is unique. Moreover, \(a^{\Vert d}=e_{1}a^{-}e_{2}\) for one and hence all choices of \(a^{-}\in a\{1\}\).

Proof

(1) \(\Rightarrow \) (2) Suppose that a is inner invertible along d with inverse x. Then we have

$$\begin{aligned} axa=a,\ \ xad=d=dax\ \ \text{ and }\ \ x\le _{\mathcal {H}}d, \end{aligned}$$

which imply \(x=u_{1}d=du_{2}\) for some \(u_{1}, u_{2} \in R\). Let \(e_{1}=xa\) and \(e_{2}=ax\). Then \(e_{1}=e_{1}^{2}\) and \(e_{2}=e_{2}^{2}\). From \(d=e_{1}d\) and \(e_{1}=du_{2}a\), it follows that \(dR=e_{1}R\). Also, \(a=ae_{1}\) and \(e_{1}=xa\) give \(Ra=Re_{1}\). Similarly, we can obtain \(Rd=Re_{2}\) and \(aR=e_{2}R\).

(2) \(\Rightarrow \) (1) Suppose that (2) holds. Then a and d are regular by Lemma 4.6, and we have

$$\begin{aligned} d=e_{1}d=de_{2},\ \ a=ae_{1}=e_{2}a,\ \ e_{1}=dt_{1}=t_{2}a\ \ \text{ and }\ \ e_{2}=t_{3}d=at_{4}, \end{aligned}$$

for \(t_{i}\in R\) (\(i=\overline{1,4}\)). According to the previous equalities, \(e_{1}=t_{2}aa^{-}a=e_{1}a^{-}a\). Similarly, we can obtain \(e_{2}=aa^{-}e_{2}\).

Let \(b=e_{1}a^{-}e_{2}\); we next prove that b is the inner inverse of a along d. We have the following equations:

$$\begin{aligned} aba= & {} ae_{1}a^{-}e_{2}a=aa^{-}e_{2}a=e_{2}a=a,\\ bad= & {} e_{1}a^{-}e_{2}ad=e_{1}a^{-}ad=e_{1}d=d \end{aligned}$$

and

$$\begin{aligned} dab=dae_{1}a^{-}e_{2}=daa^{-}e_{2}=de_{2}=d. \end{aligned}$$

In addition, \(b=e_{1}a^{-}t_{3}d\) and \(b=dt_{1}a^{-}e_{2}\) give \(b\le _{\mathcal {H}}d\). Therefore, b is the inner inverse of a along d.

(2) \(\Leftrightarrow \) (3) Apply Lemmas 4.5 and 4.6.

The uniqueness of the pair of idempotents \(e_{1}\) and \(e_{2}\) can be obtained by Lemma 4.4. \(\square \)

By Theorem 4.7, we derive the following results.

Corollary 4.8

[15, Theorem 2.11] Let \(a\in R\). Then the following are equivalent:

  1. (1)

    \(a\in R^{\#}\).

  2. (2)

    There exists an idempotent \(q\in R\) such that \(qR=aR\) and \(Rq=Ra\).

  3. (3)

    a is regular and there exists an idempotent \(q\in R\) such that \(^{\circ }a=\) \(^{\circ }q\) and \(a^{\circ }=q^{\circ }\).

In this case, the idempotent q is unique. Moreover, \(a^{\#}=qa^{-}q\) for one and hence all choices of \(a^{-}\in a\{1\}\).

Proof

Let \(d=a\) in Theorem 4.7. Then \(a\in R^{\#}\) if and only if there exist idempotents \(e_{1}\), \(e_{2}\) \(\in R\) such that \(aR=e_{1}R=e_{2}R\) and \(Ra=Re_{1}=Re_{2}\), which yield \(e_{1}=e_{2}\) by Lemma 4.4. Thus, (1) \(\Leftrightarrow \) (2) \(\Leftrightarrow \) (3). \(\square \)
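The formula \(a^{\#}=qa^{-}q\) of Corollary 4.8, and in particular its independence of the chosen inner inverse \(a^{-}\), can be illustrated numerically. The matrices and the two inner inverses below are our own choices.

```python
import numpy as np

# Illustrating Corollary 4.8: a^# = q a^- q, independent of the inner inverse chosen.
a = np.array([[2.0, 0.0], [0.0, 0.0]])
q = np.array([[1.0, 0.0], [0.0, 0.0]])       # idempotent with qR = aR and Rq = Ra

# two different inner inverses of a (a a^- a = a holds for both)
inner1 = np.array([[0.5, 0.0], [0.0, 0.0]])
inner2 = np.array([[0.5, 0.0], [1.0, 1.0]])
assert np.allclose(a @ inner1 @ a, a) and np.allclose(a @ inner2 @ a, a)

g1 = q @ inner1 @ q
g2 = q @ inner2 @ q
assert np.allclose(g1, g2)                   # both choices give the same a^#

# group-inverse axioms: a g a = a, g a g = g, a g = g a
assert np.allclose(a @ g1 @ a, a)
assert np.allclose(g1 @ a @ g1, g1)
assert np.allclose(a @ g1, g1 @ a)
```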

Corollary 4.9

[15, Theorem 2.12] Let R be a \(*\)-ring and let \(a\in R\). Then the following are equivalent:

  1. (1)

    \(a\in R^{\dagger }\).

  2. (2)

    There exist self-adjoint idempotents \(p, r\in R\) such that \(pR=aR\) and \(Rr=Ra\).

  3. (3)

    a is regular and there exist self-adjoint idempotents \(p, r\in R\) such that \(^{\circ }a=^{\circ }p\) and \(a^{\circ }=r^{\circ }\).

In this case, the pair of self-adjoint idempotents p and r is unique. Moreover, \(a^{\dagger }=ra^{-}p\) for one and hence all choices of \(a^{-}\in a\{1\}\).

Proof

Let \(d=a^{*}\) in Theorem 4.7. Then \(a\in R^{\dagger }\) if and only if there exist idempotents \(e_{1}\), \(e_{2}\) \(\in R\) such that \(a^{*}R=e_{1}R\), \(Ra=Re_{1}\), \(Ra^{*}=Re_{2}\), and \(aR=e_{2}R\). From \(Ra=Re_{1}\), it follows that \(a^{*}R=e_{1}^{*}R\). Thus \(e_{1}R=e_{1}^{*}R\), which implies \(e_{1}=e_{1}^{*}e_{1}\), so \(e_{1}^{*}=e_{1}\). Similarly, we have \(e_{2}^{*}=e_{2}\). Thus, (1) \(\Leftrightarrow \) (2) \(\Leftrightarrow \) (3). \(\square \)

The second characterization of the inner inverse along an element is given as follows:

Theorem 4.10

Let \(a,d \in R\). Then the following are equivalent:

  1. (1)

    a is inner invertible along d.

  2. (2)

    \(R=aR\oplus d^{\circ }\), \(R=Ra\) \(\oplus \) \(^{\circ }d\), \(R=dR\oplus a^{\circ }\) and \(R=Rd\) \(\oplus \) \(^{\circ }a\).

  3. (3)

    \(R=aR+d^{\circ }\), \(R=Ra+^{\circ }d\), \(R=dR+a^{\circ }\) and \(R=Rd+^{\circ }a\).

Proof

(1) \(\Rightarrow \) (2) Assume that a is inner invertible along d. By Theorem 4.7, there exist idempotents \(e_{1}\), \(e_{2}\) \(\in R\) such that \(dR=e_{1}R\), \(Ra=Re_{1}\), \(Rd=Re_{2}\), and \(aR=e_{2}R\). Hence, \(R=e_{2}R \oplus e_{2}^{\circ }=aR \oplus d^{\circ }\) by Lemma 4.5. Similarly, we can get \(R=Ra\) \(\oplus \) \(^{\circ }d\), \(R=dR\oplus a^{\circ }\), and \(R=Rd\) \(\oplus \) \(^{\circ }a\).

(2) \(\Rightarrow \) (3) It is obvious.

(3) \(\Rightarrow \) (1) Suppose \(R=aR+d^{\circ }\) and \(R=Rd+^{\circ }a\). Then \(da \mathcal {R} d\) and \(da \mathcal {L} a\), i.e., da is a trace product. Similarly, ad is also a trace product. Applying Lemma 4.2, a is inner invertible along d. \(\square \)

5 Cline’s Formula for the Inverse Along an Element

In what follows, given \(a\in S\), \(comm(a)=\{x\in S:ax=xa\}\). We begin with some auxiliary results on which we will rely.

Lemma 5.1

[10, Theorem 7] Let \(a, d\in S\). Then the following are equivalent:

  1. (1)

    a is invertible along d.

  2. (2)

    ad\(\mathcal {L}\)d and \((ad)^{\#}\) exists.

  3. (3)

    da\(\mathcal {R}\)d and \((da)^{\#}\) exists.

In this case, \(a^{\Vert d}=d(ad)^{\#}=(da)^{\#}d\).

Lemma 5.2

Let \(a, d \in S\) and \(k\in \mathbb {N}\).

  1. (1)

    If \(d \in comm(ad)\), then \((ad)^{k}=a^{k}d^{k}\).

  2. (2)

    If \(d \in comm(da)\), then \((da)^{k}=d^{k}a^{k}\).

Proof

  1. (1)

    It is obvious for \(k=1\). Assume \((ad)^{k}=a^{k}d^{k}\) holds. For the \(k+1\) case, we have

    $$\begin{aligned} (ad)^{k+1}=ad(ad)^{k}=a(ad)^{k}d=aa^{k}d^{k}d=a^{k+1}d^{k+1}. \end{aligned}$$
  2. (2)

The proof is similar to that of (1). \(\square \)

It is well known that, in a semigroup S, if \(ab\in S\) is Drazin invertible, then ba is Drazin invertible, in which case \((ba)^{D}=b((ab)^{D})^{2}a\). This formula is called Cline's formula. Here, we consider the analogous result for the inverse along an element. Moreover, we lift the index of the product of the elements b and a to any \(k\in \mathbb {N}\).

Theorem 5.3

Let \(a, b , d, x \in S\) with d \(\in \) comm(dab), \(d\le _{\mathcal {L}}d^{2}\) and let \(k\in \mathbb {N}\). If ab is invertible along d with inverse x, then \((ba)^{k}\) is invertible along \(bd^{k}a\) with inverse \(bx^{k+1}a\).

Proof

Since ab is invertible along d with inverse x, we have

$$\begin{aligned} xabd=d=dabx. \end{aligned}$$

By Lemma 5.1, we get

$$\begin{aligned} x=d(abd)^{\#}=(dab)^{\#}d. \end{aligned}$$

The condition d \(\in \) comm(dab), together with Lemmas 3.6 and 5.2, ensures that

$$\begin{aligned} (dab)^{k+1}=d^{k+1}(ab)^{k+1}\ \ \ \text{ and } \ \ \ (dab)^{\#}d=d(dab)^{\#}. \end{aligned}$$

Since \(d\le _{\mathcal {L}}d^{2}\), there exists \(u\in S^{1}\) such that \(d=ud^{2}\).

Let \(y=bx^{k+1}a\). Then, by the previous equalities, we have

$$\begin{aligned} \begin{array}{ccl} y(ba)^{k}(bd^{k}a)&{}=&{}bx^{k+1}a(ba)^{k}(bd^{k}a)=b((dab)^{\#}d)^{k+1}(ab)^{k+1}d^{k}a\\ &{}=&{}b((dab)^{\#})^{k+1}d^{k+1}(ab)^{k+1}d^{k}a=b((dab)^{\#})^{k+1}(dab)^{k+1}d^{k}a\\ &{}=&{}b((dab)^{\#}dab)^{k+1}d^{k}a=b(dab)^{\#}dabd^{k}a\\ &{}=&{}bxabd^{k}a=bxabdd^{k-1}a=bd^{k}a\\ \end{array} \end{aligned}$$

and dually \((bd^{k}a)(ba)^{k}y=bd^{k}a\).

Also, we can get

$$\begin{aligned} \begin{array}{ccl} y&{}=&{}bx^{k+1}a=b((dab)^{\#}d)^{k+1}a=b((dab)^{\#})^{k+1}d^{k+1}a\\ &{}=&{}bd((dab)^{\#})^{k+1}d^{k}a=bd((dab)^{\#})^{k+2}da(bd^{k}a)\\ \end{array} \end{aligned}$$

and

$$\begin{aligned} \begin{array}{ccl} y&{}=&{}b((dab)^{\#})^{k+1}d^{k+1}a=bd^{k-1}((dab)^{\#})^{k+1}d^{2}a\\ &{}=&{}bd^{k-1}(dab)((dab)^{\#})^{k+2}d^{2}a=(bd^{k}a)b((dab)^{\#})^{k+2}d^{2}a,\\ \end{array} \end{aligned}$$

which imply \(y\le _{\mathcal {H}} bd^{k}a\). To sum up, \(y=bx^{k+1}a\) is the inverse of \((ba)^{k}\) along \(bd^{k}a\). \(\square \)

Note that Theorem 5.3 is false in general without the condition d \(\in \) comm(dab):

Example 5.4

Let S be the algebra \(M_{2}(\mathbb {F})\) of all \(2\times 2\) matrices over a field \(\mathbb {F}\). Take

$$\begin{aligned} a=\left[ \begin{array}{cc} 0 &{} 0\\ 1 &{} 1 \end{array}\right] ,\ \ b=\left[ \begin{array}{cc} 1 &{} 0\\ 1 &{} 0 \end{array}\right] ,\ \ d=\left[ \begin{array}{cc} 1 &{} 1\\ 0 &{} 0 \end{array}\right] \ \ \mathrm {and}\ \ x=\left[ \begin{array}{cc} \frac{1}{2} &{} \frac{1}{2}\\ 0 &{} 0 \end{array}\right] . \end{aligned}$$

Then, \(x(ab)d=d=d(ab)x\) and \(x=xd=dx\), i.e., x is the inverse of ab along d. Also, \(d\le _{\mathcal {L}}d^{2}\). But \((ba)^{k}=0\) and \(bd^{k}a=\left[ \begin{array}{cc} 1 &{} 1\\ 1 &{} 1 \end{array}\right] \). There is no \(y\in S\) such that \(y(ba)^{k}(bd^{k}a)=bd^{k}a\); thus \((ba)^{k}\) is not invertible along \(bd^{k}a\).
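The computations in Example 5.4 can be replayed numerically (taking \(\mathbb {F}=\mathbb {R}\)): x inverts ab along d and \(d\le _{\mathcal {L}}d^{2}\), yet \((ba)^{k}\) has no inverse along \(bd^{k}a\).

```python
import numpy as np

# Numerical replay of Example 5.4 over F = R.
a = np.array([[0.0, 0.0], [1.0, 1.0]])
b = np.array([[1.0, 0.0], [1.0, 0.0]])
d = np.array([[1.0, 1.0], [0.0, 0.0]])
x = np.array([[0.5, 0.5], [0.0, 0.0]])

ab = a @ b
assert np.allclose(x @ ab @ d, d) and np.allclose(d @ ab @ x, d)  # x(ab)d = d = d(ab)x
assert np.allclose(x, x @ d) and np.allclose(x, d @ x)            # x <=_H d
assert np.allclose(d, d @ d)                                      # d^2 = d, so d <=_L d^2

ba = b @ a
bda = b @ d @ a                       # here k = 1, and d^k = d for every k anyway
assert np.allclose(ba, 0)             # (ba)^k = 0 for every k
assert np.allclose(bda, [[1, 1], [1, 1]])
# hence y (ba)^k (b d^k a) = 0 != b d^k a for every y: no inverse along b d^k a
```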

Also, the condition \(d\le _{\mathcal {L}}d^{2}\) of Theorem 5.3 cannot be dropped.

Example 5.5

Let S be the same ring as the infinite matrix ring in Remark 3.12. Take \(k=1\in \mathbb {N}\), \(a=\sum \limits _{i=1}^{\infty }e_{i+1,i}\), \(b=1\in S\) and \(d=\sum \limits _{i=1}^{\infty }e_{i,i+1}\). Then \(da=1\). Hence, \(dabd=d\), which implies that ab is invertible along d by Lemma 3.3(3). Note that d \(\in \) comm(dab). However, \(bda \le _{\mathcal {R}} (bda)(ba)(bda)\) does not hold. Thus, ba is not invertible along bda.

Similar to Theorem 5.3, we get

Theorem 5.6

Let \(a, b , d, x \in S\) with d \(\in \) comm(abd), \(d\le _{\mathcal {R}}d^{2}\) and let \(k\in \mathbb {N}\). If ab is invertible along d with inverse x, then \((ba)^{k}\) is invertible along \(bd^{k}a\) with inverse \(bx^{k+1}a\).

Using Theorem 5.3, we can prove the following result:

Theorem 5.7

Let \(a, b , d, x \in S\) with d \(\in \) comm(dab) and d \(\in \) comm(abd), and let \(k\in \mathbb {N}\). If ab is invertible along d with inverse x, then \((ba)^{k}\) is invertible along \(bd^{k}a\) with inverse \(bx^{k+1}a\).

Proof

Since ab is invertible along d, we have \(d\le _{\mathcal {L}} dabd\) by Lemma 3.3(3). From the assumption d \(\in \) comm(abd), it follows that \(d\le _{\mathcal {L}} abd^{2}\), which implies \(d\le _{\mathcal {L}} d^{2}\). According to Theorem 5.3, \((ba)^{k}\) is invertible along \(bd^{k}a\) with inverse \(bx^{k+1}a\). \(\square \)

Letting \(k=1\) in Theorem 5.7, we have

Corollary 5.8

Let \(a, b , d, x \in S\) with d \(\in \) comm(dab) and d \(\in \) comm(abd). If ab is invertible along d with inverse x, then ba is invertible along bda with inverse \(bx^{2}a\).

It is clear that \(d\in \) comm(ab) implies \(d\in \) comm(abd) and \(d\in \) comm(dab), so if we replace the conditions \(d\in \) comm(dab) and \(d\in \) comm(abd) in Corollary 5.8 with the single condition \(d\in \) comm(ab), then we obtain the following result:

Corollary 5.9

Let \(a, b , d, x \in S\) with d \(\in \) comm(ab). If ab is invertible along d with inverse x, then ba is invertible along bda with inverse \(bx^{2}a\).

Remark 5.10

Corollary 5.9 can also be obtained from [5, Theorem 3.1]. Indeed, Kantún-Montiel [8] showed that the (d, d)-inverse of a coincides with the inverse of a along d. In addition, taking \(b=c=d\) in [5, Theorem 3.1] yields Corollary 5.9. However, the following example shows that the conditions d \(\in \) comm(dab) and d \(\in \) comm(abd) of Corollary 5.8 are weaker than \(d\in \) comm(ab).

Example 5.11

Let \(S=M_{2}(\mathbb {C})\). Take

$$\begin{aligned} a=\left[ \begin{array}{cc} 0 &{} 1\\ 0 &{} 2 \end{array}\right] ,\ \ \ b=\left[ \begin{array}{cc} 0 &{} 1\\ 0 &{} 1 \end{array}\right] \ \ \ \mathrm {and} \ \ \ d=\left[ \begin{array}{cc} 0 &{} 1\\ 0 &{} 0 \end{array}\right] . \end{aligned}$$

Then, we can see that d \(\in \) comm(dab) and d \(\in \) comm(abd). But \(d\notin \) comm(ab).
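The claims of Example 5.11 amount to three small matrix identities, which can be checked directly:

```python
import numpy as np

# Replay of Example 5.11: d commutes with dab and with abd, but not with ab.
a = np.array([[0.0, 1.0], [0.0, 2.0]])
b = np.array([[0.0, 1.0], [0.0, 1.0]])
d = np.array([[0.0, 1.0], [0.0, 0.0]])

ab = a @ b
dab = d @ ab
abd = ab @ d
assert np.allclose(d @ dab, dab @ d)     # d in comm(dab): both products vanish here
assert np.allclose(d @ abd, abd @ d)     # d in comm(abd): abd = 0 in this example
assert not np.allclose(d @ ab, ab @ d)   # but d not in comm(ab)
```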

Using Corollary 5.9, we can get the following result, which is Cline’s formula for the Drazin inverse (see [3]).

Corollary 5.12

Let \(a, b \in S\). If ab is Drazin invertible, then ba is Drazin invertible and \((ba)^{D}=b((ab)^{D})^{2}a\).

Proof

Since ab is Drazin invertible, according to Lemma 4.1(2), we have \((ab)^{D}=(ab)^{\Vert (ab)^{m}}\) for some integer m. In Corollary 5.9, let \(d=(ab)^{m}\); then d \(\in \) comm(ab). Thus, ba is invertible along \((ba)^{m+1}\) with inverse \(b((ab)^{D})^{2}a\), which implies that ba is Drazin invertible and \((ba)^{D}=b((ab)^{D})^{2}a\). \(\square \)
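A small numerical illustration of Cline's formula: the matrices below are our own choices, picked so that both ab and ba have index 1 and their Drazin (here group) inverses can be written down in closed form.

```python
import numpy as np

# Check of Cline's formula (ba)^D = b((ab)^D)^2 a on an illustrative example.
a = np.array([[1.0, 0.0], [1.0, 0.0]])
b = np.array([[1.0, 1.0], [0.0, 0.0]])

ab = a @ b            # [[1,1],[1,1]]; since (ab)^2 = 2(ab), we get (ab)^D = ab/4
ba = b @ a            # [[2,0],[0,0]]; hence (ba)^D = [[1/2,0],[0,0]]
ab_D = ab / 4
ba_D = np.array([[0.5, 0.0], [0.0, 0.0]])

# confirm the Drazin (group, index 1) axioms for ab_D
assert np.allclose(ab @ ab_D, ab_D @ ab)
assert np.allclose(ab_D @ ab @ ab_D, ab_D)
assert np.allclose(ab @ ab @ ab_D, ab)

assert np.allclose(b @ ab_D @ ab_D @ a, ba_D)   # Cline's formula
```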

The following result can be proved directly from the definition of the inverse along an element.

Proposition 5.13

Let \(a, b, d, x \in S\) and \(k\in \mathbb {N}\). If \((ab)^{k+1}\) is invertible along \(d^{k+1}\) with inverse x, then \((ba)^{k}\) is invertible along \(bd^{k+1}a\) with inverse bxa.

Proof

Suppose that \((ab)^{k+1}\) is invertible along \(d^{k+1}\) with inverse x. Then we have

$$\begin{aligned}&x(ab)^{k+1}d^{k+1}=d^{k+1}=d^{k+1}(ab)^{k+1}x \ \ \mathrm {and} \\&\quad x=d^{k+1}((ab)^{k+1}d^{k+1})^{\#}=(d^{k+1}(ab)^{k+1})^{\#}d^{k+1}. \end{aligned}$$

Let \(y=bxa\). Using the previous equalities, we can easily obtain

$$\begin{aligned} y(ba)^{k}bd^{k+1}a=bxa(ba)^{k}bd^{k+1}a=bx(ab)^{k+1}d^{k+1}a=bd^{k+1}a \end{aligned}$$

and

$$\begin{aligned} bd^{k+1}a(ba)^{k}y=bd^{k+1}a(ba)^{k}bxa=bd^{k+1}(ab)^{k+1}xa=bd^{k+1}a. \end{aligned}$$

Also,

$$\begin{aligned} y=bxa=bd^{k+1}((ab)^{k+1}d^{k+1})^{\#}a=(bd^{k+1}a)b(ab)^{k}d^{k+1}(((ab)^{k+1}d^{k+1})^{\#})^{2}a \end{aligned}$$

and

$$\begin{aligned} y=bxa=b(d^{k+1}(ab)^{k+1})^{\#}d^{k+1}a=b((d^{k+1}(ab)^{k+1})^{\#})^{2}d^{k+1}(ab)^{k}a(bd^{k+1}a), \end{aligned}$$

which give \(y\le _{\mathcal {H}} bd^{k+1}a\). Therefore, \((ba)^{k}\) is invertible along \(bd^{k+1}a\) with inverse y. \(\square \)

Applying Proposition 5.13 to the Drazin inverse, we get the following result, which generalizes Cline's formula to the Drazin invertibility of powers of the product of the elements b and a.

Corollary 5.14

Let \(a, b\in S\) and \(k\in \mathbb {N}\). If \((ab)^{k+1}\) is Drazin invertible, then \((ba)^{k}\) is Drazin invertible and \(((ba)^{k})^{D}=b((ab)^{k+1})^{D}a\).

Proof

Since \((ab)^{k+1}\) is Drazin invertible, by Lemma 4.1(2), it follows that \(((ab)^{k+1})^{D}=((ab)^{k+1})^{\Vert ((ab)^{k+1})^{t}}\), where \(t=nk-1\ge \text{ ind }(ab)^{k+1}\) and \(n\in \mathbb {N}\). In Proposition 5.13, let \(d=(ab)^{t}\), we have \(bd^{k+1}a=b((ab)^{t})^{k+1}a=((ba)^{k})^{nk+n-1}\). Hence, \((ba)^{k}\) is Drazin invertible and \(((ba)^{k})^{D}=b((ab)^{k+1})^{D}a\). \(\square \)

Note that if S is a monoid and we let \(b=1\) in Corollary 5.14, then we obtain a well-known result for the Drazin inverse: if \(a^{n}\) is Drazin invertible for some \(n\in \mathbb {N}\), then a is Drazin invertible.

6 The Inverse of the Product Along an Element

In this section, we mainly consider the inverse of a product along an element. Benítez and Boasso [1] studied the reverse order law for the inverse along an element (\((ab)^{\Vert d}=b^{\Vert d}a^{\Vert d}\)) and the commuting inverse along an element (\(aa^{\Vert d}=a^{\Vert d}a\)) in a ring. Motivated by their work, we investigate these subjects in a semigroup.

We first derive the representation for the inverse of the product along an element, which recovers the corresponding results of the classical generalized inverses.

Theorem 6.1

Let \(a, b, d_{1}, d_{2} \in S\) with \(d_{1} \in comm(bd_{2})\), \(d_{2}\in comm(d_{1}a)\). If a is invertible along \(d_{1}\) and b is invertible along \(d_{2}\), then ab is invertible along \(d_{2}d_{1}\) and \((ab)^{\Vert d_{2}d_{1}}=b^{\Vert d_{2}}a^{\Vert d_{1}}\).

Proof

Since a is invertible along \(d_{1}\) and b is invertible along \(d_{2}\), we have

$$\begin{aligned} a^{\Vert d_{1}}ad_{1}=d_{1}=d_{1}aa^{\Vert d_{1}} \ \ \text{ and }\ \ b^{\Vert d_{2}}bd_{2}=d_{2}=d_{2}bb^{\Vert d_{2}}. \end{aligned}$$
(1)

By Lemma 5.1,

$$\begin{aligned} a^{\Vert d_{1}}=d_{1}(ad_{1})^{\#}=(d_{1}a)^{\#}d_{1} \ \ \text{ and }\ \ b^{\Vert d_{2}}=d_{2}(bd_{2})^{\#}=(d_{2}b)^{\#}d_{2}. \end{aligned}$$
(2)

Let \(y=b^{\Vert d_{2}}a^{\Vert d_{1}}\). We will prove that y is the inverse of ab along \(d_{2}d_{1}\). From the conditions \(d_{1} \in comm(bd_{2})\) and \(d_{2}\in comm(d_{1}a)\), together with Lemma 3.6, it follows that

$$\begin{aligned} d_{1}bd_{2}=bd_{2}d_{1},\ \ \ d_{2}d_{1}a=d_{1}ad_{2},\ \ \ d_{1}(bd_{2})^{\#}=(bd_{2})^{\#}d_{1}\ \ \ \text{ and } \ \ \ d_{2}(d_{1}a)^{\#}=(d_{1}a)^{\#}d_{2}. \end{aligned}$$
(3)

From (1), (2), and (3), we have

$$\begin{aligned}&y(ab)(d_{2}d_{1})=b^{\Vert d_{2}}a^{\Vert d_{1}}(ab)(d_{2}d_{1}) \overset{(3)}{=}b^{\Vert d_{2}}a^{\Vert d_{1}}ad_{1}bd_{2}\\&\quad \overset{(1)}{=}b^{\Vert d_{2}}d_{1}bd_{2}\overset{(3)}{=}b^{\Vert d_{2}} bd_{2}d_{1}\overset{(1)}{=}d_{2}d_{1} \end{aligned}$$

and dually

$$\begin{aligned} (d_{2}d_{1})(ab)y=(d_{2}d_{1})(ab)b^{\Vert d_{2}}a^{\Vert d_{1}} \overset{(3)}{=}d_{1}ad_{2}bb^{\Vert d_{2}}a^{\Vert d_{1}} \overset{(1)}{=}d_{1}ad_{2}a^{\Vert d_{1}}\overset{(3)}{=}d_{2}d_{1}aa^{\Vert d_{1}}\overset{(1)}{=}d_{2}d_{1}. \end{aligned}$$

Also, we have

$$\begin{aligned} y=b^{\Vert d_{2}}a^{\Vert d_{1}}\overset{(2)}{=}(d_{2}b)^{\#} d_{2}(d_{1}a)^{\#}d_{1}\overset{(3)}{=}(d_{2}b)^{\#} (d_{1}a)^{\#}d_{2}d_{1} \end{aligned}$$

and

$$\begin{aligned} y=b^{\Vert d_{2}}a^{\Vert d_{1}}\overset{(2)}{=}d_{2}(bd_{2})^{\#} d_{1}(ad_{1})^{\#}\overset{(3)}{=}d_{2}d_{1}(bd_{2})^{\#}(ad_{1})^{\#}, \end{aligned}$$

which imply \(y\le _{\mathcal {H}}d_{2}d_{1}\). Hence, y is the inverse of ab along \(d_{2}d_{1}\). \(\square \)

By Lemma 4.1, the following results involving classical inverses are straightforward consequences of Theorem 6.1.

Corollary 6.2

Let \(a,b \in S\). (S is a \(*\)-semigroup in (3).)

  1. (1)

    If \(a, b\in S^{\#}\) and \(ab=ba\), then \(ab\in S^{\#}\) and \((ab)^{\#}=b^{\#}a^{\#}\).

  2. (2)

    If \(a, b\in S^{D}\) and \(ab=ba\), then \(ab\in S^{D}\) and \((ab)^{D}=b^{D}a^{D}\).

  3. (3)

    If \(a, b\in S^{\dagger }\), \(a^{*}ab=ba^{*}a\) and \(bb^{*}a=abb^{*}\), then \(ab\in S^{\dagger }\) and \((ab)^{\dagger }=b^{\dagger }a^{\dagger }\).

Remark 6.3

It is known that if \(a, b\in S^{\dagger }\), \(ab=ba\) and \(ab^{*}=b^{*}a\), then \(ab\in S^{\dagger }\) and \((ab)^{\dagger }=b^{\dagger }a^{\dagger }\). Here, it is clear that \(ab=ba\) and \(ab^{*}=b^{*}a\) imply \(a^{*}ab=ba^{*}a\) and \(bb^{*}a=abb^{*}\). But the converse is not true. For example, let \(S=M_{2}(\mathbb {C})\) and let the involution be the conjugate transpose. Take \(a=\left[ \begin{array}{cc} i &{} 0\\ 1 &{} 0 \end{array}\right] \), \(b=\left[ \begin{array}{cc} 1 &{} 0\\ 0 &{} -1 \end{array}\right] \). Then, by an elementary computation, we have \(a^{*}ab=ba^{*}a\) and \(bb^{*}a=abb^{*}\). However, \(ab\ne ba\) and \(ab^{*}\ne b^{*}a\).
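The computation behind Remark 6.3 can be verified directly over \(M_{2}(\mathbb {C})\) with the conjugate transpose as involution:

```python
import numpy as np

# Replay of the counterexample in Remark 6.3 over M_2(C).
a = np.array([[1j, 0.0], [1.0, 0.0]])
b = np.array([[1.0, 0.0], [0.0, -1.0]])
a_s = a.conj().T       # a^*, the conjugate transpose
b_s = b.conj().T       # b^*

assert np.allclose(a_s @ a @ b, b @ a_s @ a)   # a* a b = b a* a
assert np.allclose(b @ b_s @ a, a @ b @ b_s)   # b b* a = a b b*
assert not np.allclose(a @ b, b @ a)           # yet ab != ba
assert not np.allclose(a @ b_s, b_s @ a)       # and a b* != b* a
```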

Next, we consider the inverse of \(a^{k}\) (\(k\in \mathbb {N}\)) along an element in a semigroup, which also recovers the results of the classical generalized inverses.

Theorem 6.4

Let \(a, d \in S\) with \(d \in comm(ad)\) (or \(d\in comm(da))\) and \(k\in \mathbb {N}\). If a is invertible along d, then \(a^{k}\) is invertible along \(d^{k}\) and \((a^{k})^{\Vert d^{k}}=(a^{\Vert d})^{k}\).

Proof

Suppose that \(d \in comm(ad)\). By Lemma 5.2, we have \((ad)^{k}=a^{k}d^{k}\) for \(k \in \mathbb {N}\). Since a is invertible along d, we have \(a^{\Vert d}ad=d=daa^{\Vert d}\) and \(a^{\Vert d}=d(ad)^{\#}\). Let \(y=(a^{\Vert d})^{k}\). Then we have the following equalities:

$$\begin{aligned} ya^{k}d^{k}= & {} (a^{\Vert d})^{k}a^{k}d^{k}=(d(ad)^{\#})^{k}(ad)^{k} =d^{k}((ad)^{\#})^{k}(ad)^{k}=d^{k}ad(ad)^{\#}\\= & {} d^{k}aa^{\Vert d}=d^{k} \end{aligned}$$

and

$$\begin{aligned} d^{k}a^{k}y =d^{k}a^{k}(a^{\Vert d})^{k}=d^{k}a^{k}(d(ad)^{\#})^{k} =d^{k}a^{k}d^{k}((ad)^{\#})^{k}=d^{k}(ad)^{k}((ad)^{\#})^{k}=d^{k}. \end{aligned}$$

In addition, note that

$$\begin{aligned} y=(a^{\Vert d})^{k}=(d(ad)^{\#})^{k}=d^{k}((ad)^{\#})^{k}=((ad)^{\#})^{k}d^{k}, \end{aligned}$$

which implies \(y\le _{\mathcal {H}}d^{k}\). Therefore, \(a^{k}\) is invertible along \(d^{k}\) with inverse y.

The proof of the case \(d\in comm(da)\) is analogous. \(\square \)
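Theorem 6.4 can also be checked on a concrete example. The matrices below are illustrative choices; ad is diagonal here, so its group inverse \((ad)^{\#}\) and hence \(a^{\Vert d}=d(ad)^{\#}\) can be written down directly.

```python
import numpy as np

# Check of Theorem 6.4: (a^k)^{||d^k} = (a^{||d})^k when d commutes with ad.
a = np.array([[2.0, 0.0], [0.0, 3.0]])
d = np.array([[1.0, 0.0], [0.0, 0.0]])

ad = a @ d
assert np.allclose(d @ ad, ad @ d)               # d in comm(ad)

ad_sharp = np.array([[0.5, 0.0], [0.0, 0.0]])    # group inverse of ad = diag(2, 0)
x = d @ ad_sharp                                 # a^{||d} = d(ad)^#

k = 3
ak = np.linalg.matrix_power(a, k)
dk = np.linalg.matrix_power(d, k)
xk = np.linalg.matrix_power(x, k)                # candidate (a^k)^{||d^k}

assert np.allclose(xk @ ak @ dk, dk)             # xk a^k d^k = d^k
assert np.allclose(dk @ ak @ xk, dk)             # d^k a^k xk = d^k
assert np.allclose(xk, xk @ dk) and np.allclose(xk, dk @ xk)   # xk <=_H d^k
```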

Combining Theorem 6.4 and Lemma 4.1, we deduce the following results.

Corollary 6.5

Let \(a\in S\) and \(k\ge 1\). (S is a \(*\)-semigroup in (3).)

  1. (1)

    If \(a\in S^{\#}\) , then \(a^{k}\in S^{\#}\) and \((a^{k})^{\#}=(a^{\#})^{k}\).

  2. (2)

    [4, Theorem 2] If \(a\in S^{D}\), then \(a^{k}\in S^{D}\) and \((a^{k})^{D}=(a^{D})^{k}\).

  3. (3)

    If \(a\in S^{\dagger }\) and \(aa^{*}a=a^{2}a^{*}\) (or \(\ a^{*}a^{2}=aa^{*}a)\), then \(a^{k}\in S^{\dagger }\) and \((a^{k})^{\dagger }=(a^{\dagger })^{k}\).

Remark 6.6

It is well known that if \(a\in S^{\dagger }\) and \(aa^{*}=a^{*}a\), then \(a^{k}\in S^{\dagger }\) and \((a^{k})^{\dagger }=(a^{\dagger })^{k}\). Here, it is clear that \(aa^{*}=a^{*}a\) implies \(aa^{*}a=a^{2}a^{*}\). But the converse is not true. For instance, let \(S=RCFM_{\mathbb {N}}(\mathbb {R})\), the ring of row- and column-finite \(\mathbb {N}\times \mathbb {N}\) matrices over the real field \(\mathbb {R}\), and let the involution \(*\) be the transpose. Take \(a=\sum \nolimits _{i=1}^{\infty }e_{i,i+1}\), where \(e_{i,j}\) is the same as the infinite matrix in Remark 3.12. Then \(aa^{*}=1\). Hence, we have \(a^{\dagger }=a^{*}\) and \(aa^{*}a=a^{2}a^{*}\). However, \(a^{*}a=\sum \nolimits _{i=2}^{\infty }e_{i,i}\ne aa^{*}\).

The reverse order law for the inverse along an element is given as follows.

Theorem 6.7

Let \(a,b,d\in S\) with d \(\in \) comm(da) and d \(\in \) comm(ad). If both a and b are invertible along d, then ab is invertible along d and \((ab)^{\Vert d}=b^{\Vert d}a^{\Vert d}\).

Proof

Since a is invertible along d, by Lemma 3.3(3), we have \(d\le _{\mathcal {H}} dad\), which implies \(d=dadu\) and \(d=vdad\) for some \(u, v\in S^{1}\). Since d \(\in \) comm(da) and d \(\in \) comm(ad), it follows that \(d=d^{2}au\) and \(d=vad^{2}\). Thus \(d\in S^{\#}\) by Lemma 3.1(1). In addition, we have

$$\begin{aligned} a^{\Vert d}ad=d=daa^{\Vert d} \ \ \text{ and }\ \ a^{\Vert d}=d(ad)^{\#}=(da)^{\#}d \end{aligned}$$

and

$$\begin{aligned} b^{\Vert d}bd=d=dbb^{\Vert d} \ \ \text{ and }\ \ b^{\Vert d}=d(bd)^{\#}=(db)^{\#}d. \end{aligned}$$

Then, using previous equalities, we deduce that

$$\begin{aligned} (b^{\Vert d}a^{\Vert d})(ab)d= & {} (db)^{\#}d(da)^{\#}dabd=(db)^{\#}(da)^{\#}dadbd\\= & {} (db)^{\#}a^{\Vert d}adbd=(db)^{\#}dbd=b^{\Vert d}bd=d \end{aligned}$$

and

$$\begin{aligned} d(ab)(b^{\Vert d}a^{\Vert d})= & {} d^{\#}d^{2}abb^{\Vert d}a^{\Vert d}=d^{\#}dadbb^{\Vert d}a^{\Vert d}=d^{\#}dada^{\Vert d}=d^{\#}ddaa^{\Vert d}\\= & {} d^{\#}d^{2}=d. \end{aligned}$$

In addition, we can easily see that \(b^{\Vert d}a^{\Vert d}\le _{\mathcal {H}} d\). Hence, we get ab is invertible along d and \((ab)^{\Vert d}=b^{\Vert d}a^{\Vert d}\). \(\square \)

Finally, in view of the characterizations of commuting inverses along an element in a ring (see [1, Theorem 7.1 and Theorem 7.3]), we next consider the corresponding results in a semigroup.

Theorem 6.8

Let \(a, d \in S\). If a is invertible along d, then the following are equivalent:

  1. (1)

    \(aa^{\Vert d}=a^{\Vert d}a\).

  2. (2)

    \(d\in S^{\#}\) and \(add^{\#}=dd^{\#}a\).

  3. (3)

    \(da\le _{\mathcal {L}} d\) and \(ad \le _{\mathcal {R}} d\).

Proof

Since a is invertible along d, we have

$$\begin{aligned} a^{\Vert d}ad=d=daa^{\Vert d} \ \ \text{ and }\ \ a^{\Vert d}=d(ad)^{\#}=(da)^{\#}d. \end{aligned}$$

(1) \(\Rightarrow \) (2) From the previous equalities, it follows that \(d=a^{\Vert d}ad=aa^{\Vert d}d=a(da)^{\#}d^{2}\). Similarly, \(d=d^{2}(ad)^{\#}a\). Thus, \(d\in S^{\#}\) by Lemma 3.1(1).

In addition, we deduce that

$$\begin{aligned} dd^{\#}=a^{\Vert d}add^{\#}=aa^{\Vert d}d^{\#}d=a(da)^{\#}dd^{\#}d=a(da)^{\#}d=aa^{\Vert d}, \end{aligned}$$

which implies \(add^{\#}=aaa^{\Vert d}=aa^{\Vert d}a=dd^{\#}a\).

(2) \(\Rightarrow \) (3) Note that

$$\begin{aligned} da=daa^{\Vert d}a=da(da)^{\#}da=da(da)^{\#}ddd^{\#}a=da(da)^{\#}dad^{\#}d, \end{aligned}$$

which yields \(da\le _{\mathcal {L}} d\). Similarly, \(ad \le _{\mathcal {R}} d\) holds.

(3) \(\Rightarrow \) (1) By Lemma 3.3(3), we get \(d\le _{\mathcal {H}} dad\), i.e., \(d=rdad\) and \(d=dads\) for suitable \(r, s\in S^{1}\). According to the assumptions \(da\le _{\mathcal {L}} d\) and \(ad \le _{\mathcal {R}} d\), there exist \(x, y\in S^{1}\) such that \(da=xd\) and \(ad=dy\). Thus, \(d=rdad=rxd^{2}\) and \(d=dads=d^{2}ys\), which imply \(d\in S^{\#}\). Also, we have \(dad^{\#}d=xdd^{\#}d=xd=da\) and \(dd^{\#}ad=dd^{\#}dy=dy=ad\). Then, we have

$$\begin{aligned} a^{\Vert d}a=(da)^{\#}da=(da)^{\#}dad^{\#}d=a^{\Vert d}ad^{\#}d=a^{\Vert d}add^{\#}=dd^{\#}. \end{aligned}$$

Similarly, we can get \(aa^{\Vert d}=d^{\#}d\). Therefore, \(aa^{\Vert d}=a^{\Vert d}a\). \(\square \)