In this section we will discuss various problems of completions of operator matrices and present an application of such results to some problems concerning generalized inverses and to that of invertibility of linear combinations of operators. It is worth mentioning that this very intensively studied topic of Operator theory finds large application in the theory of generalized inverses.

Although the reverse order law for \(\{1\}\)-generalized inverses of matrices was completely resolved already by 1998, the corresponding problem for the operators on separable Hilbert spaces was only solved in 2015. Namely, the reverse order law for \(\{1\}\)-generalized inverses for the operators on separable Hilbert spaces was completely solved in the paper of Pavlović et al. [1]. One of the objective of this chapter is to present the approach taken in resolving the reverse order law for \(\{1\}\)-generalized inverses for the operators on separable Hilbert spaces which involves some of the previous research on completions of operator matrices to left and right invertibility.

We will first go over some characteristic results on the problem of completions of operator matrices, with a special emphasis on some instructive examples, and then demonstrate usability of results of that type by showing how they can be applied to one of the topics in generalized inverses of operators that has seen a great interest over the years. Also, we will consider the existence of Drazin invertible completions of an upper triangular operator matrix and applications of results on completions of operator matrices to the problem of invertibility of a linear combination of operators with the special emphasis on the set of projectors and orthogonal projectors.

3.1 Some Specific Problems of Completions of Operator Matrices

Various aspects of operator matrices and their properties have long motivated researchers in operator theory. Completion of partially given operator matrices to operators of fixed prescribed type is an extensively studied area of operator theory, which is a topic of many various currently undergoing investigations. In this section we will consider only some specific problems from that field which will be usefull later in finding necessary and sufficient conditions for the reverse order law for \(\{1\}\)-generalized inverses for the operators on separable Hilbert spaces to hold. When we talk about completion problems we usually consider the following three types of operator matrices

$$ M_C=\left[ \begin{array}{cc} A&{}C\\ 0&{}B\end{array}\right] : \left[ \begin{array}{cc} {\mathscr {H}}\\ {\mathscr {K}}\end{array}\right] \rightarrow \left[ \begin{array}{cc} {\mathscr {H}}\\ {\mathscr {K}}\end{array}\right] , $$
$$ M_X=\left[ \begin{array}{cc} A&{}C\\ X&{}B\end{array}\right] : \left[ \begin{array}{cc} {\mathscr {H}}\\ {\mathscr {K}}\end{array}\right] \rightarrow \left[ \begin{array}{cc} {\mathscr {H}}\\ {\mathscr {K}}\end{array}\right] $$

and

$$ M_{(X\ Y)}=\left[ \begin{array}{cc} A&{}B\\ X&{}Y\end{array}\right] : \left[ \begin{array}{cc} {\mathscr {H}}\\ {\mathscr {K}}\end{array}\right] \rightarrow \left[ \begin{array}{cc} {\mathscr {H}}\\ {\mathscr {K}}\end{array}\right] $$

and for which the following three questions frequently arise:

Question 1: Is there an operator \(C\in {\mathscr {B}}({\mathscr {Y}},{\mathscr {X}})\) (resp. X and XY) such that \(M_C\) (resp. \(M_X\) and \(M_{(X\ Y)}\)) is invertible (right invertible, left invertible, regular...) ?

Question 2: \(\bigcap _{C\in {\mathscr {B}}({\mathscr {Y}},{\mathscr {X}})}\sigma _*(M_C)=?\) where \(\sigma _*\) is any type of spectrum such as the point, continuous, residual, defect, left, right, essential, Weyl spectrum etc.

Question 3: For given operators \(A\in {\mathscr {B}}({\mathscr {X}})\) and \(B\in {\mathscr {B}}({\mathscr {Y}})\), is there an operator \(C' \in {\mathscr {B}}({\mathscr {Y}},{\mathscr {X}})\) such that

$$ \sigma _*(M_{C'})=\bigcap _{C\in {\mathscr {B}}({\mathscr {Y}},{\mathscr {X}})}\sigma _*(M_C), $$

where again \(\sigma _*\) is any type of spectrum?

In the case of the operator matrix \(M_C\) it is clear that \(\sigma (M_C)\subseteq \sigma (A)\cup \sigma (B)\). In the following two examples we will show that this inclusion is sometimes actually an equality, but that also it can be a proper one:

Example 3.1

If \(\{g_i\}_{i=1}^{\infty }\) is an orthonormal basis of \({\mathscr {K}}\), define an operator \(B_0\) by

$$ \left\{ \begin{array}{ll} B_0g_1=0, \\ B_0g_i=g_{i-1}, \ i=2,3,\dots \end{array} \right. $$

If \(\{f_i\}_{i=1}^{\infty }\) is an orthonormal basis of \({\mathscr {H}}\), define an operator \(A_0\) by \(A_0f_i=f_{i+1},\ i=1,2,\dots \), and an operator \(C_0\) by \(C_0=(\cdot ,g_1)f_1\) from \({\mathscr {K}}\) into \({\mathscr {H}}\). Then it is easy to see that \(\sigma (A_0)=\sigma (B_0)=\{\lambda \, : \, |\lambda |\le 1\}\). But, in this case, \(M_{C_0}\) is a unitary operator, \(\sigma (M_{C_0})\subseteq \{\lambda \, : \, |\lambda |= 1\}\), so we have the inclusion \(\sigma (M_{C_0})\subset \sigma (A) \cup \sigma (B)\).

Example 3.2

If \(A\in {\mathscr {B}}({\mathscr {H}})\) and \(B\in {\mathscr {B}}({\mathscr {K}})\) are normal operators, then for any \(C\in {\mathscr {B}}({\mathscr {K}},{\mathscr {H}})\), \(\sigma (M_C)=\sigma (A)\cup \sigma (B)\) (see Theorem 5 from [2]).

Also, the following example shows that the inclusion \(\sigma _{gD}(M_C)\subseteq \sigma _{gD}(A)\cup \sigma _{gD}(B)\) may be strict in the case of the generalized Drazin spectrum:

Example 3.3

Define operators \(A,B_1,C_1\in {\mathscr {B}}(l_2)\) by

$$\begin{aligned}&A(x_1,x_2,x_3,\dots )=(0,x_1,x_2,x_3,\dots ), \\&B_1(x_1,x_2,x_3,\dots )=(x_2,x_3,x_4,\dots ), \\&C_1(x_1,x_2,x_3,\dots )=(x_1,0,0,\dots ). \end{aligned}$$

Consider the operator

$$\begin{aligned} M_C&=\left( \begin{array}{cc} A &{} C \\ 0 &{} B \end{array} \right) :l_2\oplus (l_2\oplus l_2)\rightarrow l_2\oplus (l_2\oplus l_2) \\&=\left( \begin{array}{ccc} A &{} C_1 &{} 0 \\ 0 &{} B_1 &{} 0 \\ 0 &{} 0 &{} 0 \end{array} \right) :l_2 \oplus l_2 \oplus l_2\rightarrow l_2\oplus l_2\oplus l_2, \end{aligned}$$

where \(B=\left( \begin{array}{cc} B_1 &{} 0 \\ 0 &{} 0 \end{array} \right) :l_2\oplus l_2\rightarrow l_2\oplus l_2,\ C=(C_1,0):l_2\oplus l_2\rightarrow l_2\).

A direct calculation shows that

  1. (i)

    \(\sigma (M_C)=\{\lambda \in \mathbb {C} \, : \, |\lambda |=1\}\cup \{0\},\ \sigma (A)=\sigma (B)=\{\lambda \in \mathbb {C}\, : \, | \lambda |\le 1\}\);

  2. (ii)

    \(\sigma _{gD} (M_C)=\{\lambda \in \mathbb {C} \, : \, |\lambda |=1\}\cup \{0\},\ \sigma _{gD}(A)=\sigma _{gD}(B)=\{\lambda \in \mathbb {C}\, : \, | \lambda |\le 1\}\).

On the other hand the inclusion \(\sigma _{c}(M_C)\subseteq \sigma _{c}(A)\cup \sigma _{c}(B)\) is not true in general, in the case of the continuous spectrum which will be shown in the next example:

Example 3.4

Let \({\mathscr {H}}={\mathscr {K}}=l_2\). Define the operators \(A,\ B,\ C\) by

$$\begin{aligned}&A(x_1,x_2,x_3,x_4,\dots )=(0,0,x_1,x_2,x_3,x_4,\dots ) \\&B(x_1,x_2,x_3,x_4,\dots )=(x_3, \frac{x_4}{\sqrt{4}},\frac{x_5}{\sqrt{5}},\frac{x_6}{\sqrt{6}},\dots ) \\&C(x_1,x_2,x_3,x_4,\dots )=(x_1,x_2,0,0,\dots ), \end{aligned}$$

for any \(x=(x_n)_{n=1}^{\infty }\in l_2\). Consider \(M_C=\left( \begin{array}{cc} A &{} C \\ 0 &{} B \end{array} \right) :l_2\oplus l_2\rightarrow l_2 \oplus l_2\). A direct calculation shows that \( 0\in \sigma _c(M_c) \text {, but }0\notin \sigma _C(A)\cup \sigma _C(B) \) which implies \(\sigma _C (M_C)\nsubseteq \sigma _C(A) \cup \sigma _C(B)\).

Given operators \(A\in {\mathscr {B}}({\mathscr {X}})\) and \(B\in {\mathscr {B}}({\mathscr {Y}})\), the question of existence of an operator \(C\in {\mathscr {B}}({\mathscr {K}},{\mathscr {H}})\) such that the operator matrix \(M_C\) is invertible was considered for the first time in [2] in the case when \({\mathscr {X}}\) and \({\mathscr {Y}}\) are separable Hilbert spaces. The results from [2] are generalized in [3] in the case of Banach spaces. In [4], the same problem is considered in the case of Banach spaces and the set of all \(C\in {\mathscr {B}}({\mathscr {Y}},{\mathscr {X}})\) for which \(M_C\) is invertible is completely described and additionally the set of all \(C\in {\mathscr {B}}({\mathscr {Y}},{\mathscr {X}})\) such that \(M_C\) is invertible, denoted by S(AB), is completely described (in the case when \({\mathscr {X}}\) and \({\mathscr {Y}}\) are Banach spaces).

Theorem 3.1

([4]) Let \(A\in {\mathscr {B}}({\mathscr {X}})\) and \(B\in {\mathscr {B}}({\mathscr {Y}})\) be given operators. The operator matrix \(M_C\) is invertible for some \(C\in {\mathscr {B}}({\mathscr {Y}},{\mathscr {X}})\) if and only if

\(\mathrm (i)\) :

A is left invertible,

\(\mathrm (ii)\) :

B is right invertible,

\(\mathrm (iii)\) :

\({\mathscr {N}}(B)\cong {\mathscr {X}}/ {\mathscr {R}}(A)\).

If conditions \((i){-}(iii)\) are satisfied, the set of all \(C\in {\mathscr {B}}({\mathscr {Y}},{\mathscr {X}})\) such that \(M_C\) is invertible is given by

$$\begin{aligned} S(A,B)=&\{C\in {\mathscr {B}}({\mathscr {Y}},{\mathscr {X}}):C=\left[ \begin{array}{cc}C_1&{}0\\ 0&{}C_4\end{array}\right] : \left[ \begin{array}{cc}{\mathscr {P}}\\ {\mathscr {N}}(B)\end{array}\right] \rightarrow \left[ \begin{array}{cc}{\mathscr {R}}(A)\\ {\mathscr {S}}\end{array}\right] , \nonumber \\&C_4\ \text {is invertible},{\mathscr {X}}={\mathscr {R}}(A)\oplus {\mathscr {S}}\ \text {and}\ {\mathscr {Y}}={\mathscr {P}}\oplus {\mathscr {N}}(B)\}. \end{aligned}$$
(3.1)

In Remark 2.5 in [4], it is proved that if we take arbitrary but fixed decompositions of \({\mathscr {X}}\) and \({\mathscr {Y}}\), \({\mathscr {X}}={\mathscr {R}}(A)\oplus {\mathscr {S}}\) and \({\mathscr {Y}}={\mathscr {P}}\oplus {\mathscr {N}}(B)\), then

$$\begin{aligned} S(A,B)=&\{C\in {\mathscr {B}}({\mathscr {Y}},{\mathscr {X}}):C=\left[ \begin{array}{cc}C_1&{}C_2\\ C_3&{}C_4\end{array}\right] : \left[ \begin{array}{cc}{\mathscr {P}}\\ {\mathscr {N}}(B)\end{array}\right] \rightarrow \left[ \begin{array}{cc}{\mathscr {R}}(A)\\ {\mathscr {S}}\end{array}\right] , \nonumber \\&C_4\ \text {is invertible}\}. \end{aligned}$$
(3.2)

Based on the above results and using the fact that the invertibility of \(C_4\in {\mathscr {B}}({\mathscr {N}}(B),{\mathscr {S}})\) simply means that \(P_{{\mathscr {S}},{\mathscr {R}}(A)}C|_{{\mathscr {N}}(B)}\) is an injective operator with range \({\mathscr {S}}\), in the case of separable Hilbert spaces we have the following characterization of invertibility of an upper triangular operator matrix:

Theorem 3.2

Let \({\mathscr {H}}\) and \({\mathscr {K}}\) be separable Hilbert spaces and let \(A\in {\mathscr {B}}({\mathscr {H}})\), \(B\in {\mathscr {B}}({\mathscr {K}})\) and \(C\in {\mathscr {B}}({\mathscr {K}},{\mathscr {H}})\) be given operators. The operator matrix \(\left[ \begin{array}{cc} A&{}C\\ 0&{}B\end{array}\right] \) is invertible if and only if A is left invertible, B is right invertible and \(P_{{\mathscr {S}},{\mathscr {R}}(A)}C|_{{\mathscr {N}}(B)}\) is an injective operator with (closed) range \({\mathscr {S}}\), where \({\mathscr {H}}={\mathscr {R}}(A)\oplus {\mathscr {S}}\).

Aside from the existence of invertible completions of the aforementioned operator matrix, the problems of existence of completions of the operator matrix \(M_C\) that are Fredholm, semi-Fredholm, Kato, Browder etc. have subsequently been studied in literature. In Sect. 3.4, we will consider such problem in the case of Drazin invertible completions.

Moving on, in [5] the problem is considered of completing an operator matrix

$$\begin{aligned} M_{(X,Y)}=\left[ \begin{array}{cc} A&{}C\\ X&{}Y\end{array}\right] : \left[ \begin{array}{cc} {\mathscr {H}}_1\\ {\mathscr {H}}_2\end{array}\right] \rightarrow \left[ \begin{array}{cc} {\mathscr {H}}_1\\ {\mathscr {H}}_2\end{array}\right] \end{aligned}$$
(3.3)

to left (right) invertibility in the case when \(A\in {\mathscr {B}}({\mathscr {H}}_1)\) and \(C\in {\mathscr {B}}({\mathscr {H}}_2,{\mathscr {H}}_1)\) are given and \({\mathscr {H}}_1, {\mathscr {H}}_2\) are separable Hilbert spaces.

Theorem 3.3

([5]) Let \(M_{(X,Y)}\) be given by (3.3).

\(\mathrm (i)\) If \(\mathrm{dim}{\mathscr {H}}_2=\infty \), then there exist \(X\in {\mathscr {B}}({\mathscr {H}}_1,{\mathscr {H}}_2)\) and \(Y\in {\mathscr {B}}({\mathscr {H}}_2)\) such that \(M_{(X,Y)}\) is left invertible.

\(\mathrm (ii)\) If \(\mathrm{dim}{\mathscr {H}}_2<\infty \), then \(M_{(X,Y)}\) is left invertible for some operators \(X\in {\mathscr {B}}({\mathscr {H}}_1,{\mathscr {H}}_2)\) and \(Y\in {\mathscr {B}}({\mathscr {H}}_2)\) if and only if \(\mathrm{dim}{\mathscr {N}}\left( \left[ \begin{array}{cc} A&C\end{array}\right] \right) \le \mathrm{dim}{\mathscr {H}}_2\) and \({\mathscr {R}}(A)\) is closed.

Here, we will present a result of this type in the case when

$$\begin{aligned} M_{(X,Y)}=\left[ \begin{array}{cc} A&{}C\\ X&{}Y\end{array}\right] : \left[ \begin{array}{cc} {\mathscr {H}}_1\\ {\mathscr {H}}_2\end{array}\right] \rightarrow \left[ \begin{array}{cc} {\mathscr {H}}_3\\ {\mathscr {H}}_4\end{array}\right] \end{aligned}$$
(3.4)

and \({\mathscr {H}}_i\), \(i=\overline{1,4}\) are separable Hilbert spaces. So, we will give a modification of Theorem 3.3 for the operator matrix \(M_{(X,Y)}\) given by (3.4).

Theorem 3.4

Let \(M_{(X,Y)}\) be given by (3.4).

\(\mathrm (i)\) If \(\mathrm{dim}{\mathscr {H}}_4=\infty \), then there exist \(X\in {\mathscr {B}}({\mathscr {H}}_1,{\mathscr {H}}_4)\) and \(Y\in {\mathscr {B}}({\mathscr {H}}_2,{\mathscr {H}}_4)\) such that \(M_{(X,Y)}\) is left invertible.

\(\mathrm (ii)\) If \(\mathrm{dim}{\mathscr {H}}_4<\infty \), then \(M_{(X,Y)}\) is left invertible for some operators \(X\in {\mathscr {B}}({\mathscr {H}}_1,{\mathscr {H}}_4)\) and \(Y\in {\mathscr {B}}({\mathscr {H}}_2,{\mathscr {H}}_4)\) if and only if \(\mathrm{dim}{\mathscr {N}}\left( \left[ \begin{array}{cc} A&C\end{array}\right] \right) \le \mathrm{dim}{\mathscr {H}}_4\) and \({\mathscr {R}}(A)+{\mathscr {R}}(C)\) is closed.

Proof

\(\mathrm (i)\) If \(\mathrm{dim}{\mathscr {H}}_4=\infty \), then there exists a closed infinite dimensional subspace of \({\mathscr {H}}_4\), \({\mathscr {M}}\) such that \(\mathrm{dim}{\mathscr {M}}^\bot =\mathrm{dim}{\mathscr {H}}_1\). Now, there exist left invertible operators \(J_1: {\mathscr {H}}_1\rightarrow {\mathscr {M}}^\bot \) and \(J_2: {\mathscr {H}}_2\rightarrow {\mathscr {M}}\). Let

$$ X=\left[ \begin{array}{cc} 0\\ J_1\end{array}\right] :{\mathscr {H}}_1\rightarrow \left[ \begin{array}{cc} {\mathscr {M}}\\ {\mathscr {M}}^\bot \end{array}\right] ,\ \ \ Y=\left[ \begin{array}{cc} J_2\\ 0\end{array}\right] :{\mathscr {H}}_2\rightarrow \left[ \begin{array}{cc} {\mathscr {M}}\\ {\mathscr {M}}^\bot \end{array}\right] . $$

Let

$$ X^-=\left[ \begin{array}{cc} 0&(J_1)_l^{-1}\end{array}\right] :\left[ \begin{array}{cc} {\mathscr {M}}\\ {\mathscr {M}}^\bot \end{array}\right] \rightarrow {\mathscr {H}}_1, Y^-=\left[ \begin{array}{cc} (J_2)_l^{-1}&0\end{array}\right] :\left[ \begin{array}{cc} {\mathscr {M}}\\ {\mathscr {M}}^\bot \end{array}\right] \rightarrow {\mathscr {H}}_2. $$

Now,

$$ \left[ \begin{array}{cc} 0&{}X^-\\ 0&{}Y^-\end{array}\right] \left[ \begin{array}{cc} A&{}C\\ X&{}Y\end{array}\right] =\left[ \begin{array}{cc} I&{}0\\ 0&{}I\end{array}\right] , $$

i.e., \(M_{(X,Y)}\) is left-invertible.

\(\mathrm (ii)\) Suppose that \(\mathrm{dim}{\mathscr {H}}_4<\infty \). If there exist regular \(X\in {\mathscr {B}}({\mathscr {H}}_1,{\mathscr {H}}_4)\) and \(Y\in {\mathscr {B}}({\mathscr {H}}_2,{\mathscr {H}}_4)\) such that \(M_{(X,Y)}\) is left invertible, from the fact that \({\mathscr {R}}(M_{(X,Y)})\) is closed we get that

$$ {\mathscr {R}}\left( \left[ \begin{array}{cc} A^*&{}X^*\\ C^*&{}Y^*\end{array}\right] \right) ={\mathscr {R}}\left( \left[ \begin{array}{cc} A^*&{}0\\ C^*&{}0\end{array}\right] \right) +{\mathscr {R}}\left( \left[ \begin{array}{cc} 0&{}X^*\\ 0&{}Y^*\end{array}\right] \right) $$

is closed. It follows that \({\mathscr {R}}\left( \left[ \begin{array}{cc} A&{}C\\ 0&{}0\end{array}\right] \right) \) is closed, i.e., \({\mathscr {R}}(A)+{\mathscr {R}}(C)\) is closed since \({\mathscr {R}}\left( \left[ \begin{array}{cc} 0&{}X^*\\ 0&{}Y^*\end{array}\right] \right) \) is a finite dimensional subspace. From the injectivity of \(M_{(X,Y)}\), it follows that \({\mathscr {N}}\left( \left[ \begin{array}{cc} A&C\end{array}\right] \right) \cap {\mathscr {N}}\left( \left[ \begin{array}{cc} X&Y\end{array}\right] \right) =\{0\}\) which implies that

$$ \mathrm{dim}{\mathscr {N}}\left( \left[ \begin{array}{cc} A&C\end{array}\right] \right) \le \mathrm{dim}{\mathscr {N}}\left( \left[ \begin{array}{cc} X&Y\end{array}\right] \right) ^\bot \le \mathrm{dim}{\mathscr {H}}_4. $$

For the converse, suppose that \(\mathrm{dim}{\mathscr {N}}\left( \left[ \begin{array}{cc} A&C\end{array}\right] \right) \le \mathrm{dim}{\mathscr {H}}_4\) and \({\mathscr {R}}(A)+{\mathscr {R}}(C)\) is closed. Since \({\mathscr {N}}\left( \left[ \begin{array}{cc} A&C\end{array}\right] \right) ={\mathscr {K}}_1\oplus {\mathscr {K}}_2\oplus {\mathscr {K}}_3\), where \({\mathscr {K}}_1=\left\{ \left[ \begin{array}{cc} x\\ 0\end{array}\right] : x\in {\mathscr {N}}(A)\right\} \), \({\mathscr {K}}_2=\left\{ \left[ \begin{array}{cc} 0\\ y\end{array}\right] : y\in {\mathscr {N}}(C)\right\} \) and \({\mathscr {K}}_3=\left\{ \left[ \begin{array}{cc} x\\ y\end{array}\right] : x\in {\mathscr {N}}(A)^\bot , y\in {\mathscr {N}}(C)^\bot , Ax+Cy=0\right\} \), there exists a subspace \({\mathscr {M}}\) of \({\mathscr {H}}_4\) such that \(\mathrm{dim}{\mathscr {M}}=\mathrm{dim}{\mathscr {K}}_1\). Then \(\mathrm{dim}{\mathscr {M}}^\bot \ge \mathrm{dim}{\mathscr {K}}_2+\mathrm{dim}{\mathscr {K}}_3\). Now, there exist left invertible operators \(J_1: {\mathscr {N}}(A)\rightarrow {\mathscr {M}}\) and \(J_2: P_{{\mathscr {H}}_2}{\mathscr {N}}\big (\left[ \begin{array}{cc} A&C\end{array}\right] \big )\rightarrow {\mathscr {M}}^\bot \). Let

$$ X=\left[ \begin{array}{cc} J_1&{}0\\ 0&{}0\end{array}\right] :\left[ \begin{array}{cc} {\mathscr {N}}(A)\\ {\mathscr {N}}(A)^\bot \end{array}\right] \rightarrow \left[ \begin{array}{cc} {\mathscr {M}}\\ {\mathscr {M}}^\bot \end{array}\right] $$

and

$$ Y=\left[ \begin{array}{cc} 0&{}0\\ 0&{}J_2\end{array}\right] :\left[ \begin{array}{cc} \big (P_{{\mathscr {H}}_2}{\mathscr {N}}\big (\left[ \begin{array}{cc} A&{}C\end{array}\right] \big )\big )^\bot \\ P_{{\mathscr {H}}_2}{\mathscr {N}}\big (\left[ \begin{array}{cc} A&{}C\end{array}\right] \big )\end{array}\right] \rightarrow \left[ \begin{array}{cc} {\mathscr {M}}\\ {\mathscr {M}}^\bot \end{array}\right] . $$

Now, as in Theorem 2.1 [5], we can check that \(M_{(X,Y)}\) is left-invertible, i.e., we will prove that \({\mathscr {R}}(M_{(X,Y)})\) is closed and \(M_{(X,Y)}\) is injective. From the fact that \(\mathrm{dim}{\mathscr {H}}_4<\infty \) and Kato’s lemma we have that \({\mathscr {R}}(M_{(X,Y)})\) is closed. On the other hand, let

$$ \left[ \begin{array}{cc} A&{}C\\ X&{}Y\end{array}\right] \left[ \begin{array}{cc} x\\ y\end{array}\right] =\left[ \begin{array}{cc} 0\\ 0\end{array}\right] , $$

which is equivalent to

$$ Ax+Cy=0,\ \ \ Xx+Yy=0. $$

Then it follows that \(y\in P_{{\mathscr {H}}_2}{\mathscr {N}}\big (\left[ \begin{array}{cc} A&C\end{array}\right] \big )\). Also, we have that \(Xx=Yy=0\) which implies that \(y=0\). Thus, \(Ax=0\) which by definition of X implies that \(x=0\). This proves that \(M_{(X,Y)}\) is injective. \(\Box \)

As for completions of an operator matrix

$$\begin{aligned} M_{X}=\left[ \begin{array}{cc} A&{}C\\ X&{}B\end{array}\right] : \left[ \begin{array}{cc} {\mathscr {H}}_1\\ {\mathscr {H}}_2\end{array}\right] \rightarrow \left[ \begin{array}{cc} {\mathscr {H}}_1\\ {\mathscr {H}}_2\end{array}\right] , \end{aligned}$$
(3.5)

where \(A\in {\mathscr {B}}({\mathscr {H}}_1)\), \(B\in {\mathscr {B}}({\mathscr {H}}_2)\) and \(C\in {\mathscr {B}}({\mathscr {H}}_2,{\mathscr {H}}_1)\) are given, the first to ever address any kind of questions (for separable Hilbert spaces not necessarily of finite dimension) related to it was Takahashi. More specifically, in his paper [6] he gave necessary and sufficient conditions for the existence of \(X\in {\mathscr {B}}({\mathscr {H}}_1)\) such that \(M_X\) is invertible.

Although Takahashi’s paper was published in 1995, there have only been several papers since, namely [7, 9,10,11,12,13,14], which deal with various completions of the operator matrix of the form \(M_X\). Actually in [13] exactly the same problem was considered as in [6] but using methods of geometrical structure of operators and in it some necessary and sufficient conditions were given different than those from [6]. In [9] the authors considered the problem of completions of \(M_X\) given by (3.5) to right (left) invertibility in the case when \(A\in {\mathscr {B}}({\mathscr {H}}_1)\), \(B\in {\mathscr {B}}({\mathscr {H}}_2)\) and \(C\in {\mathscr {B}}({\mathscr {H}}_2,{\mathscr {H}}_1)\) are given.

Theorem 3.5

([9]) Let \(A\in {\mathscr {B}}({\mathscr {H}}_1)\), \(B\in {\mathscr {B}}({\mathscr {H}}_2)\) and \(C\in {\mathscr {B}}({\mathscr {H}}_2,{\mathscr {H}}_1)\) be given. Then \(M_{X}\) is right invertible for some \(X\in {\mathscr {B}}({\mathscr {H}}_1,{\mathscr {H}}_2)\) if and only if \({\mathscr {R}}(A)+{\mathscr {R}}(C)={\mathscr {H}}_1\) and one of the following conditions holds:

(1):

\({\mathscr {N}}(A\mid C;\ {\mathscr {H}}_2)\) contains a non-compact operator,

(2):

\(M_0=\left[ \begin{array}{cc} A&{}C\\ 0&{}B\end{array}\right] \) is a right semi-Fredholm operator and

$$ d(M_0)\le n(A)+\mathrm{dim}\left( {\mathscr {R}}(A)\cap {\mathscr {R}}\big (C|_{{\mathscr {N}}(B)}\big )\right) , $$

where \({\mathscr {N}}(A\mid C;\ {\mathscr {H}}_2)=\{G\in {\mathscr {B}}({\mathscr {H}}_2, {\mathscr {H}}_1): {\mathscr {R}}(AG)\subseteq {\mathscr {R}}(C)\}.\)

Here we will present a result of this type in the case when

$$\begin{aligned} M_{X}=\left[ \begin{array}{cc} A&{}C\\ X&{}B\end{array}\right] : \left[ \begin{array}{cc} {\mathscr {H}}_1\\ {\mathscr {H}}_2\end{array}\right] \rightarrow \left[ \begin{array}{cc} {\mathscr {H}}_3\\ {\mathscr {H}}_4\end{array}\right] \end{aligned}$$
(3.6)

and give a modification of Theorem 3.5 which shortens significantly one implication of the original one. Since for the proof we need some auxiliary results, we begin by stating these.

Lemma 3.1

([15]) If \({\mathscr {H}}\) is an infinite dimensional Hilbert space, then \(T\in {\mathscr {B}}({\mathscr {H}})\) is compact if and only if \({\mathscr {R}}(T)\) contains no closed infinite dimensional subspaces.

Lemma 3.2

Let \({\mathscr {H}}_1\) and \({\mathscr {H}}_2\) be separable Hilbert spaces. If \(U\subseteq {\mathscr {H}}_1\) and \(V\subseteq {\mathscr {H}}_2\) are closed subspaces with \(\dim U = \dim V\), then there exists \(T\in {\mathscr {B}}({\mathscr {H}}_1,{\mathscr {H}}_2)\) such that \({\mathscr {N}}(T) = U^\bot \), \({\mathscr {R}}(T) = V\) and \(T|_U : U \rightarrow V\) is unitary. In particular, if \(U = {\mathscr {H}}_1\), then T is left invertible; if \(V = {\mathscr {H}}_2\), then T is right invertible.

Lemma 3.3

([9]) Let \(S\in {\mathscr {B}}({\mathscr {H}}_1,{\mathscr {H}}_2)\), and let T be a closed linear operator from \({\mathscr {H}}_2\) into \({\mathscr {H}}_3\). If \({\mathscr {R}}(S)\subseteq {\mathscr {D}}(T)\), then \(TS\in {\mathscr {B}}({\mathscr {H}}_1,{\mathscr {H}}_3)\).

For Hilbert spaces \({\mathscr {H}}_i\), \(i=\overline{1,4}\) and operators \(A\in {\mathscr {B}}({\mathscr {H}}_1,{\mathscr {H}}_3)\) and \(C\in {\mathscr {B}}({\mathscr {H}}_2, {\mathscr {H}}_3)\), let

$$ {\mathscr {N}}(A\mid C;\ {\mathscr {H}}_4)=\{G\in {\mathscr {B}}({\mathscr {H}}_4, {\mathscr {H}}_1): {\mathscr {R}}(AG)\subseteq {\mathscr {R}}(C)\}. $$

It is well known that \(G\in {\mathscr {B}}({\mathscr {H}}_4, {\mathscr {H}}_1)\) belongs to \({\mathscr {N}}(A\mid C;\ {\mathscr {H}}_4)\) if and only if there exists \(H\in {\mathscr {B}}({\mathscr {H}}_4,{\mathscr {H}}_2)\) such that \(AG=CH\).

Lemma 3.4

([9]) Let \(A\in {\mathscr {B}}({\mathscr {H}}_1,{\mathscr {H}}_3)\), \(B\in {\mathscr {B}}({\mathscr {H}}_2,{\mathscr {H}}_4)\) and \(C\in {\mathscr {B}}({\mathscr {H}}_2,{\mathscr {H}}_3)\) be given operators. Assume that

$$ M_0 = M(A,B,C; 0) = \left[ \begin{array}{cc} A&{}C\\ 0&{}B\end{array}\right] $$

is a right Fredholm operator on \({\mathscr {H}}_1\oplus {\mathscr {H}}_2\). Then B is a right Fredholm operator, \({\mathscr {R}}(A) + {\mathscr {R}}(C|_{{\mathscr {N}}(B)})\) is a closed subspace, and

$$\begin{aligned}&d(M_0) =\dim ({\mathscr {R}}(A) + {\mathscr {R}}(C|_{{\mathscr {N}}(B)}))^\bot + d(B),\\&n(M_0) =n(A) + n(C|_{{\mathscr {N}}(B)}) + \mathrm{dim}({\mathscr {R}}(A)\cap {\mathscr {R}}(C|_{{\mathscr {N}}(B)})). \end{aligned}$$

Finally, we will give a a modification of Theorem 3.5:

Theorem 3.6

Let \(A\in {\mathscr {B}}({\mathscr {H}}_1,{\mathscr {H}}_3)\), \(B\in {\mathscr {B}}({\mathscr {H}}_2,{\mathscr {H}}_4)\) and \(C\in {\mathscr {B}}({\mathscr {H}}_2,{\mathscr {H}}_3)\) be given. Then \(M_{X}\) is right invertible for some \(X\in {\mathscr {B}}({\mathscr {H}}_1,{\mathscr {H}}_4)\) if and only if \({\mathscr {R}}(A)+{\mathscr {R}}(C)={\mathscr {H}}_3\) and one of the following conditions holds:

(1):

\({\mathscr {N}}(A\mid C;\ {\mathscr {H}}_4)\) contains a non-compact operator,

(2):

\(M_0=\left[ \begin{array}{cc} A&{}C\\ 0&{}B\end{array}\right] \) is a right semi-Fredholm operator and

$$ d(M_0)\le n(A)+\mathrm{dim}\left( {\mathscr {R}}(A)\cap {\mathscr {R}}\big (C|_{{\mathscr {N}}(B)}\big )\right) . $$

Proof

Suppose \(M_X\) given by (3.6) is right invertible for some \(X\in {\mathscr {B}}({\mathscr {H}}_1,{\mathscr {H}}_4)\). This implies that \(\left[ \begin{array}{cc} A&C\end{array}\right] \) is right invertible and so \({\mathscr {R}}(A)+{\mathscr {R}}(C)={\mathscr {H}}_3\). Let \({\mathscr {H}}_2'=\left( {\mathscr {N}}(C)\cap {\mathscr {N}}(B)\right) ^\bot \). Then

$$M_{X}=\left[ \begin{array}{ccc} A&{}0&{}C'\\ X&{}0&{}B'\end{array}\right] : \left[ \begin{array}{ccc} {\mathscr {H}}_1\\ ({\mathscr {H}}_2')^\bot \\ {\mathscr {H}}_2'\end{array}\right] \rightarrow \left[ \begin{array}{cc} {\mathscr {H}}_3\\ {\mathscr {H}}_4\end{array}\right] ,$$

where \({\mathscr {N}}(C')\cap {\mathscr {N}}(B')=\{0\}\). Clearly

$$M_{X}'=\left[ \begin{array}{cc} A&{}C'\\ X&{}B'\end{array}\right] : \left[ \begin{array}{ccc} {\mathscr {H}}_1\\ {\mathscr {H}}_2'\end{array}\right] \rightarrow \left[ \begin{array}{cc} {\mathscr {H}}_3\\ {\mathscr {H}}_4\end{array}\right] $$

is right invertible. Thus there is a bounded linear operator

$$\left[ \begin{array}{cc} E&{}G\\ F&{}H\end{array}\right] : \left[ \begin{array}{cc} {\mathscr {H}}_3\\ {\mathscr {H}}_4\end{array}\right] \rightarrow \left[ \begin{array}{ccc} {\mathscr {H}}_1\\ {\mathscr {H}}_2'\end{array}\right] $$

such that

$$\left[ \begin{array}{cc} A&{}C'\\ X&{}B'\end{array}\right] \left[ \begin{array}{cc} E&{}G\\ F&{}H\end{array}\right] = \left[ \begin{array}{cc} I_{{\mathscr {H}}_3}&{}0\\ 0&{}I_{{\mathscr {H}}_4}\end{array}\right] .$$

From \(AG+C'H=0\) it follows that \({\mathscr {R}}(AG)\subseteq {\mathscr {R}}(C')={\mathscr {R}}(C)\) so, if G is a non-compact operator then (1) holds. If on the other hand G is compact, then from \(XG+B'H=I\), we see that \(B'H\) is a Fredholm operator and \(d(B'H)=n(B'H)\). Since

$$ \left[ \begin{array}{cc} I&{}0\\ -B^{\prime }F&{}I \end{array}\right] \left[ \begin{array}{cc} A&{}C^{\prime }\\ 0&{}B^{\prime } \end{array}\right] \left[ \begin{array}{cc} E&{}G\\ F&{}H \end{array}\right] = \left[ \begin{array}{cc} I_{{\mathscr {H}}_3}&{}0\\ 0&{}B^{\prime }H \end{array}\right] $$

and \(B'H\) is a Fredholm operator, it follows that \(M_0'=\left[ \begin{array}{cc} A&{}C'\\ 0&{}B'\end{array}\right] \) is a right Fredholm operator. As \({\mathscr {R}}(M_0)={\mathscr {R}}(M_0')\) the operator \(M_0\) is right Fredholm. Also

$$\begin{aligned} d(M_0)&=d(M_0')\le d\left( \left[ \begin{array}{cc} I_{{\mathscr {H}}_3}&{}0\\ 0&{}B'H\end{array}\right] \right) = d(B'H)=n(B'H)\\&\le n(M_0')= n(A)+n\left( C' \mid _{{\mathscr {N}}(B')}\right) + \mathrm{dim}\left( {\mathscr {R}}\left( C' \mid _{{\mathscr {N}}(B')}\right) \cap {\mathscr {R}}(A)\right) \\&=n(A)+\mathrm{dim}\left( {\mathscr {R}}(A)\cap {\mathscr {R}}\big (C|_{{\mathscr {N}}(B)}\big )\right) . \end{aligned}$$

For the converse implication: If \({\mathscr {N}}(A\mid C; {\mathscr {H}}_4)\) contains a non-compact operator, then \({\mathscr {H}}_1\) and \({\mathscr {H}}_4\) are infinite dimensional. By Lemma 3.1, there exists a closed subspace \({\mathscr {M}}\subseteq {\mathscr {H}}_1\) with \(\dim {\mathscr {M}}=\dim {\mathscr {H}}_4=\infty \) such that \({\mathscr {R}}(A|_{{\mathscr {M}}})\subseteq {\mathscr {R}}(C)\), and hence \({\mathscr {R}}({AP}_{{\mathscr {M}}}) = {\mathscr {R}}(A|_{{\mathscr {M}}}) \subseteq {\mathscr {R}}(C)\subseteq \), where \(C^+:{\mathscr {R}}(C)\oplus {\mathscr {R}}(C)^\bot \rightarrow {\mathscr {H}}_2\) is defined to be 0 on \({\mathscr {R}}(C)^\bot \) and \((C|_{{\mathscr {N}}(C)^\bot })^{-1}\) on \({\mathscr {R}}(C)\). This, together with \(AP_{{\mathscr {M}}}\in {\mathscr {B}}({\mathscr {H}}_1,{\mathscr {H}}_3)\) and Lemma 3.3, shows that \(C^+AP_{{\mathscr {M}}}\in {\mathscr {B}}({\mathscr {H}}_2,{\mathscr {H}}_3)\). On the other hand, it follows from Lemma 3.2 that there exists a right invertible operator \(T\in {\mathscr {B}}({\mathscr {H}}_1,{\mathscr {H}}_4)\) such that \({\mathscr {N}}(T)={\mathscr {M}}^\bot \). Define an operator \(X\in {\mathscr {B}}({\mathscr {H}}_1,{\mathscr {H}}_4)\) by

$$ X = T + BC^+AP_{{\mathscr {M}}}. $$

Then \(M_X\) is a right invertible operator. To prove that let \(u\in {\mathscr {H}}_3\) and \(v\in {\mathscr {H}}_4\) be arbitrary. Since \({\mathscr {R}}(A)+{\mathscr {R}}(C)={\mathscr {H}}_3\) and \({\mathscr {R}}(A|_{{\mathscr {M}}})\subseteq {\mathscr {R}}(C)\), there exist \(x_1\in {\mathscr {M}}^\bot \) and \(y_1\in {\mathscr {H}}_2\) such that \(Ax_1 + Cy_1 = u\). Also, by right invertibility of T, there exists \(x_2\in {\mathscr {M}}\) such that \(Tx_2 = v- By_1\). Let \(x_0 = x_1 + x_2\) and \(y_0 = y_1 - C^+Ax_2\). Then

$$ \left[ \begin{array}{cc} A&{}C\\ X&{}B\end{array}\right] \left[ \begin{array}{cc} x_0\\ y_0\end{array}\right] =\left[ \begin{array}{cc} u\\ v\end{array}\right] . $$

This establishes right invertibility of \(M_X\).

If (2) holds, put \(E = {\mathscr {R}}(A)+{\mathscr {R}}(C|_{{\mathscr {N}}(B)})\). From Lemma 3.4 and the right Fredholmness of \(M_0\) we can infer that B is a right Fredholm operator, E is closed and \(\dim E^\bot = d(M_0)-d(B) < \infty \). From \({\mathscr {R}}(A)+{\mathscr {R}}(C)={\mathscr {H}}_3\) it follows that \({\mathscr {R}}(P_{E^\bot }C) = E^\bot \). Let \(G = (P_{E^\bot }C)^+E^\bot \) and \(S = BG\oplus {\mathscr {R}}(B)^\bot \). Then clearly \(G \subseteq {\mathscr {N}}(B)^\bot \) and so \(\dim E^\bot = \dim G = \dim BG\). Therefore \(\dim S = d(M_0)\). On the other hand, since \(d(M_0)\le n(A) + \dim ({\mathscr {R}}(A)\cap {\mathscr {R}}(C|_{{\mathscr {N}}(B)}))\), there exists a subspace \({\mathscr {M}}\subseteq {\mathscr {H}}_1\) with \(\dim {\mathscr {M}}= d(M_0)\) such that \({\mathscr {R}}(A|_{{\mathscr {M}}})\subseteq {\mathscr {R}}(C|_{{\mathscr {N}}(B)})\). From \(\dim {\mathscr {M}}=\dim S = d(M_0) < \infty \) and Lemma 3.2, there exists an operator \(J : {\mathscr {H}}_1\rightarrow S\) such that \({\mathscr {N}}(J) = {\mathscr {M}}^\bot \) and \(J|_{{\mathscr {M}}} : {\mathscr {M}}\rightarrow S\) is unitary. Define \(X\in {\mathscr {B}}({\mathscr {H}}_1,{\mathscr {H}}_2)\) by

$$ X=\left[ \begin{array}{cc} J\\ 0\end{array}\right] :{\mathscr {H}}_1\rightarrow S\oplus S^\bot . $$

Then \(M_X\) as an operator from \({\mathscr {H}}_1\oplus {\mathscr {N}}(B)\oplus G\oplus ({\mathscr {N}}(B)^\bot \ominus G)\) into \(E \oplus E^\bot \oplus S \oplus S^\bot \) has the following operator matrix:

$$ M_X=\left[ \begin{array}{cccc} A_1&{}C_1&{}C_2&{}C_3\\ 0&{}0&{}C_4&{}0\\ J&{}0&{}B_1&{}B_3\\ 0&{}0&{}0&{}B_2\end{array}\right] , $$

where \({\mathscr {N}}(B)^\bot \ominus G=\{y\in {\mathscr {N}}(B)^\bot : y\in G^\bot \}\). Obviously, \(C_4\) is invertible. From the right Fredholmness of B we can infer that \(B_2\) is invertible. Thus there is an invertible operator \(U\in {\mathscr {B}}({\mathscr {H}}_1,{\mathscr {H}}_2)\) such that

$$ UM_X = U\left[ \begin{array}{cccc} A_1&{}C_1&{}C_2&{}C_3\\ 0&{}0&{}C_4&{}0\\ J&{}0&{}B_1&{}B_3\\ 0&{}0&{}0&{}B_2\end{array}\right] =\left[ \begin{array}{cccc} A_1&{}C_1&{}0&{}0\\ 0&{}0&{}C_4&{}0\\ J&{}0&{}0&{}0\\ 0&{}0&{}0&{}B_2\end{array}\right] . $$

It follows that \(M_X\) is a right invertible operator if and only if

$$ \left[ \begin{array}{cc} A_1&{}C_1\\ J&{}0\end{array}\right] :{\mathscr {H}}_1\oplus {\mathscr {N}}(B)\rightarrow E\oplus S, $$

is a right invertible operator.

For any \(u\in E\) and \(v\in S\), it follows from \(E = {\mathscr {R}}(A) + {\mathscr {R}}(C|_{{\mathscr {N}}(B)})\), \({\mathscr {R}}(A|_{{\mathscr {M}}}) \subseteq {\mathscr {R}}(C|_{{\mathscr {N}}(B)})\) and the definition of J that there exist \(x_1\in {\mathscr {M}}\), \(x_2\in {\mathscr {M}}^\bot \) and \(y_1\in {\mathscr {N}}(B)\) such that

$$ Jx_1 = v,\ Ax_2 + Cy_1 = u. $$

Since \({\mathscr {R}}(A|_{{\mathscr {M}}}) \subseteq {\mathscr {R}}(C|_{{\mathscr {N}}(B)})\), there exists \(y_2\in {\mathscr {N}}(B)\) with

$$ Ax_1 + Cy_2 = 0. $$

Note that \(A_1 = A : {\mathscr {H}}_1\rightarrow E\), \(C_1 = C|_{{\mathscr {N}}(B)} : {\mathscr {N}}(B) \rightarrow E\) and \({\mathscr {N}}(J) ={\mathscr {M}}^\bot \), and hence

$$ \left[ \begin{array}{cc} A_1&{}C_1\\ J&{}0\end{array}\right] \left[ \begin{array}{cc} x_1+x_2\\ y_1+y_2\end{array}\right] =\left[ \begin{array}{cc} u\\ v\end{array}\right] . $$

From the argument above we get that \(M_X\) is a right invertible operator.\(\Box \)

Remark 3.1

The condition (1) from the previous theorem is equivalent to the existence of a closed infinite dimensional subspace \({\mathscr {M}}\) of \({\mathscr {H}}_1\) such that \({\mathscr {R}}\left( A\mid _{{\mathscr {M}}}\right) \subseteq {\mathscr {R}}(C)\).

As a corollary of Theorem 3.6 we have the following result concerning completions to left invertibility, that parallels Theorem 2.7 [9].

Corollary 3.1

Let \(A\in {\mathscr {B}}({\mathscr {H}}_1,{\mathscr {H}}_3)\), \(B\in {\mathscr {B}}({\mathscr {H}}_2,{\mathscr {H}}_4)\) and \(C\in {\mathscr {B}}({\mathscr {H}}_2,{\mathscr {H}}_3)\) be given. Then \(M_{X}\) is left invertible for some \(X\in {\mathscr {B}}({\mathscr {H}}_1,{\mathscr {H}}_4)\) if and only if \({\mathscr {R}}(B^*)+{\mathscr {R}}(C^*)={\mathscr {H}}_2\) and one of the following conditions holds:

(1):

\({\mathscr {N}}(B^*\mid C^*;\ {\mathscr {H}}_1)\) contains a non-compact operator,

(2):

\(M_0=\left[ \begin{array}{cc} A&{}C\\ 0&{}B\end{array}\right] \) is a left semi-Fredholm operator and

$$ n(M_0)\le d(B)+\mathrm{dim}\left( {\mathscr {R}}(B^*)\cap {\mathscr {R}}\big (C^*|_{{\mathscr {N}}(A^*)}\big )\right) . $$

3.2 Applications of Completions of Operator Matrices to Reverse Order Law for \(\{1\}\)-Inverses of Operators on Hilbert Spaces

The reverse order law problem for \(\{1\}\)-inverses for operators acting on separable Hilbert spaces was completely resolved in the paper [1] and this was done using a radically new approach than in the recent papers on this subject, one that involves some of the previous research on completions of operator matrices to left and right invertibility. More exactly, the solution of this problem relies heavily on the results on completions of operator matrices presented in Sect. 3.1, so that the results of the present section can in a way be regarded as an interesting application of the research related to the topic of completions of operator matrices.

First, we will need the following observations.

Let \(A\in {\mathscr {B}}({\mathscr {H}},{\mathscr {K}})\) and \(B\in {\mathscr {B}}({\mathscr {L}}, {\mathscr {H}})\) be arbitrary regular operators. Using the following decompositions of the spaces \({\mathscr {H}}\), \({\mathscr {K}}\) and \({\mathscr {L}}\),

$$ {\mathscr {L}}={\mathscr {R}}(B^*)\oplus {\mathscr {N}}(B),\ {\mathscr {H}}={\mathscr {R}}(B)\oplus {\mathscr {N}}(B^*),\ {\mathscr {K}}={\mathscr {R}}(A)\oplus {\mathscr {N}}(A^*), $$

we have that the corresponding representations of operators A and B are given by

$$\begin{aligned} \begin{aligned}&A=\left[ \begin{array}{cc} A_1&{}A_2\\ 0&{}0\end{array}\right] :&\left[ \begin{array}{cc} {\mathscr {R}}(B)\\ {\mathscr {N}}(B^*)\end{array}\right] \rightarrow \left[ \begin{array}{cc} {\mathscr {R}}(A)\\ {\mathscr {N}}(A^*)\end{array}\right] , \\&B=\left[ \begin{array}{cc} {B}\quad B_1 &{}0\\ 0&{}0\end{array}\right] :&\left[ \begin{array}{cc} {\mathscr {R}}(B^*)\\ {\mathscr {N}}(B)\end{array}\right] \rightarrow \left[ \begin{array}{cc} {\mathscr {R}}(B)\\ {\mathscr {N}}(B^*)\end{array}\right] , \end{aligned} \end{aligned}$$
(3.7)

where \(B_1\) is invertible and \(\left[ \begin{array}{cc} A_1&A_2\end{array}\right] : \left[ \begin{array}{cc} {\mathscr {R}}(B^*)\\ {\mathscr {N}}(B)\end{array}\right] \rightarrow {\mathscr {R}}(A)\) is right invertible. In that case the operator AB is given by

$$\begin{aligned} AB=\left[ \begin{array}{cc} A_1B_1&{}0\\ 0&{}0\end{array}\right] : \left[ \begin{array}{cc} {\mathscr {R}}(B^*)\\ {\mathscr {N}}(B)\end{array}\right] \rightarrow \left[ \begin{array}{cc} {\mathscr {R}}(A)\\ {\mathscr {N}}(A^*)\end{array}\right] . \end{aligned}$$
(3.8)

The following lemma gives a description of all the \(\{1\}\)-inverses of A, B and AB in terms of their representations corresponding to appropriate decompositions of spaces.

Lemma 3.5

Let \(A\in {\mathscr {B}}({\mathscr {H}},{\mathscr {K}})\) and \(B\in {\mathscr {B}}({\mathscr {L}}, {\mathscr {H}})\) be regular operators given by (3.7). Then

\(\mathrm{(i)}\) :

an arbitrary \(\{1\}\)-inverse of A is given by:

$$\begin{aligned} A^{(1)}= & {} \left[ \begin{array}{cc} X_1&{}X_2\\ X_3&{}X_4\end{array}\right] : \left[ \begin{array}{cc} {\mathscr {R}}(A)\\ {\mathscr {N}}(A^*)\end{array}\right] \rightarrow \left[ \begin{array}{cc} {\mathscr {R}}(B)\\ {\mathscr {N}}(B^*)\end{array}\right] , \end{aligned}$$
(3.9)

where \(X_1\) and \(X_3\) satisfy the following equality

$$\begin{aligned} A_1X_1+A_2X_3=I_{{\mathscr {R}}(A)}, \end{aligned}$$
(3.10)

and \(X_2\), \(X_4\) are arbitrary operators from appropriate spaces.

\(\mathrm{(ii)}\) :

an arbitrary \(\{1\}\)-inverse of B is given by:

$$\begin{aligned} B^{(1)}= & {} \left[ \begin{array}{cc} B_1^{-1}&{}Y_2\\ Y_3&{}Y_4\end{array}\right] : \left[ \begin{array}{cc} {\mathscr {R}}(B)\\ {\mathscr {N}}(B^*)\end{array}\right] \rightarrow \left[ \begin{array}{cc} {\mathscr {R}}(B^*)\\ {\mathscr {N}}(B)\end{array}\right] , \end{aligned}$$
(3.11)

where \(Y_2, Y_3\) and \(Y_4\) are arbitrary operators from appropriate spaces.

\(\mathrm{(iii)}\) :

if AB is regular, then so is \(A_1B_1\) and an arbitrary \(\{1\}\)-inverse of AB is given by:

$$\begin{aligned} (AB)^{(1)}= & {} \left[ \begin{array}{cc} (A_1B_1)^{(1)}&{}Z_2\\ Z_3&{}Z_4\end{array}\right] : \left[ \begin{array}{cc} {\mathscr {R}}(A)\\ {\mathscr {N}}(A^*)\end{array}\right] \rightarrow \left[ \begin{array}{cc} {\mathscr {R}}(B^*)\\ {\mathscr {N}}(B)\end{array}\right] , \end{aligned}$$
(3.12)

where \((A_1B_1)^{(1)}\in (A_1B_1)\{1\}\) and \(Z_i\), \(i=\overline{2,4}\) are arbitrary operators from appropriate spaces.

Proof

\(\mathrm{(i)}\) Suppose a \(\{1\}\)-inverse of A is given by:

$$\begin{aligned} A^{(1)}= & {} \left[ \begin{array}{cc} X_1&{}X_2\\ X_3&{}X_4\end{array}\right] : \left[ \begin{array}{cc} {\mathscr {R}}(A)\\ {\mathscr {N}}(A^*)\end{array}\right] \rightarrow \left[ \begin{array}{cc} {\mathscr {R}}(B)\\ {\mathscr {N}}(B^*)\end{array}\right] . \end{aligned}$$

From \(AXA=A\) we get that \(X\in A\{1\}\) if and only if \(X_1\) and \(X_3\) satisfy the following equations

$$\begin{aligned}&(A_1X_1+A_2X_3)A_1=A_1, \nonumber \\&(A_1X_1+A_2X_3)A_2=A_2. \end{aligned}$$
(3.13)

Since \(S=\left[ \begin{array}{cc} A_1&A_2\end{array}\right] : \left[ \begin{array}{cc} {\mathscr {R}}(B^*)\\ {\mathscr {N}}(B)\end{array}\right] \rightarrow {\mathscr {R}}(A)\) is a right invertible operator, there exists \(S^{-1}_r: {\mathscr {R}}(A) \rightarrow \left[ \begin{array}{cc} {\mathscr {R}}(B^*)\\ {\mathscr {N}}(B)\end{array}\right] \) such that \(\left[ \begin{array}{cc} A_1&A_2\end{array}\right] S^{-1}_r=I_{{\mathscr {R}}(A)}\). Notice that (3.13) is equivalent to

$$\begin{aligned} \left[ \begin{array}{cc} A_1&A_2\end{array}\right] \left[ \begin{array}{cc} X_1\\ X_3\end{array}\right] \left[ \begin{array}{cc} A_1&A_2\end{array}\right] =\left[ \begin{array}{cc} A_1&A_2\end{array}\right] . \end{aligned}$$
(3.14)

Multiplying (3.14) by \(S^{-1}_r\) from the right, we get that (3.14) is equivalent with \(\left[ \begin{array}{cc} A_1&A_2\end{array}\right] \left[ \begin{array}{cc} X_1\\ X_3\end{array}\right] =I_{{\mathscr {R}}(A)}\), i.e.,

$$\begin{aligned} A_1X_1+A_2X_3=I_{{\mathscr {R}}(A)}. \end{aligned}$$
(3.15)

Note, that for \(X_1\) and \(X_3\) which satisfy (3.15), (3.13) also holds.

\(\mathrm{(ii)}\) Suppose that a \(\{1\}\)-inverse of B is given by:

$$\begin{aligned} B^{(1)}= & {} \left[ \begin{array}{cc} Y_1&{}Y_2\\ Y_3&{}Y_4\end{array}\right] : \left[ \begin{array}{cc} {\mathscr {R}}(B)\\ {\mathscr {N}}(B^*)\end{array}\right] \rightarrow \left[ \begin{array}{cc} {\mathscr {R}}(B^*)\\ {\mathscr {N}}(B)\end{array}\right] . \end{aligned}$$

From \(BB^{(1)}B=B\) it follows that \(B_1Y_1B_1=B_1\) and since \(B_1\) is invertible, \(Y_1=B_1^{-1}\).

\(\mathrm{(iii)}\) Suppose that a \(\{1\}\)-inverse of AB is given by:

$$\begin{aligned} (AB)^{(1)}= & {} \left[ \begin{array}{cc} Z_1&{}Z_2\\ Z_3&{}Z_4\end{array}\right] : \left[ \begin{array}{cc} {\mathscr {R}}(A)\\ {\mathscr {N}}(A^*)\end{array}\right] \rightarrow \left[ \begin{array}{cc} {\mathscr {R}}(B^*)\\ {\mathscr {N}}(B)\end{array}\right] . \end{aligned}$$

From \(AB(AB)^{(1)}AB=AB\), we get

$$\begin{aligned} A_1B_1Z_1A_1B_1=A_1B_1, \end{aligned}$$
(3.16)

and we also see that the operators \(Z_2\), \(Z_3\) and \(Z_4\) can be arbitrary. Now, from (3.16) we see that \(Z_1\in (A_1B_1)\{1\}\). \(\Box \)

Lemma 3.6

Let \(K_1\in {\mathscr {B}}({\mathscr {H}}_1,{\mathscr {H}}_3)\) be left invertible and \(K_2\in {\mathscr {B}}({\mathscr {H}}_2,{\mathscr {H}}_3)\) be arbitrary. If \((I-K_1K_1^{(1)})K_2\) is left invertible for some inner inverse \(K_1^{(1)}\) of \(K_1\), then \(\left[ \begin{array}{cc} K_1&K_2\end{array}\right] :\left[ \begin{array}{cc} {\mathscr {H}}_1\\ {\mathscr {H}}_2\end{array}\right] \rightarrow {\mathscr {H}}_3\) is left invertible.

Proof

By our assumptions there are \(X\in {\mathscr {B}}({\mathscr {H}}_3,{\mathscr {H}}_1)\), an inner inverse \(K_1^{(1)}\) of \(K_1\) and \(Y_0\in {\mathscr {B}}({\mathscr {H}}_3,{\mathscr {H}}_2)\) such that \(XK_1=I\) and \(Y_0(I-K_1K_1^{(1)})K_2=I\). It is easily verified that \(D \left[ \begin{array}{cc} K_1&K_2\end{array}\right] = I\), where

$$D=\left[ \begin{array}{cc} X-XK_2Y\\ Y\end{array}\right] :{\mathscr {H}}_3\rightarrow \left[ \begin{array}{cc} {\mathscr {H}}_1\\ {\mathscr {H}}_2\end{array}\right] $$

for \(Y=Y_0(I-K_1K_1^{(1)})\). \(\Box \)

To enhance readability of the proof of our main result, we will first prove it under the assumption that \(\mathrm{dim}{\mathscr {N}}(A^*)\le \mathrm{dim}{\mathscr {N}}(B)\), then directly derive from that the version in the remaining case \(\mathrm{dim}{\mathscr {N}}(B)\le \mathrm{dim}{\mathscr {N}}(A^*)\), and finally simply combine the two results in Theorem 3.10 in which no assumptions are made.

The following auxiliary theorem will play a key role in the proof of our main result.

Theorem 3.7

Let regular operators \(A\in {\mathscr {B}}({\mathscr {H}},{\mathscr {K}})\) and \(B\in {\mathscr {B}}({\mathscr {L}},{\mathscr {H}})\) be given by (3.7). If \(\mathrm{dim}{\mathscr {N}}(A^*)\le \mathrm{dim}{\mathscr {N}}(B)\) and AB is regular, then the following conditions are equivalent:

\(\mathrm{(i)}\) :

\((AB)\{1\}\subseteq B\{1\} A\{1\}\),

\(\mathrm{(ii)}\) :

For any \((A_1B_1)^{(1)}\in (A_1B_1)\{1\}\) and \(Z_2\in {\mathscr {B}}({\mathscr {N}}(A^*),{\mathscr {R}}(B^*))\), there exist operators \(W_3\in {\mathscr {B}}({\mathscr {R}}(A),{\mathscr {N}}(B^*))\) with \({\mathscr {R}}(I-A_2W_3)\subseteq {\mathscr {R}}(A_1)\) and \(W_4\in {\mathscr {B}}({\mathscr {N}}(A^*),\) \({\mathscr {N}}(B^*))\) such that

$$\begin{aligned} X'=\left[ \begin{array}{cc} B_1(A_1B_1)^{(1)} &{}B_1Z_2\\ W_3&{}W_4\end{array}\right] : \left[ \begin{array}{cc} {\mathscr {R}}(A)\\ {\mathscr {N}}(A^*)\end{array}\right] \rightarrow \left[ \begin{array}{cc} {\mathscr {R}}(B)\\ {\mathscr {N}}(B^*)\end{array}\right] \end{aligned}$$
(3.17)

is left invertible,

\(\mathrm{(iii)}\) :

For any \((A_1B_1)^{(1)}\in (A_1B_1)\{1\}\) and \(Z_2\in {\mathscr {B}}({\mathscr {N}}(A^*),{\mathscr {R}}(B^*))\), there exists \(W_3\in {\mathscr {B}}({\mathscr {R}}(A),{\mathscr {N}}(B^*))\) with \({\mathscr {R}}(I-A_2W_3)\subseteq {\mathscr {R}}(A_1)\) such that at least one of the following two conditions is satisfied

(1):

\({\mathscr {N}}(W_3^*\mid (B_1(A_1B_1)^{(1)})^*;\ {\mathscr {N}}(A^*))\) contains a non-compact operator

(2):

\(X_0=\left[ \begin{array}{cc} B_1(A_1B_1)^{(1)}&{}B_1Z_2\\ W_3&{}0\end{array}\right] \) is a left-Fredholm operator and

$$ n(X_0)\le d(W_3)+\mathrm{dim}\left( {\mathscr {R}}(W_3^*)\cap {\mathscr {R}}\left( ((B_1(A_1B_1)^{(1)})^*|_{{\mathscr {N}}((B_1Z_2)^*)}\right) \right) . $$

Proof

Condition \(\mathrm (i)\) states that for any \((AB)^{(1)}\in (AB)\{1\}\) there exist \(A^{(1)}\in A\{1\}\) and \(B^{(1)}\in B\{1\}\) such that

$$ (AB)^{(1)}=B^{(1)}A^{(1)} $$

which is using Lemma 3.5, equivalent with the fact that for any \((A_1B_1)^{(1)}\in (A_1B_1)\{1\}\), \(Z_2\in {\mathscr {B}}({\mathscr {N}}(A^*),{\mathscr {R}}(B^*))\), \(Z_3\in {\mathscr {B}}({\mathscr {R}}(A),{\mathscr {N}}(B))\) and \(Z_4\in {\mathscr {B}}({\mathscr {N}}(A^*),\) \({\mathscr {N}}(B))\), there exist \(Y_2\in {\mathscr {B}}({\mathscr {N}}(B^*),{\mathscr {R}}(B^*))\), \(Y_3\in {\mathscr {B}}({\mathscr {R}}(B),{\mathscr {N}}(B))\), \(Y_4\in \) \({\mathscr {B}}({\mathscr {N}}(B^*),\) \({\mathscr {N}}(B))\) and \(X=\left[ \begin{array}{cc} X_1&{}X_2\\ X_3&{}X_4\end{array}\right] : \left[ \begin{array}{cc} {\mathscr {R}}(A)\\ {\mathscr {N}}(A^*)\end{array}\right] \rightarrow \left[ \begin{array}{cc} {\mathscr {R}}(B)\\ {\mathscr {N}}(B^*)\end{array}\right] \) satisfying (3.10) such that

$$\begin{aligned} \left[ \begin{array}{cc} (A_1B_1)^{(1)}&{}Z_2\\ Z_3&{}Z_4\end{array}\right] =\left[ \begin{array}{cc} B_1^{-1}&{}Y_2\\ Y_3&{}Y_4\end{array}\right] \left[ \begin{array}{cc} X_1&{}X_2\\ X_3&{}X_4\end{array}\right] , \end{aligned}$$

i.e.,

$$\begin{aligned}&\left[ \begin{array}{cc} (A_1B_1)^{(1)}&Z_2\end{array}\right] =\left[ \begin{array}{cc} B_1^{-1}&Y_2\end{array}\right] X\end{aligned}$$
(3.18)
$$\begin{aligned}&\left[ \begin{array}{cc} Z_3&Z_4\end{array}\right] =\left[ \begin{array}{cc} Y_3&Y_4\end{array}\right] X. \end{aligned}$$
(3.19)

In general for arbitrary but fixed \(Y_2\) the Eq. (3.18) is solvable for X and the set of the solutions is given by

$$\begin{aligned} S= & {} \left\{ \left[ \begin{array}{c} B_1\\ 0\end{array}\right] \right. \left[ \begin{array}{cc} (A_1B_1)^{(1)}&Z_2\end{array}\right] + \left( I- \left[ \begin{array}{c} B_1\\ 0\end{array}\right] \left[ \begin{array}{cc} B_1^{-1}&Y_2\end{array}\right] \right) W:\nonumber \\&\left. W\in {\mathscr {B}}({\mathscr {K}},{\mathscr {H}})\right\} \nonumber \\= & {} \left\{ \left[ \begin{array}{cc} B_1(A_1B_1)^{(1)}-B_1Y_2W_3 &{}B_1Z_2-B_1Y_2W_4\\ W_3&{}W_4\end{array}\right] : \right. \\&\left. \left[ \begin{array}{cc} W_1&{}W_2\\ W_3&{}W_4\end{array}\right] : \left[ \begin{array}{cc} {\mathscr {R}}(A)\\ {\mathscr {N}}(A^*)\end{array}\right] \rightarrow \left[ \begin{array}{cc} {\mathscr {R}}(B)\\ {\mathscr {N}}(B^*)\end{array}\right] \right\} .\nonumber \end{aligned}$$
(3.20)

Thus \(\mathrm{(i)}\) is equivalent with the existence of at least one \(X\in S\cap A\{1\}\) for which the Eq. (3.19) is solvable for \(\left[ \begin{array}{cc} Y_3&Y_4\end{array}\right] \). That is \(\mathrm (i)\) holds if and only if for any \((A_1B_1)^{(1)}\in (A_1B_1)\{1\}\), \(Z_2\in {\mathscr {B}}({\mathscr {N}}(A^*),{\mathscr {R}}(B^*))\), \(Z_3\in {\mathscr {B}}({\mathscr {R}}(A),{\mathscr {N}}(B))\) and \(Z_4\in {\mathscr {B}}({\mathscr {N}}(A^*),\) \({\mathscr {N}}(B))\) there exist operators \(W_3\in {\mathscr {B}}({\mathscr {R}}(A),{\mathscr {N}}(B^*))\), \(W_4\in {\mathscr {B}}({\mathscr {N}}(A^*),{\mathscr {N}}(B^*))\) and \(Y_2\in {\mathscr {B}}({\mathscr {N}}(B^*),{\mathscr {R}}(B^*))\) such that \(K_1=\) \(\left[ \begin{array}{cc} B_1(A_1B_1)^{(1)}-B_1Y_2W_3 \\ W_3 \end{array}\right] \) is a right inverse of \(\left[ \begin{array}{cc} A_1&A_2\end{array}\right] \) and the following system

$$\begin{aligned}&Z_3=\left[ \begin{array}{cc} Y_3&Y_4\end{array}\right] K_1 \end{aligned}$$
(3.21)
$$\begin{aligned}&Z_4=\left[ \begin{array}{cc} Y_3&Y_4\end{array}\right] K_2, \end{aligned}$$
(3.22)

is solvable for \(\left[ \begin{array}{cc} Y_3&Y_4\end{array}\right] \), where \(K_2=\left[ \begin{array}{cc} B_1Z_2-B_1Y_2W_4 \\ W_4\end{array}\right] \).

This is the reformulation of the condition \(\mathrm (i)\) that we will use in proving the implication \(\mathrm (i)\Rightarrow \mathrm (ii)\).

\(\mathrm (i)\Rightarrow \mathrm (ii)\): Let \((A_1B_1)^{(1)}\in (A_1B_1)\{1\}\) and \(Z_2\in {\mathscr {B}}({\mathscr {N}}(A^*),{\mathscr {R}}(B^*))\). Taking \(Z_3=0\) and a left invertible \(Z_4\in {\mathscr {B}}({\mathscr {N}}(A^*),{\mathscr {N}}(B))\) (such \(Z_4\) exists since \(\mathrm{dim}{\mathscr {N}}(A^*)\le \mathrm{dim}{\mathscr {N}}(B)\)), the condition \(\mathrm (i)\) yields an operator \(\left[ \begin{array}{cc} K_1&K_2\end{array}\right] \) as described above. Since the Eqs. (3.21) and (3.22) have a common solution and \(K_1\) is regular, we get that

$$\begin{aligned} Z_4=W(I-K_1K_1^{(1)})K_2, \end{aligned}$$

for some \(W\in {\mathscr {B}}({\mathscr {H}},{\mathscr {N}}(B))\) and some (any) \(K_1^{(1)}\). Left invertibility of \(Z_4\) implies left invertibility of \(T=(I-K_1K_1^{(1)})K_2\) which, given that \(K_1\) is left invertible, implies that \(X=\left[ \begin{array}{cc} K_1&K_2\end{array}\right] \) is a left invertible operator by Lemma 3.6. It can easily be checked that X is left invertible if and only if

$$ X'=\left[ \begin{array}{cc} B_1(A_1B_1)^{(1)} &{}B_1Z_2\\ W_3&{}W_4\end{array}\right] $$

is left invertible.

Finally, \(\left[ \begin{array}{cc} A_1&A_2\end{array}\right] K_1=I\) means just that

$$ A_1B_1Y_2W_3=A_2W_3-\left( I-(A_1B_1)(A_1B_1)^{(1)}\right) $$

which upon multiplication from the left by \(I-(A_1B_1)(A_1B_1)^{(1)}\) gives

$$ \left( I-(A_1B_1)(A_1B_1)^{(1)}\right) A_2W_3=I-(A_1B_1)(A_1B_1)^{(1)}, $$

i.e., \({\mathscr {R}}(I-A_2W_3)\subseteq {\mathscr {R}}(A_1B_1)={\mathscr {R}}(A_1)\).

\(\mathrm (ii)\Rightarrow \mathrm (i)\): Let \((A_1B_1)^{(1)}\in (A_1B_1)\{1\}\), \(Z_2\in {\mathscr {B}}({\mathscr {N}}(A^*),{\mathscr {R}}(B^*))\), \(Z_3\in {\mathscr {B}}({\mathscr {R}}(A),{\mathscr {N}}(B))\) and \(Z_4\in {\mathscr {B}}({\mathscr {N}}(A^*),\) \({\mathscr {N}}(B))\) be arbitrary. By our assumption, there are operators \(W_3\) and \(W_4\) acting between appropriate spaces such that \(X'\) given by (3.17) is left invertible and \({\mathscr {R}}(I-A_2W_3)\subseteq {\mathscr {R}}(A_1B_1)\). The latter condition implies that for \(Y_2=(A_1B_1)^{(1)}A_2\) the operator

$$\begin{aligned} X=\left[ \begin{array}{cc} B_1(A_1B_1)^{(1)}-B_1Y_2W_3 &{}B_1Z_2-B_1Y_2W_4\\ W_3&{}W_4\end{array}\right] , \end{aligned}$$
(3.23)

is an inner inverse of A. Also \(X\in S\) so (3.18) is satisfied. As before, left invertibility of \(X'\) implies left invertibility of X, so the Eq. (3.19) is solvable for \(\left[ \begin{array}{cc} Y_3&Y_4\end{array}\right] \). Thus \(\left[ \begin{array}{cc} (A_1B_1)^{(1)}&{}Z_2\\ L_3&{}Z_4\end{array}\right] \in B{\{1\}} A{\{1\}}\).

\(\mathrm (ii)\Leftrightarrow \mathrm (iii)\): This follows from Corollary 3.1.\(\Box \)

The following lemma of technical character will be needed later.

Lemma 3.7

Let

$$D=\left[ \begin{array}{cc} A_1&A_2\end{array}\right] : \left[ \begin{array}{cc} {\mathscr {H}}_1\\ {\mathscr {H}}_2\end{array}\right] \rightarrow {\mathscr {H}}_3$$

be a right invertible operator such that \(A_1\) has closed range. Suppose \({\mathscr {H}}_2={\mathscr {M}}\oplus {\mathscr {N}}(A_2)\) and \({\mathscr {H}}_3={\mathscr {R}}(A_1)\oplus {\mathscr {N}}\), with \({\mathscr {M}}\) and \({\mathscr {N}}\) closed, and let

$$A_2=\left[ \begin{array}{cc} A_2^{'} &{}0\\ A_2^{''}&{}0 \end{array}\right] : \left[ \begin{array}{cc} {\mathscr {M}}\\ {\mathscr {N}}(A_2)\end{array}\right] \rightarrow \left[ \begin{array}{cc} {\mathscr {R}}(A_1)\\ {\mathscr {N}}\end{array}\right] $$
\(\mathrm (i)\) :

An operator \(W:{\mathscr {H}}_3\rightarrow {\mathscr {H}}_2\) satisfies \({\mathscr {R}}(I-A_2W)\subseteq {\mathscr {R}}(A_1)\) if and only if it has a representation

$$\begin{aligned} W=\left[ \begin{array}{cc} W_1 &{}W_2\\ W_3&{}W_4 \end{array}\right] : \left[ \begin{array}{cc} {\mathscr {R}}(A_1)\\ {\mathscr {N}}\end{array}\right] \rightarrow \left[ \begin{array}{cc} {\mathscr {M}}\\ {\mathscr {N}}(A_2)\end{array}\right] \end{aligned}$$
(3.24)

where \(A_2^{''}W_1=0\) and \(A_2^{''}W_2=I\). There is at least one such operator.

\(\mathrm (ii)\) :

\(\mathrm{dim}{\mathscr {N}}\le \mathrm{dim}{\mathscr {M}}\).

Proof

\(\mathrm (i)\) Suppose W is given by (3.24). From

$$I-A_2W=\left[ \begin{array}{cc} I-A_2^{'}W_1 &{}-A_2^{'}W_2\\ -A_2^{''}W_1&{}I -A_2^{''}W_2 \end{array}\right] : \left[ \begin{array}{cc} {\mathscr {R}}(A_1)\\ {\mathscr {N}}\end{array}\right] \rightarrow \left[ \begin{array}{cc} {\mathscr {R}}(A_1)\\ {\mathscr {N}}\end{array}\right] $$

we see that \({\mathscr {R}}(I-A_2W)\subseteq {\mathscr {R}}(A_1)\) holds if and only if \(A_2^{''}W_1=0\) and \(A_2^{''}W_2=I\). One such operator is obtained by taking \(W=X_2\) where

$$\left[ \begin{array}{cc} X_1\\ H_2 \end{array}\right] : {\mathscr {H}}_3 \rightarrow \left[ \begin{array}{cc} {\mathscr {H}}_1\\ {\mathscr {H}}_2\end{array}\right] $$

is any right inverse of D.

\(\mathrm (ii)\) The inequality follows from the fact that the existence of an operator W as in \(\mathrm (i)\) implies that \(A_2^{''}:{\mathscr {M}}\rightarrow {\mathscr {N}}\) is right invertible. It can also be trivially seen to hold true directly, without any recourse to \(\mathrm (i)\). \(\Box \)

The following theorem gives necessary and sufficient conditions for the inclusion \((AB)\{1\}\subseteq B\{1\} A\{1\}\) to hold under the additional assumption that \(\mathrm{dim}{\mathscr {N}}(A^*)\le \mathrm{dim}{\mathscr {N}}(B)\). As we will explain later, the main result is practically a direct consequence of it.

Theorem 3.8

Let regular operators \(A\in {\mathscr {B}}({\mathscr {H}},{\mathscr {K}})\) and \(B\in {\mathscr {B}}({\mathscr {L}},{\mathscr {H}})\) be given by (3.7). If \(\mathrm{dim}{\mathscr {N}}(A^*)\le \mathrm{dim}{\mathscr {N}}(B)\) and AB is regular, then the following conditions are equivalent:

\(\mathrm (i)\) \((AB)\{1\}\subseteq B\{1\} A\{1\}\),

\(\mathrm (ii)\) One of the following conditions is satisfied:

  • \(\mathrm (a)\) \(\mathrm{dim}{\mathscr {N}}(B^*)<\infty \) and \(\mathrm{dim}{\mathscr {N}}(A_1^*)+\mathrm{dim}{\mathscr {N}}(A^*)\le \mathrm{dim}{\mathscr {N}}(B^*)\)

  • \(\mathrm (b)\) \(\mathrm{dim}{\mathscr {N}}(B^*)=\infty \) and \(\mathrm{dim}{\mathscr {N}}(A^*)\le \mathrm{dim}{\mathscr {N}}(A_2^{''})+\mathrm{dim}{\mathscr {N}}(A_2)\),

where \(A_2^{''}=P_{{\mathscr {N}}(A_1^*)} A_2|_{{\mathscr {R}}\left( A_2^*\right) }\).

Proof

\(\mathrm (i)\Rightarrow \mathrm (ii)\): We distinguish two cases:

Case 1. \(\mathrm{dim}{\mathscr {N}}(B^*)<\infty \). Using Theorems 3.7 and 3.4 we see that

$$ \mathrm{dim}{\mathscr {N}}\Big (\left[ \begin{array}{cc} B_1(A_1B_1)^\dagger&B_1Z_2\end{array}\right] \Big )\le \mathrm{dim}{\mathscr {N}}(B^*), $$

for any operator \(Z_2\) which belongs to \({\mathscr {B}}({\mathscr {N}}(A^*), {\mathscr {R}}(B^*))\), since by our assumption there always are \(W_3\in {\mathscr {B}}({\mathscr {R}}(A),{\mathscr {N}}(B^*))\) and \(W_4\in {\mathscr {B}}({\mathscr {N}}(A^*),{\mathscr {N}}(B^*))\) such that \(X'\) is left invertible. In particular, for \(Z_2=0\) we have that \({\mathscr {N}}\Big (\left[ \begin{array}{cc} B_1(A_1B_1)^{(1)}&B_1Z_2\end{array}\right] \Big )= {\mathscr {N}}(A_1^*)\oplus {\mathscr {N}}(A^*)\), hence \(\mathrm{dim}{\mathscr {N}}(A_1^*)+\mathrm{dim} {\mathscr {N}}(A^*)\le \mathrm{dim}{\mathscr {N}}(B^*)\). Thus \(\mathrm (a)\) holds.

Case 2. \(\mathrm{dim}{\mathscr {N}}(B^*)=\infty \). Taking \(Z_2=0\) and \((A_1B_1)^{(1)}=(A_1B_1)^{\dagger }\) we obtain an operator \(W_3\) such that \({\mathscr {R}}(I-A_2W_3)\subseteq {\mathscr {R}}(A_1)\) for which one of the conditions (1) and (2) from \(\mathrm (iii)\) of Theorem 3.7 is satisfied. From Lemma 3.7, we know that

$$\begin{aligned} W_3=\left[ \begin{array}{cc} L&{}J \\ K &{} T\end{array}\right] :\left[ \begin{array}{cc} {\mathscr {R}}(A_1)\\ {\mathscr {N}}(A_1^*)\end{array}\right] \rightarrow \left[ \begin{array}{cc} {\mathscr {R}}(A_2^*)\\ {\mathscr {N}}(A_2)\end{array}\right] , \end{aligned}$$
(3.25)

where \(A_2^{''}L=0\), \(A_2^{''}J=I\) and

$$A_2=\left[ \begin{array}{cc} A_2^{'} &{}0\\ A_2^{''}&{}0 \end{array}\right] : \left[ \begin{array}{cc} {\mathscr {R}}(A_2^*)\\ {\mathscr {N}}(A_2)\end{array}\right] \rightarrow \left[ \begin{array}{cc} {\mathscr {R}}(A_1)\\ {\mathscr {N}}(A_1^*)\end{array}\right] .$$

If \({\mathscr {N}}(W_3^*\mid (B_1(A_1B_1)^{\dagger })^*)\) contains a non-compact operator, then there is a (closed) infinite dimensional subspace \({\mathscr {M}}\) of \({\mathscr {N}}(B^*)\) such that

$$\begin{aligned} {\mathscr {R}}\left( W_3^*\mid _{{\mathscr {M}}}\right) \subseteq {\mathscr {R}}\left( (B_1(A_1B_1)^{\dagger })^*\right) ={\mathscr {R}}(A_1). \end{aligned}$$
(3.26)

From (3.26) it follows that \({\mathscr {M}}\subseteq {\mathscr {N}}\left( \left[ \begin{array}{cc} J^*&T^* \end{array}\right] \right) \). Now

$$\mathrm{dim}{\mathscr {N}}\left( \left[ \begin{array}{cc} J^*&T^* \end{array}\right] \right) \le \mathrm{dim}{\mathscr {N}}(J^*)+\mathrm{dim}{\mathscr {N}}(A_2)=\mathrm{dim}{\mathscr {N}}(A_2^{''})+\mathrm{dim}{\mathscr {N}}(A_2),$$

since \(\mathrm{dim}{\mathscr {N}}(J^*)=\mathrm{dim}{\mathscr {N}}(A_2^{''})\), J being a right inverse of \(A_2^{''}\). As \({\mathscr {M}}\) is infinite dimensional, it follows that \(\mathrm{dim}{\mathscr {N}}(A_2^{''})+\mathrm{dim}{\mathscr {N}}(A_2)=\infty \). Thus \(\mathrm (b)\) holds.

Suppose the condition (2) from \(\mathrm (iii)\) of Theorem 3.7 holds. We have

$$ {\mathscr {R}}\big ((B_1(A_1B_1)^{\dagger })^*|_{{\mathscr {N}}((B_1Z_2)^*)}\big )= {\mathscr {R}}(A_1) $$

and also

$$ d(W_3)= n\left( \left[ \begin{array}{cc} L^*&K^* \end{array}\right] \mid _{{\mathscr {N}}\left( \left[ \begin{array}{cc} J^*&T^* \end{array}\right] \right) }\right) $$

and

$${\mathscr {R}}(W_3^*)\cap {\mathscr {R}}\big ((B_1(A_1B_1)^{\dagger })^*|_{{\mathscr {N}}((B_1Z_2)^*)}\big )= {\mathscr {R}}\left( \left[ \begin{array}{cc} L^*&K^* \end{array}\right] \mid _{{\mathscr {N}}\left( \left[ \begin{array}{cc} J^*&T^* \end{array}\right] \right) }\right) .$$

The inclusion \({\mathscr {R}}(I-A_2W_3)\subseteq {\mathscr {R}}(A_1)\) implies that for \(Y_2=(A_1B_1)^{(1)}A_2\) the first column of the operator X given by (3.23) is left invertible. Thus the first column of the operator \(X_0\) is also left invertible so \({\mathscr {N}}(X_0)= {\mathscr {N}}(A^*)\). Hence \(n(A^*)\le n\left( \left[ \begin{array}{cc} J^*&T^* \end{array}\right] \right) \). Now using \(A_2^{''}J=I\) we get

$$\begin{aligned} n\left( \left[ \begin{array}{cc} J^*&T^* \end{array}\right] \right)&= n(J^*)+n(T^*)+\mathrm{dim}({\mathscr {R}}(J^*)\cap {\mathscr {R}}(T^*))\\&= \mathrm{dim}{\mathscr {N}}(A_2^{''})+\mathrm{dim}{\mathscr {N}}(A_2). \end{aligned}$$

Again, \(\mathrm (b)\) holds.

We now turn to establishing the implication \(\mathrm (ii)\Rightarrow \mathrm (i)\).

\(\mathrm (a)\Rightarrow \mathrm (i)\): We will show that condition \(\mathrm (ii)\) from Theorem 3.7 is satisfied. Let \((A_1B_1)^{(1)}\in (A_1B_1)\{1\}\) and \(Z_2\in {\mathscr {B}}({\mathscr {N}}(A^*),{\mathscr {R}}(B^*))\) be given.

By Lemma 3.7, we can fix a right inverse \(J:{\mathscr {N}}(A_1^*)\rightarrow {\mathscr {R}}(A_2^*)\) of \(A_2^{''}=P_{{\mathscr {N}}(A_1^*)} A_2|_{{\mathscr {R}}\left( A_2^*\right) }\). Consider

$$ W_3=\left[ \begin{array}{cc} J&{}0\\ 0&{}0\end{array}\right] :\left[ \begin{array}{cc} {\mathscr {N}}(A_1^*)\\ {\mathscr {R}}(A_1)\end{array}\right] \rightarrow \left[ \begin{array}{cc} {\mathscr {R}}(A_2^*)\\ {\mathscr {N}}(A_2)\end{array}\right] . $$

Using Lemma 3.7, we have that \({\mathscr {R}}(I-A_2W_3)\subseteq {\mathscr {R}}(A_1)\). Put \({\mathscr {M}}={\mathscr {R}}(W_3)={\mathscr {R}}(J)\). Since J is left invertible, \(\mathrm{dim}{\mathscr {M}}= \mathrm{dim}{\mathscr {N}}(A_1^*)\). Since \(\mathrm{dim}{\mathscr {N}}(A_1^*)+\mathrm{dim}{\mathscr {N}}(A^*)\le \mathrm{dim}{\mathscr {N}}(B^*)<\infty \) it follows that \(\mathrm{dim}{\mathscr {N}}(A^*)\le \mathrm{dim}{\mathscr {M}}^\bot \), where \({\mathscr {M}}^\bot \) is the orthogonal complement of \({\mathscr {M}}\) in \({\mathscr {N}}(B^*)\). Hence there is a left invertible \(W_4\in {\mathscr {B}}({\mathscr {N}}(A^*),{\mathscr {N}}(B^*))\) such that \({\mathscr {R}}(W_4)\subseteq {\mathscr {M}}^\bot \). We will show that \(X'\) given by (3.17) is left invertible.

To see that \(X'\) has closed range suppose \(x_n\in {\mathscr {R}}(A)\) and \(y_n\in {\mathscr {N}}(A^*)\) for \(n\in \mathbb N\) are such that

$$\begin{aligned} B_1(A_1B_1)^{(1)}x_n+B_1Z_2y_n\rightarrow u, \ \ W_3x_n+W_4y_n\rightarrow v. \end{aligned}$$

Since \({\mathscr {R}}(W_4)\subseteq {\mathscr {R}}(W_3)^{\bot }\) and \(W_4\) is left invertible, it follows that \(y_n\rightarrow y\) for some \(y\in {\mathscr {N}}(A^*)\). Using left invertibility of \(\left[ \begin{array}{cc} B_1(A_1B_1)^{(1)}\\ W_3\end{array}\right] \), we get that \(x_n\rightarrow x\) for some \(x\in {\mathscr {R}}(A)\). Hence \(\left[ \begin{array}{cc} u\\ v\end{array}\right] \in {\mathscr {R}}(X')\).

We now show that \(X'\) is injective. If \(\left[ \begin{array}{cc} x\\ y\end{array}\right] \in {\mathscr {N}}(X')\), then

$$\begin{aligned} B_1(A_1B_1)^{(1)}x+B_1Z_2y=0\\ W_3x+W_4y=0. \end{aligned}$$

Since \({\mathscr {M}}={\mathscr {R}}(W_3)\) and \({\mathscr {R}}(W_4)\subseteq {\mathscr {M}}^\bot \) it follows that \(W_3x=W_4y=0\). The injectivity of \(W_4\) now gives \(y=0\) and so \(B_1(A_1B_1)^{(1)}x=0\). The inclusion \({\mathscr {R}}(I-A_2W_3)\subseteq {\mathscr {R}}(A_1)={\mathscr {R}}(A_1B_1)\) implies

$$ A_1B_1(A_1B_1)^{(1)}(I-A_2W_3)=I-A_2W_3 $$

yielding \(x=0\).

\(\mathrm (b)\Rightarrow \mathrm (i)\): Let \((A_1B_1)^{(1)}\in (A_1B_1)\{1\}\) and \(Z_2\in {\mathscr {B}}({\mathscr {N}}(A^*),{\mathscr {R}}(B^*))\) be given. By Lemma 3.7 we have \({\mathscr {R}}(I-A_2W_3)\subseteq {\mathscr {R}}(A_1)\) for the operator \(W_3\) defined by (3.25) where \(L=0\), \(K=0\), \(T=0\) and \(J:{\mathscr {N}}(A_1^*)\rightarrow {\mathscr {R}}(A_2^*)\) is any right inverse of \(A_2^{''}=P_{{\mathscr {N}}(A_1^*)} A_2|_{{\mathscr {R}}\left( A_2^*\right) }\) (Lemma 3.7 guarantees that there is one).

Since J is a right inverse of \(A_2^{''}\) we have that \({\mathscr {R}}(J)\oplus {\mathscr {N}}\left( A_2^{''}\right) ={\mathscr {R}}\left( A_2^*\right) \), so \(\mathrm{dim}{\mathscr {R}}(J)^{\bot }=\mathrm{dim}{\mathscr {N}}(A_2^{''})+\mathrm{dim}{\mathscr {N}}(A_2)\), where \({\mathscr {R}}(J)^{\bot }\) is the orthogonal complement of \({\mathscr {R}}(J)\) in \({\mathscr {N}}(B^*)\). Since \(\mathrm (b)\) gives \(\mathrm{dim}{\mathscr {N}}(A^*)\le \mathrm{dim}{\mathscr {N}}(A_2^{''})+\mathrm{dim}{\mathscr {N}}(A_2)=\mathrm{dim}{\mathscr {R}}(J)^{\bot }\), there is a left invertible \(W_4:{\mathscr {N}}(A^*)\rightarrow {\mathscr {N}}(B^*)\) such that \({\mathscr {R}}(W_4)\subseteq {\mathscr {R}}(J)^{\bot }\). That the operator \(X'\) given by (3.17) is left invertible can now be proved exactly as in \(\mathrm (a)\Rightarrow \mathrm (i)\).

We have thus shown that \(\mathrm (ii)\) of Theorem 3.7 holds. \(\Box \)

A standard argument allows us to easily turn the previous theorem into one dealing with the remaining case \(\mathrm{dim}{\mathscr {N}}(B)\le \mathrm{dim}{\mathscr {N}}(A^*)\).

Theorem 3.9

Let regular operators \(A\in {\mathscr {B}}({\mathscr {H}},{\mathscr {K}})\) and \(B\in {\mathscr {B}}({\mathscr {L}},{\mathscr {H}})\) be given by (3.7). If \(\mathrm{dim}{\mathscr {N}}(B)\le \mathrm{dim}{\mathscr {N}}(A^*)\) and AB is regular, then the following conditions are equivalent:

\(\mathrm (i)\) \((AB)\{1\}\subseteq B\{1\} A\{1\}\),

\(\mathrm (ii)\) One of the following conditions is satisfied:

\(\mathrm (a)\) :

\(\mathrm{dim}{\mathscr {N}}(A)<\infty \) and \(\mathrm{dim}{\mathscr {N}}(B_1^*)+\mathrm{dim}{\mathscr {N}}(B)\le \mathrm{dim}{\mathscr {N}}(A)\)

\(\mathrm (b)\) :

\(\mathrm{dim}{\mathscr {N}}(A)=\infty \) and \(\mathrm{dim}{\mathscr {N}}(B)\le \mathrm{dim}{\mathscr {N}}(B_2^{''})+\mathrm{dim}{\mathscr {N}}(B_2)\),

where \(B_1= P_{{\mathscr {R}}(B^*)} B^*|_{{\mathscr {R}}\left( A^*\right) }\), \(B_2= P_{{\mathscr {R}}(B^*)} B^*|_{{\mathscr {N}}\left( A\right) }\) and \(B_2^{''}=P_{{\mathscr {N}}(B_1^*)} B_2|_{{\mathscr {R}}(B_2^*)}\).

Proof

Since \(\mathrm (i)\) is equivalent with

$$\begin{aligned} (B^*A^*)\{1\}\subseteq A^*\{1\} B^*\{1\}, \end{aligned}$$
(3.27)

we can apply Theorem 3.8 to the operators \(B^*\) and \(A^*\) instead of A and B, respectively. \(\Box \)

Combining Theorems 3.8 and 3.9 we are finally in the position to state the main result of this section.

Theorem 3.10

Let regular operators \(A\in {\mathscr {B}}({\mathscr {H}},{\mathscr {K}})\) and \(B\in {\mathscr {B}}({\mathscr {L}},{\mathscr {H}})\) be given by (3.7) and let AB be regular. Then the following conditions are equivalent:

\(\mathrm (i)\) \((AB)\{1\}\subseteq B\{1\} A\{1\}\),

\(\mathrm (ii)\) One of the following conditions is satisfied:

  • \(\mathrm{(a)}\) \(\mathrm{dim}{\mathscr {N}}(A^*)\le \mathrm{dim}{\mathscr {N}}(B)\), \(\mathrm{dim}{\mathscr {N}}(A_1^*)+ \mathrm{dim}{\mathscr {N}}(A^*)\le \mathrm{dim}{\mathscr {N}}(B^*)\) and     \(\mathrm{dim}{\mathscr {N}}(B^*)<\infty \),

  • \(\mathrm (b)\) \(\mathrm{dim}{\mathscr {N}}(A^*)\le \mathrm{dim}{\mathscr {N}}(B)\), \(\mathrm{dim}{\mathscr {N}}(A^*)\le \mathrm{dim}{\mathscr {N}}(A_2^{''})+\mathrm{dim}{\mathscr {N}}(A_2)\) and     \(\mathrm{dim}{\mathscr {N}}(B^*)=\infty \),

  • \(\mathrm (c)\) \(\mathrm{dim}{\mathscr {N}}(B)\le \mathrm{dim}{\mathscr {N}}(A^*)\), \(\mathrm{dim}{\mathscr {N}}(B_1^*)+ \mathrm{dim}{\mathscr {N}}(B)\le \mathrm{dim}{\mathscr {N}}(A)\) and     \(\mathrm{dim}{\mathscr {N}}(A)<\infty \),

  • \(\mathrm (d)\) \(\mathrm{dim}{\mathscr {N}}(B)\le \mathrm{dim}{\mathscr {N}}(A^*)\), \(\mathrm{dim}{\mathscr {N}}(B)\le \mathrm{dim}{\mathscr {N}}(B_2^{''})+\mathrm{dim}{\mathscr {N}}(B_2)\) and     \(\mathrm{dim}{\mathscr {N}}(A)=\infty \),

where \(A_2^{''}=P_{{\mathscr {N}}(A_1^*)} A_2|_{{\mathscr {R}}\left( A_2^*\right) }\), \(B_1= P_{{\mathscr {R}}(B^*)} B^*|_{{\mathscr {R}}\left( A^*\right) }\), \(B_2= P_{{\mathscr {R}}(B^*)} B^*|_{{\mathscr {N}}\left( A\right) }\) and \(B_2^{''}=P_{{\mathscr {N}}(B_1^*)} B_2|_{{\mathscr {R}}(B_2^*)}\).

As a corollary of the previous theorem, in the case of matrices we recover the following known result:

Corollary 3.2

Let \(A\in \mathbb {C}^{m\times n}\) and \(B\in \mathbb {C}^{n\times p}\). The following conditions are equivalent:

\(\mathrm (i)\) \((AB)\{1\}\subseteq B\{1\} A\{1\}\),

\(\mathrm (ii)\) \(r(A)+r(B)-n\le r(AB)- \mathrm{min}\{m-r(A), p-r(B)\}\).
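In the matrix case the criterion of Corollary 3.2 is a pure rank count, so it can be tested numerically. The following NumPy sketch (an illustration only; the helper names rank and rol_rank_criterion are ours) evaluates the inequality for a pair of invertible matrices, where the inclusion trivially holds, and for two complementary orthogonal projections, where \(AB=0\) and the criterion fails.

```python
import numpy as np

def rank(M, tol=1e-10):
    # numerical rank via singular values
    return int(np.sum(np.linalg.svd(M, compute_uv=False) > tol))

def rol_rank_criterion(A, B):
    """Rank inequality of Corollary 3.2 for (AB){1} <= B{1}A{1}:
       r(A) + r(B) - n <= r(AB) - min(m - r(A), p - r(B))."""
    m, n = A.shape
    _, p = B.shape
    rA, rB, rAB = rank(A), rank(B), rank(A @ B)
    return rA + rB - n <= rAB - min(m - rA, p - rB)

# invertible factors: every {1}-inverse is the ordinary inverse and the law holds
A = np.array([[2.0, 1.0], [0.0, 1.0]])
B = np.array([[1.0, 3.0], [0.0, 2.0]])
print(rol_rank_criterion(A, B))        # True

# complementary orthogonal projections in C^4: AB = 0 and the criterion fails
P12 = np.diag([1.0, 1.0, 0.0, 0.0])    # projection onto span{e1, e2}
P34 = np.diag([0.0, 0.0, 1.0, 1.0])    # projection onto span{e3, e4}
print(rol_rank_criterion(P12, P34))    # False
```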

3.3 Applications of Completions of Operator Matrices to Invertibility of Linear Combination of Operators

In this section, for given operators \(A,B\in {\mathscr {B}}({\mathscr {H}})\), we consider the problem of invertibility of the linear combination \(\alpha A + \beta B\), \(\alpha ,\beta \in \mathbb {C}\setminus \lbrace 0 \rbrace \), using the results concerning the invertibility of an upper triangular operator matrix of the form \(M_C\). The motivation behind this section comes from the paper of G. Hai et al. [8], where the invertibility of the linear combination \(\alpha A + \beta B\) was considered in the case when \(A,B\in {\mathscr {B}}({\mathscr {H}})\) are regular operators and \(\alpha ,\beta \in \mathbb {C}\setminus \lbrace 0 \rbrace \), but also from some recently published papers (see [16,17,18,19,20]) which considered the independence of the invertibility of the linear combination \(\alpha A + \beta B\) of the choice of the scalars in the cases when \(A,B\in {\mathscr {B}}({\mathscr {H}})\) are projectors or orthogonal projectors.

Here, we will consider the general case, without the assumptions that \(A,B\in {\mathscr {B}}({\mathscr {H}})\) are closed range operators or that they belong to any particular classes of operators. As corollaries of our main result, we obtain results for certain special classes of operators. Hence, we completely solve the problem of invertibility of the linear combination \(\alpha A + \beta B\) in each of the following cases:

  • if \(A,B\in {\mathscr {B}}({\mathscr {H}})\) are regular operators,

  • if \(A,B\in {\mathscr {B}}({\mathscr {H}})\) are projectors or orthogonal projectors,

  • if \({\mathscr {R}}(A)\cap {\mathscr {R}}(B)=\{0\}\),

  • if \(\overline{{\mathscr {R}}\left( AP_{{\mathscr {N}}(B)}\right) }=\overline{{\mathscr {R}}(A)}\),

  • if either one of \(A,B \in {\mathscr {B}}({\mathscr {H}})\) is injective.

The following well-known lemma will be used throughout this section.

Lemma 3.8

Let \({\mathscr {M}}\) and \({\mathscr {N}}\) be subspaces of a Hilbert space \({\mathscr {H}}\). Then

$$ ({\mathscr {M}}+{\mathscr {N}})^\bot ={\mathscr {M}}^\bot \cap {\mathscr {N}}^\bot . $$
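In finite dimensions Lemma 3.8 can be checked directly by comparing orthogonal projectors. The short sketch below (our own illustration, using SciPy's orth and null_space) represents two subspaces of \(\mathbb {R}^6\) by spanning matrices and verifies that the projector onto \(({\mathscr {M}}+{\mathscr {N}})^\bot \) coincides with the projector onto \({\mathscr {M}}^\bot \cap {\mathscr {N}}^\bot \).

```python
import numpy as np
from scipy.linalg import orth, null_space

rng = np.random.default_rng(1)

def proj(basis):
    # orthogonal projector onto the column space of `basis`
    Q = orth(basis)
    return Q @ Q.T

# two subspaces of R^6 given by spanning matrices
M = rng.standard_normal((6, 2))
N = rng.standard_normal((6, 3))

# projector onto (M + N)^perp  =  I - projector onto col([M N])
P_sum_perp = np.eye(6) - proj(np.hstack([M, N]))

# projector onto M^perp ∩ N^perp = common null space of M^T and N^T
P_cap = proj(null_space(np.vstack([M.T, N.T])))

print(np.allclose(P_sum_perp, P_cap))   # True: (M+N)^perp = M^perp ∩ N^perp
```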

In the following theorem we will reduce the problem of invertibility of the linear combination \(\alpha A + \beta B\) to an equivalent one which concerns the invertibility of a certain upper triangular operator matrix. Of course, instead of the linear combination one could have simply considered the sum \(A+B\) throughout the sequel.

Theorem 3.11

Let \(A,B\in {\mathscr {B}}({\mathscr {H}})\) be given operators and \(\alpha ,\beta \in \mathbb {C}\setminus \lbrace 0 \rbrace \). Then \(\alpha A + \beta B\) is invertible if and only if the following conditions hold:

\(\mathrm (i)\) :

\({\mathscr {N}}(A)\cap {\mathscr {N}}(B) = \lbrace 0 \rbrace \), \({\mathscr {R}}(A)+{\mathscr {R}}(B)={\mathscr {H}}\),

\(\mathrm (ii)\) :

\(A|_{{\mathscr {N}}(B)}\) has a closed range,

\(\mathrm (iii)\) :

\(P_{{\mathscr {S}}, {\mathscr {R}}(A|_{{\mathscr {N}}(B)})} \left( \alpha A + \beta B\right) |_{{\mathscr {T}}}\) is an injective operator with range \({\mathscr {S}}\),

where \(\overline{{\mathscr {R}}(A)}={\mathscr {R}}(A|_{{\mathscr {N}}(B)})\oplus {\mathscr {S}}\), \({\mathscr {T}}=B^{-1}\left( \overline{{\mathscr {R}}(A)}\right) \cap {\mathscr {P}}\) and \({\mathscr {H}}={\mathscr {N}}(B)\oplus {\mathscr {P}}\).

Proof

Let \({\mathscr {H}}={\mathscr {N}}(B)\oplus {\mathscr {P}}=\overline{{\mathscr {R}}(A)} \oplus {\mathscr {Q}}\) be decompositions of the space \({\mathscr {H}}\). With respect to these decompositions the given operators \(A,B\in {\mathscr {B}}({\mathscr {H}})\) have the following representations:

$$\begin{aligned} A=\left[ \begin{array}{cc}A_1 &{} A_2 \\ 0 &{} 0 \end{array}\right] : \left[ \begin{array}{cc}{\mathscr {N}}(B) \\ {\mathscr {P}}\end{array}\right] \rightarrow \left[ \begin{array}{cc}\overline{{\mathscr {R}}(A)} \\ {\mathscr {Q}}\end{array}\right] , \end{aligned}$$
(3.28)
$$\begin{aligned} B=\left[ \begin{array}{cc}0 &{} B_1 \\ 0 &{} B_2 \end{array}\right] : \left[ \begin{array}{cc}{\mathscr {N}}(B) \\ {\mathscr {P}}\end{array}\right] \rightarrow \left[ \begin{array}{cc}\overline{{\mathscr {R}}(A)} \\ {\mathscr {Q}}\end{array}\right] . \end{aligned}$$
(3.29)

Take arbitrary \(\alpha ,\beta \in \mathbb {C}\setminus \lbrace 0 \rbrace \). Using the above decompositions of \(A,B\in {\mathscr {B}}({\mathscr {H}})\), it follows that the linear combination \(\alpha A + \beta B \) is invertible if and only if the operator matrix

$$\begin{aligned} \left[ \begin{array}{cc}\alpha A_1 &{} \alpha A_2+\beta B_1 \\ 0 &{} \beta B_2 \end{array}\right] : \left[ \begin{array}{cc}{\mathscr {N}}(B) \\ {\mathscr {P}}\end{array}\right] \rightarrow \left[ \begin{array}{cc}\overline{{\mathscr {R}}(A)} \\ {\mathscr {Q}}\end{array}\right] \end{aligned}$$
(3.30)

is invertible. Using Theorem 3.2 we have that this holds if and only if the following three conditions are satisfied:

(i):

\( \alpha A_1\) is left invertible

(ii):

\(\beta B_2\) is right invertible

(iii):

\(P_{{\mathscr {S}}, {\mathscr {R}}(A_1)}\left( \alpha A_2 + \beta B_1\right) |_{{\mathscr {N}}(B_2)}\) is an injective operator with range \({\mathscr {S}}\), where \(\overline{{\mathscr {R}}(A)}={\mathscr {R}}(A_1)\oplus {\mathscr {S}}\).

Evidently, \(\mathrm (i)\) holds if and only if \({\mathscr {N}}(A)\cap {\mathscr {N}}(B) = \{ 0\} \) and \({\mathscr {R}}(A|_{{\mathscr {N}}(B)})\) is closed. Also, \(\mathrm (ii)\) is satisfied if and only if \({\mathscr {R}}(P_{{\mathscr {Q}}, \overline{{\mathscr {R}}(A)}} B)={\mathscr {Q}}\). Since

$$ {\mathscr {R}}(P_{{\mathscr {Q}}, \overline{{\mathscr {R}}(A)}} B)={\mathscr {Q}}\Leftrightarrow {\mathscr {Q}}\subseteq \overline{{\mathscr {R}}(A)}+{\mathscr {R}}(B)\Leftrightarrow \overline{{\mathscr {R}}(A)}+{\mathscr {R}}(B)={\mathscr {H}}$$

we have that \(\mathrm (ii)\) is equivalent with \(\overline{{\mathscr {R}}(A)}+{\mathscr {R}}(B)={\mathscr {H}}\).

To discuss the third condition notice that

$$ {\mathscr {N}}(B_2)={\mathscr {N}}\left( P_{{\mathscr {Q}}, \overline{{\mathscr {R}}(A)}} B\right) \cap {\mathscr {P}}=B^{-1}\left( \overline{{\mathscr {R}}(A)}\right) \cap {\mathscr {P}}$$

and let \({\mathscr {T}}=B^{-1}\left( \overline{{\mathscr {R}}(A)}\right) \cap {\mathscr {P}}\). Evidently,

$$ {\mathscr {N}}\left( P_{{\mathscr {S}}, {\mathscr {R}}(A_1)}\left( \alpha A_2 + \beta B_1\right) |_{{\mathscr {T}}}\right) ={\mathscr {N}}\left( P_{{\mathscr {S}}, {\mathscr {R}}(A_1)}\left( \alpha A + \beta B\right) |_{{\mathscr {T}}}\right) $$

and

$$ {\mathscr {R}}\left( P_{{\mathscr {S}}, {\mathscr {R}}(A_1)}\left( \alpha A_2 + \beta B_1\right) |_{{\mathscr {T}}}\right) ={\mathscr {R}}\left( P_{{\mathscr {S}}, {\mathscr {R}}(A_1)}\left( \alpha A + \beta B\right) |_{{\mathscr {T}}}\right) . $$

Hence, we can conclude that \(\alpha A + \beta B\) is invertible if and only if the following conditions hold:

\(\mathrm (i)\) :

\({\mathscr {N}}(A)\cap {\mathscr {N}}(B) = \lbrace 0 \rbrace \), \(\overline{{\mathscr {R}}(A)}+{\mathscr {R}}(B)={\mathscr {H}}\),

\(\mathrm (ii)\) :

\(AP_{{\mathscr {N}}(B)}\) has closed range,

\(\mathrm (iii)\) :

\(P_{{\mathscr {S}}, {\mathscr {R}}(A_1)}\left( \alpha A + \beta B\right) |_{{\mathscr {T}}}\) is an injective operator with range \({\mathscr {S}}\).

Notice that the second condition in \(\mathrm (i)\), \(\overline{{\mathscr {R}}(A)}+{\mathscr {R}}(B)={\mathscr {H}}\), can be replaced by \({\mathscr {R}}(A)+{\mathscr {R}}(B)={\mathscr {H}}\): Suppose that \(\mathrm (i)-(iii)\) are satisfied. Since

$${\mathscr {S}}={\mathscr {R}}(P_{{\mathscr {S}}, {\mathscr {R}}(A_1)}\left( \alpha A + \beta B\right) |_{{\mathscr {T}}}),$$

we have that

$$ {\mathscr {S}}\subseteq {\mathscr {R}}\left( \left( \alpha A + \beta B\right) |_{{\mathscr {T}}}\right) +{\mathscr {R}}(A_1)\subseteq {\mathscr {R}}(A)+{\mathscr {R}}(B). $$

Now, \(\overline{{\mathscr {R}}(A)}={\mathscr {R}}(A_1)\oplus {\mathscr {S}}\) implies that \(\overline{{\mathscr {R}}(A)}\subseteq {\mathscr {R}}(A)+{\mathscr {R}}(B)\). Hence, \({\mathscr {R}}(A)+{\mathscr {R}}(B)={\mathscr {H}}\). (Also, directly from the invertibility of \(\alpha A + \beta B\), we can conclude that \({\mathscr {R}}(A)+{\mathscr {R}}(B)={\mathscr {H}}\)). \(\Box \)

In the special case, when \({\mathscr {S}}\) is the orthogonal complement of \({\mathscr {R}}(A|_{{\mathscr {N}}(B)})={\mathscr {R}}(AP_{{\mathscr {N}}(B)})\) in \(\overline{{\mathscr {R}}(A)}\) and \({\mathscr {P}}={\mathscr {N}}(B)^\bot \), applying Theorem 3.11 we get the following result:

Theorem 3.12

Let \(A,B\in {\mathscr {B}}({\mathscr {H}})\) be given operators and \(\alpha ,\beta \in \mathbb {C}\setminus \lbrace 0 \rbrace \). Then \(\alpha A + \beta B\) is invertible if and only if the following conditions hold:

\(\mathrm (i)\) :

\({\mathscr {N}}(A)\cap {\mathscr {N}}(B) = \lbrace 0 \rbrace \), \({\mathscr {R}}(A)+{\mathscr {R}}(B)={\mathscr {H}}\),

\(\mathrm (ii)\) :

\(A|_{{\mathscr {N}}(B)}\) has a closed range,

\(\mathrm (iii)\) :

\(P_{{\mathscr {S}}}\left( \alpha A + \beta B\right) |_{{\mathscr {T}}}\) is an injective operator with range \({\mathscr {S}}\),

where \({\mathscr {S}}={\mathscr {R}}\left( AP_{{\mathscr {N}}(B)}\right) ^\bot \cap \overline{{\mathscr {R}}(A)}\) and \({\mathscr {T}}={\mathscr {N}}\left( P_{{\mathscr {R}}(A)^\bot } B\right) \cap {\mathscr {N}}(B) ^\bot \).

From the theorem above it is evident that the linear combination \(\alpha A + \beta B \) can be invertible for some constants \(\alpha ,\beta \in \mathbb {C}\setminus \lbrace 0 \rbrace \) only if

$$ \mathrm{dim}{ {\mathscr {N}}\left( P_{{\mathscr {R}}(A)^\bot } B\right) \cap {\mathscr {N}}(B) ^\bot }=\mathrm{dim}{{\mathscr {R}}\left( AP_{{\mathscr {N}}(B)}\right) ^\bot \cap \overline{{\mathscr {R}}(A)}}, $$

so we get the following result:

Corollary 3.3

Let \(A,B\in {\mathscr {B}}({\mathscr {H}})\) be given operators. If

$$ \mathrm{dim}{ {\mathscr {N}}\left( P_{{\mathscr {R}}(A)^\bot } B\right) \cap {\mathscr {N}}(B) ^\bot }\ne \mathrm{dim}{{\mathscr {R}}\left( AP_{{\mathscr {N}}(B)}\right) ^\bot \cap \overline{{\mathscr {R}}(A)}}, $$

then the linear combination \(\alpha A + \beta B\) is not invertible for any \(\alpha ,\beta \in \mathbb {C}\setminus \lbrace 0 \rbrace \).
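The dimension condition of Corollary 3.3 is easy to test for matrices. In the sketch below (an illustration under our own naming; dim_cap computes the dimension of an intersection of column spaces) we take the \(2\times 2\) matrices \(A=\mathrm{diag}(1,0)\) and B with the single nonzero entry in position (1, 2): the two dimensions are 1 and 0, so by the corollary no linear combination \(\alpha A+\beta B\) is invertible, which the determinant check confirms.

```python
import numpy as np
from scipy.linalg import orth, null_space

def dim_cap(U, V):
    # dim(col(U) ∩ col(V)) = dim col(U) + dim col(V) - dim(col(U) + col(V))
    return (np.linalg.matrix_rank(U) + np.linalg.matrix_rank(V)
            - np.linalg.matrix_rank(np.hstack([U, V])))

A = np.array([[1.0, 0.0], [0.0, 0.0]])
B = np.array([[0.0, 1.0], [0.0, 0.0]])

P_RA_perp = np.eye(2) - A @ np.linalg.pinv(A)     # orthogonal projector onto R(A)^perp
P_NB      = np.eye(2) - np.linalg.pinv(B) @ B     # orthogonal projector onto N(B)

# dim( N(P_{R(A)^perp} B) ∩ N(B)^perp )  vs  dim( R(A P_{N(B)})^perp ∩ R(A) )
left  = dim_cap(null_space(P_RA_perp @ B), orth(B.T))   # N(B)^perp = R(B^*)
right = dim_cap(null_space((A @ P_NB).T), orth(A))      # R(M)^perp = N(M^*)
print(left, right)                                      # 1 0: the dimensions differ

# hence, by Corollary 3.3, alpha*A + beta*B is singular for every alpha, beta != 0
rng = np.random.default_rng(2)
for _ in range(5):
    a, b = rng.standard_normal(2) + 1j * rng.standard_normal(2)
    print(np.abs(np.linalg.det(a * A + b * B)))          # ~0 every time
```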

Now we will reconsider the condition \(\mathrm {(iii)}\) from Theorem 3.12, which says that \({\mathscr {R}}\left( P_{{\mathscr {S}}}\left( \alpha A + \beta B\right) |_{{\mathscr {T}}}\right) ={\mathscr {S}}\) and \({\mathscr {N}}\left( P_{{\mathscr {S}}}\left( \alpha A + \beta B\right) |_{{\mathscr {T}}}\right) =\{0\}\). Suppose that \(A,B\in {\mathscr {B}}({\mathscr {H}})\) are given by (3.28) and (3.29), respectively, where \({\mathscr {S}}\) is the orthogonal complement of \({\mathscr {R}}(A|_{{\mathscr {N}}(B)})={\mathscr {R}}(AP_{{\mathscr {N}}(B)})\) in \(\overline{{\mathscr {R}}(A)}\), \({\mathscr {T}}={\mathscr {N}}\left( P_{{\mathscr {R}}(A)^\bot } B\right) \cap {\mathscr {N}}(B) ^\bot \) and \({\mathscr {P}}={\mathscr {N}}(B)^\bot \). The first condition is equivalent with

$$\begin{aligned}&\overline{{\mathscr {R}}(A)}=\overline{{\mathscr {R}}(AP_{{\mathscr {N}}(B)})}+\overline{{\mathscr {R}}(A)}\cap {\mathscr {R}}\left( \left( \alpha A + \beta B\right) P_{{\mathscr {N}}(B)^\bot }\right) , \end{aligned}$$
(3.31)

since \({\mathscr {R}}\left( \left( \alpha A + \beta B\right) |_{{\mathscr {T}}}\right) =\overline{{\mathscr {R}}(A)}\cap {\mathscr {R}}\left( \left( \alpha A + \beta B\right) P_{{\mathscr {N}}(B)^\bot }\right) \). The second condition from \(\mathrm (iii)\), \({\mathscr {N}}\left( P_{{\mathscr {S}}}\left( \alpha A + \beta B\right) |_{{\mathscr {T}}}\right) =\{0\}\) is equivalent with

$$\begin{aligned} {\mathscr {N}}\left( \alpha A_2 + \beta B_1\right) \cap {\mathscr {N}}(B_2)=\{0\},\nonumber \\ {\mathscr {R}}\left( \left( \alpha A_2 + \beta B_1\right) |_{{\mathscr {T}}}\right) \cap \overline{{\mathscr {R}}(AP_{{\mathscr {N}}(B)})}=\{0\}. \end{aligned}$$
(3.32)

Evidently the first condition from (3.32) is equivalent with

$$ {\mathscr {N}}\left( \alpha A + \beta B\right) \cap {\mathscr {N}}(B)^\bot = \lbrace 0 \rbrace $$

while the second one is equivalent with

$$ {\mathscr {R}}\left( \left( \alpha A + \beta B\right) P_{{\mathscr {N}}(B)^\bot }\right) \cap \overline{{\mathscr {R}}(AP_{{\mathscr {N}}(B)})}=\{0\} . $$

Now, in view of the previous two conditions and (3.31), we can conclude that the condition \((\mathrm iii)\) from Theorem 3.12 is equivalent with

$$ \overline{{\mathscr {R}}(A)}=\overline{{\mathscr {R}}(AP_{{\mathscr {N}}(B)})}\oplus \overline{{\mathscr {R}}(A)}\cap {\mathscr {R}}\left( \left( \alpha A + \beta B\right) P_{{\mathscr {N}}(B)^\bot }\right) $$

and

$$ {\mathscr {N}}\left( \alpha A + \beta B\right) \cap {\mathscr {N}}(B)^\bot = \lbrace 0 \rbrace $$

and we can formulate the following result:

Theorem 3.13

Let \(A,B\in {\mathscr {B}}({\mathscr {H}})\) be given operators and \(\alpha ,\beta \in \mathbb {C}\setminus \lbrace 0 \rbrace \). Then the operator \(\alpha A + \beta B\) is invertible if and only if the following conditions hold:

\(\mathrm (i)\) :

\({\mathscr {N}}(A)\cap {\mathscr {N}}(B) = \lbrace 0 \rbrace \), \({\mathscr {R}}(A)+{\mathscr {R}}(B)={\mathscr {H}}\),

\(\mathrm (ii)\) :

\(AP_{{\mathscr {N}}(B)}\) has closed range,

\(\mathrm (iii)\) :

\(\overline{{\mathscr {R}}(A)}={\mathscr {R}}(AP_{{\mathscr {N}}(B)})\oplus \overline{{\mathscr {R}}(A)}\cap {\mathscr {R}}\left( \left( \alpha A + \beta B\right) P_{{\mathscr {N}}(B)^\bot }\right) \), \({\mathscr {N}}\left( \alpha A + \beta B\right) \cap {\mathscr {N}}(B)^\bot = \lbrace 0 \rbrace .\)

In Theorems 3.11 and 3.12, the problem of invertibility of a linear combination of two given operators is reduced to one in which yet another linear combination is required to be injective and to have a prescribed range, which at first glance might not strike the reader as much of an achievement. However, the conditions we have obtained (those given in Theorems 3.11 and 3.12) lend themselves to applications in further analysis of the initial problem for many special classes of operators, where they will lead to its complete solution.

Since the condition that \(\alpha A + \beta B\) be nonsingular is symmetrical in A and B, we can obtain new variants of the necessary and sufficient conditions in Theorems 3.11, 3.12 and 3.13 by interchanging the operators A and B in them.

We now focus our attention on the invertibility of linear combinations for some special classes of operators, using the above-mentioned results:

\( (\mathbf{1})\) The problem of invertibility of \(\alpha A + \beta B\) in the case when \(A,B\in {\mathscr {B}}({\mathscr {H}})\) are regular operators and \(\alpha ,\beta \in \mathbb {C}\setminus \lbrace 0 \rbrace \) was considered in [9].

Theorem 3.14

([9]) Let \(A,B\in {\mathscr {B}}({\mathscr {H}})\) be given operators with closed ranges and \(\alpha ,\beta \in \mathbb {C}\setminus \lbrace 0 \rbrace \). The operator \(\alpha A + \beta B\) is invertible if and only if the following conditions hold:

\(\mathrm (i')\) :

\({\mathscr {N}}(A)\cap {\mathscr {N}}(B) = \lbrace 0 \rbrace \), \({\mathscr {R}}(A)^\bot \cap {\mathscr {R}}(B)^\bot =\{0\}\),

\(\mathrm (ii')\) :

Both \(A^\dagger A(I-B^\dagger B)\) and \((I-AA^\dagger )BB^\dagger \) are closed range operators,

\(\mathrm (iii')\) :

\(P'_{{\mathscr {L}}}\left( \alpha AB^\dagger B+ \beta AA^\dagger B\right) |_{{\mathscr {M}}}\) is invertible,

where \({\mathscr {L}}=(A^*)^\dagger \left( {\mathscr {R}}(A^*)\cap {\mathscr {R}}(B^*)\right) \), \({\mathscr {M}}=B^\dagger \left( {\mathscr {R}}(A)\cap {\mathscr {R}}(B)\right) \) and \(P'_{{\mathscr {L}}}\in {\mathscr {B}}({\mathscr {H}},{\mathscr {L}})\) is defined by \(P'_{{\mathscr {L}}}x=P_{{\mathscr {L}}}x\), \(x\in {\mathscr {H}}\).

As a corollary of Theorem 3.12 we get conditions for the invertibility of \(\alpha A + \beta B\) different from the ones given in [9]. We first give the following lemma.

Lemma 3.9

Let \(A,B\in {\mathscr {B}}({\mathscr {H}})\) be given operators. If A and B have closed ranges then

\(\mathrm (i)\) :

\({\mathscr {N}}\left( P_{{\mathscr {R}}(A)^\bot } B\right) \cap {\mathscr {N}}(B) ^\bot =B^{-1}\left( {\mathscr {R}}(A)\right) \cap {\mathscr {N}}(B)^\perp =B^\dagger \left( {\mathscr {R}}(A)\cap {\mathscr {R}}(B)\right) \)

\(\mathrm (ii)\) :

\({\mathscr {R}}(A)\cap {\mathscr {R}}\left( AP_{{\mathscr {N}}(B)}\right) ^\bot =(A^*)^\dagger \left( {\mathscr {R}}(A^*)\cap {\mathscr {R}}(B^*)\right) \)

Proof

\(\mathrm (i)\) The first equality is evident. Let \(x\in B^{-1}\left( {\mathscr {R}}(A)\right) \cap {\mathscr {N}}(B)^\perp \). Then \(Bx\in {\mathscr {R}}(A)\) and \(x=B^\dagger Bx\). So, \(x\in B^\dagger \left( {\mathscr {R}}(A)\cap {\mathscr {R}}(B)\right) \). Now, suppose that \(x\in B^\dagger \left( {\mathscr {R}}(A)\cap {\mathscr {R}}(B)\right) \). Then for some \(s,t\in {\mathscr {H}}\) we have that \(x=B^\dagger Bt=B^\dagger As\) and \(Bt=As\). Evidently, \(x\in {\mathscr {R}}(B^*)={\mathscr {N}}(B)^\perp \) and \(Bx=Bt=As\in {\mathscr {R}}(A)\).

\(\mathrm (ii)\) Let \(y\in {\mathscr {R}}(A)\cap {\mathscr {R}}\left( AP_{{\mathscr {N}}(B)}\right) ^\bot \). Then \(y=AA^\dagger y\) and \(A^*y=B^\dagger BA^*y\). Hence, \(A^*y=B^\dagger BA^*y\in {\mathscr {R}}(A^*)\cap {\mathscr {R}}(B^*)\). Now

$$ y=(A^\dagger )^*A^*y=(A^\dagger )^*B^\dagger BA^*y. $$

Now, suppose that \(y\in (A^*)^\dagger \left( {\mathscr {R}}(A^*)\cap {\mathscr {R}}(B^*)\right) \). Then for some \(s,t\in {\mathscr {H}}\) we have that \(y=(A^\dagger )^*A^*t=(A^\dagger )^*B^*s\) and \(A^*t=B^*s\). Evidently, \(y\in {\mathscr {R}}(A)\), which implies \(y=AA^\dagger y\). Now, we will prove that \(y\in {\mathscr {R}}\left( AP_{{\mathscr {N}}(B)}\right) ^\bot ={\mathscr {N}}(P_{{\mathscr {N}}(B)}A^*)\):

$$\begin{aligned} B^\dagger BA^*y&=B^\dagger BA^*(A^\dagger )^*B^*s=B^\dagger BA^\dagger AB^*s\\&=B^\dagger BA^\dagger A A^*t=B^\dagger B B^*s=B^*s\\&=A^*t=A^*y. \end{aligned}$$

\(\Box \)
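Both identities of Lemma 3.9 can be verified numerically for matrices, where every range is closed. The sketch below (our own finite-dimensional illustration; cap returns a basis of the intersection of two column spaces and proj the corresponding orthogonal projector) compares the two sides of \(\mathrm (i)\) and \(\mathrm (ii)\) for random rank-deficient real \(6\times 6\) matrices.

```python
import numpy as np
from scipy.linalg import orth, null_space

rng = np.random.default_rng(3)

def proj(U):
    Q = orth(U)
    return Q @ Q.T

def cap(U, V):
    # basis of col(U) ∩ col(V): x = U a = V b  <=>  [U  -V] [a; b] = 0
    ns = null_space(np.hstack([U, -V]))
    W = U @ ns[:U.shape[1], :]
    return orth(W) if W.size else W

# random real matrices with (automatically closed) ranges of rank 4 and rank 3 in R^6
A = rng.standard_normal((6, 4)) @ rng.standard_normal((4, 6))
B = rng.standard_normal((6, 3)) @ rng.standard_normal((3, 6))
Ap, Bp = np.linalg.pinv(A), np.linalg.pinv(B)
I = np.eye(6)

# (i)  N(P_{R(A)^perp} B) ∩ N(B)^perp  =  B^dagger ( R(A) ∩ R(B) )
lhs_i = cap(null_space((I - A @ Ap) @ B), orth(B.T))
rhs_i = Bp @ cap(A, B)
print(np.allclose(proj(lhs_i), proj(rhs_i)))    # True

# (ii) R(A) ∩ R(A P_{N(B)})^perp  =  (A^*)^dagger ( R(A^*) ∩ R(B^*) )
APNB = A @ (I - Bp @ B)
lhs_ii = cap(orth(A), null_space(APNB.T))
rhs_ii = Ap.T @ cap(A.T, B.T)
print(np.allclose(proj(lhs_ii), proj(rhs_ii)))  # True
```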

Now, in the case when \(A,B\in {\mathscr {B}}({\mathscr {H}})\) are closed range operators, from Theorem 3.12 we get the following:

Theorem 3.15

Let \(A,B\in {\mathscr {B}}({\mathscr {H}})\) be given closed range operators and \(\alpha ,\beta \in \mathbb {C}\setminus \lbrace 0 \rbrace \). Then \(\alpha A + \beta B\) is invertible if and only if the following conditions hold:

\(\mathrm (i)\) :

\({\mathscr {N}}(A)\cap {\mathscr {N}}(B) = \lbrace 0 \rbrace \), \({\mathscr {R}}(A)+{\mathscr {R}}(B)={\mathscr {H}}\),

\(\mathrm (ii)\) :

\(A|_{{\mathscr {N}}(B)}\) has a closed range,

\(\mathrm (iii)\) :

\(P_{{\mathscr {S}}, {\mathscr {R}}(A|_{{\mathscr {N}}(B)})} \left( \alpha A + \beta B\right) |_{{\mathscr {T}}}\) is an injective operator with range \({\mathscr {S}}\),

where \({\mathscr {S}}=(A^*)^\dagger \left( {\mathscr {R}}(A^*)\cap {\mathscr {R}}(B^*)\right) \) and \({\mathscr {T}}=B^\dagger \left( {\mathscr {R}}(A)\cap {\mathscr {R}}(B)\right) \).

\((\mathbf{2})\) The problem of invertibility of projections (idempotents) has been considered in several papers. Coming from that line of research we can single out the result that the invertibility of any linear combination \(\alpha P + \beta Q\), where \(\alpha ,\beta \in \mathbb {C}\setminus \lbrace 0 \rbrace \), \(\alpha +\beta \ne 0\), is in fact equivalent to the invertibility of \(P+Q\), which means that it is independent of the choice of the scalars \(\alpha \) and \(\beta \). This was first realized by J.K. Baksalary et al. [16] in the finite-dimensional case, who proved that

$$ \alpha P + \beta Q\ {\text {is nonsingular}}\ \Leftrightarrow {\mathscr {R}}(P(I-Q))\cap {\mathscr {R}}(Q(I-P))={\mathscr {N}}(P)\cap {\mathscr {N}}(Q)=\{0\} $$

and later generalized by Du et al. [21] to the case of idempotent operators on a Hilbert space and finally by Koliha et al. [19] to the Banach algebra case, without giving any necessary and sufficient conditions for the invertibility of \(\alpha P + \beta Q\). The necessary and sufficient conditions for the invertibility of a linear combination of projections P and Q on a Hilbert space were given later in another paper by Koliha et al. [17] (as well as for the elements of a unital ring):

Theorem 3.16

([17]) Let \(P,Q\in {\mathscr {B}}({\mathscr {H}})\) be projections on a Hilbert space \({\mathscr {H}}\). Then the following conditions are equivalent:

\(\mathrm (i)\) :

\(P+Q\) is invertible.

\(\mathrm (ii)\) :

The range of \(P+Q\) is closed and

$$ {\mathscr {R}}(P)\cap {\mathscr {R}}(Q(I-P))={\mathscr {N}}(P)\cap {\mathscr {N}}(Q)=\{0\}, $$
$$ {\mathscr {R}}(P^*)\cap {\mathscr {R}}(Q^*(I-P^*))={\mathscr {N}}(P^*)\cap {\mathscr {N}}(Q^*)=\{0\}. $$

In the case when \(P,Q\in {\mathscr {B}}({\mathscr {H}})\) are projections, applying Theorem 3.11 to the decompositions \({\mathscr {H}}={\mathscr {N}}(Q)\oplus {\mathscr {R}}(Q)={\mathscr {R}}(P) \oplus {\mathscr {N}}(P)\) we get the main result from [21], which says that the invertibility of the linear combination \(\alpha P + \beta Q\) is independent of the choice of the scalars \(\alpha ,\beta \in \mathbb {C}\setminus \lbrace 0 \rbrace \) with \(\alpha +\beta \ne 0\), but additionally we also obtain necessary and sufficient conditions for the invertibility of the linear combination \(\alpha P + \beta Q\) which are different from those given in Theorem 3.16.

Theorem 3.17

Let \(P,Q\in {\mathscr {B}}({\mathscr {H}})\) be given projections and \(\alpha ,\beta \in \mathbb {C}\setminus \lbrace 0 \rbrace \), \(\alpha + \beta \ne 0\). Then \(\alpha P + \beta Q\) is an invertible operator if and only if the following conditions hold:

\(\mathrm (i)\) :

\({\mathscr {N}}(P)\cap {\mathscr {N}}(Q) = \lbrace 0 \rbrace \), \({\mathscr {R}}(P)+{\mathscr {R}}(Q)={\mathscr {H}}\),

\(\mathrm (ii)\) :

\({\mathscr {R}}(P)={\mathscr {R}}(P)\cap {\mathscr {R}}(Q)\oplus {\mathscr {R}}(P|_{{\mathscr {N}}(Q)})\).

Proof

Indeed, in this case the subspace \({\mathscr {T}}\) defined in Theorem 3.11 by \({\mathscr {T}}=Q^{-1}\left( {\mathscr {R}}(P)\right) \cap {\mathscr {R}}(Q)\) is equal to \({\mathscr {T}}={\mathscr {R}}(P)\cap {\mathscr {R}}(Q)\). Hence, for any \(x\in {\mathscr {T}}\), we have that \((\alpha P + \beta Q)x=(\alpha + \beta )x\), which implies that the injectivity of the operator \(P_{{\mathscr {S}}, {\mathscr {R}}(P|_{{\mathscr {N}}(Q)})} \left( \alpha P + \beta Q\right) |_{{\mathscr {T}}}\) is equivalent with \({\mathscr {R}}(P|_{{\mathscr {N}}(Q)})\cap {\mathscr {T}}=\{0\}\), i.e.,

$$\begin{aligned} {\mathscr {R}}(P|_{{\mathscr {N}}(Q)})\cap {\mathscr {R}}(Q)=\{0\}. \end{aligned}$$
(3.33)

Also, the operator \(P_{{\mathscr {S}}, {\mathscr {R}}(P|_{{\mathscr {N}}(Q)})} \left( \alpha P + \beta Q\right) |_{{\mathscr {T}}}\) has range \({\mathscr {S}}\) if and only if \({\mathscr {S}}\subseteq {\mathscr {T}}+{\mathscr {R}}(P|_{{\mathscr {N}}(Q)})\), which is equivalent with \({\mathscr {R}}(P)={\mathscr {R}}(P)\cap {\mathscr {R}}(Q)+{\mathscr {R}}(P|_{{\mathscr {N}}(Q)})\). Now, by (3.33), we have that

$$\begin{aligned} {\mathscr {R}}(P)={\mathscr {R}}(P)\cap {\mathscr {R}}(Q)\oplus {\mathscr {R}}(P|_{{\mathscr {N}}(Q)}). \end{aligned}$$
(3.34)

Using (3.34), the fact that the intersection of two operator ranges is an operator range, and Theorem 2.3 [22], we conclude that \({\mathscr {R}}(P|_{{\mathscr {N}}(Q)})\) is closed. The proof now follows from Theorem 3.11. \(\Box \)

Obviously, from Theorem 3.17 we get the following corollary:

Corollary 3.4

Let \(P,Q\in {\mathscr {B}}({\mathscr {H}})\) be given projections and \(\alpha ,\beta \in \mathbb {C}\setminus \lbrace 0 \rbrace \), \(\alpha + \beta \ne 0\). Then the invertibility of the linear combination \(\alpha P + \beta Q\) is independent of the choice of the scalars \(\alpha ,\beta \in \mathbb {C}\setminus \lbrace 0 \rbrace \), \(\alpha + \beta \ne 0\).
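Corollary 3.4 is easy to observe experimentally for matrices. The sketch below (illustrative only; random_idempotent builds an oblique projection with prescribed range and kernel) generates two random idempotents in \(\mathbb {C}^{5\times 5}\) and checks that the invertibility of \(\alpha P+\beta Q\) agrees with that of \(P+Q\) for several choices of scalars with \(\alpha +\beta \ne 0\).

```python
import numpy as np

rng = np.random.default_rng(4)

def random_idempotent(n, k):
    # oblique projection with range = col(U) and kernel = N(V)
    U = rng.standard_normal((n, k))
    V = rng.standard_normal((k, n))
    return U @ np.linalg.solve(V @ U, V)

def invertible(M, tol=1e-8):
    return np.linalg.matrix_rank(M, tol) == M.shape[0]

n = 5
P = random_idempotent(n, 3)
Q = random_idempotent(n, 2)
print(np.allclose(P @ P, P), np.allclose(Q @ Q, Q))    # both idempotent

base = invertible(P + Q)
for a, b in [(1.0, 1.0), (2.0, -3.0), (0.5, 4.0), (1 + 2j, 3 - 1j)]:
    assert a + b != 0
    # invertibility agrees with that of P + Q, as Corollary 3.4 predicts
    print((a, b), invertible(a * P + b * Q) == base)   # True for every pair
```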

\((\mathbf{3})\) The problem of invertibility of the linear combination \(\alpha P + \beta Q\) when P and Q are orthogonal projections has also received a lot of attention. In [20] Buckholtz considers the case of the difference \(P-Q\) (i.e., \(\alpha =1\), \(\beta =-1\)) and gives conditions under which the difference of projections on a Hilbert space is invertible, as well as an explicit formula for its inverse. In the paper of Koliha et al. [17], the invertibility of the sum of two orthogonal projections was considered, which is, as we already know, equivalent with the invertibility of the linear combination \(\alpha P + \beta Q\) whenever \(\alpha +\beta \ne 0\):

Theorem 3.18

([17]) Let \(P,Q\in {\mathscr {B}}({\mathscr {H}})\) be orthogonal projections on a Hilbert space \({\mathscr {H}}\). Then the following conditions are equivalent:

\(\mathrm (i)\) :

\(P+Q\) is invertible,

\(\mathrm (ii)\) :

The range of \(P+Q\) is closed and

$$ \ {\mathscr {R}}(P)\cap {\mathscr {R}}(Q(I-P))={\mathscr {N}}(P)\cap {\mathscr {N}}(Q)=\{0\} $$

Here, using Theorem 3.12 we obtain the following result:

Theorem 3.19

Let \(P,Q\in {\mathscr {B}}({\mathscr {H}})\) be given orthogonal projections and \(\alpha ,\beta \in \mathbb {C}\setminus \lbrace 0 \rbrace \), \(\alpha + \beta \ne 0\). Then \(\alpha P + \beta Q\) is an invertible operator if and only if \({\mathscr {R}}(P)+{\mathscr {R}}(Q)={\mathscr {H}}\).

Proof

Notice that in the case when \(P,Q\in {\mathscr {B}}({\mathscr {H}})\) are orthogonal projections, the subspaces \({\mathscr {S}}\) and \({\mathscr {T}}\) defined in Theorem 3.12 by \({\mathscr {S}}={\mathscr {R}}\left( PP_{{\mathscr {N}}(Q)}\right) ^\bot \cap {\mathscr {R}}(P)\) and \({\mathscr {T}}={\mathscr {N}}\left( P_{{\mathscr {R}}(P)^\bot } Q\right) \cap {\mathscr {N}}(Q) ^\bot \) coincide and \({\mathscr {S}}={\mathscr {T}}={\mathscr {R}}(P)\cap {\mathscr {R}}(Q)\). Indeed, if \(P,Q\in {\mathscr {B}}({\mathscr {H}})\) are orthogonal projections, then

$$\begin{aligned} {\mathscr {S}}&={\mathscr {R}}\left( PP_{{\mathscr {N}}(Q)}\right) ^\bot \cap {\mathscr {R}}(P) ={\mathscr {R}}(P(I-Q))^\bot \cap {\mathscr {R}}(P)\\&={\mathscr {N}}((I-Q)P)\cap {\mathscr {R}}(P)={\mathscr {R}}(P)\cap {\mathscr {R}}(Q) \end{aligned}$$

and

$$\begin{aligned} {\mathscr {T}}&={\mathscr {N}}\left( P_{{\mathscr {R}}(P)^\bot } Q\right) \cap {\mathscr {N}}(Q) ^\bot ={\mathscr {N}}((I-P)Q)\cap {\mathscr {R}}(Q)\\&={\mathscr {R}}(P)\cap {\mathscr {R}}(Q). \end{aligned}$$

Hence, for any \(x\in {\mathscr {T}}\), we have that \((\alpha P + \beta Q)x=(\alpha + \beta )x\) and \(P_{{\mathscr {S}}}(\alpha P + \beta Q)x=(\alpha + \beta )x\). So, the operator \(P_{{\mathscr {S}}}\left( \alpha P + \beta Q\right) |_{{\mathscr {T}}}\) from item \(\mathrm (iii)\) of Theorem 3.12 is an injective operator with range \({\mathscr {S}}\) if and only if \(\alpha + \beta \ne 0\). Also, the condition \({\mathscr {R}}(P)+{\mathscr {R}}(Q)={\mathscr {H}}\) implies \({\mathscr {N}}(P)\cap {\mathscr {N}}(Q) = \lbrace 0 \rbrace \). Now, from Theorem 3.12 we can conclude that in the case when \(P,Q\in {\mathscr {B}}({\mathscr {H}})\) are orthogonal projections, \(\alpha P + \beta Q\) is an invertible operator if and only if the following conditions hold:

\(\mathrm (i)\) :

\({\mathscr {R}}(P)+{\mathscr {R}}(Q)={\mathscr {H}}\),

\(\mathrm (ii)\) :

\(P|_{{\mathscr {N}}(Q)}\) has closed range.

Notice that the condition \(\mathrm (ii)\) that \(P|_{{\mathscr {N}}(Q)}\) has closed range can be replaced by the condition that \({\mathscr {R}}(P(I-Q))\) is closed. By Proposition 2.4 [23], we have that \({\mathscr {R}}(P(I-Q))\) is closed if and only if \({\mathscr {R}}(P+Q)\) is closed, which is by Corollary 3 [22] equivalent with the fact that \({\mathscr {R}}(P)+{\mathscr {R}}(Q)\) is closed. Since the condition \(\mathrm (i)\) guarantees closedness of \({\mathscr {R}}(P)+{\mathscr {R}}(Q)\), we conclude that condition \(\mathrm (i)\) is necessary and sufficient for the invertibility of \(\alpha P + \beta Q\). \(\Box \).
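For matrices, the criterion of Theorem 3.19 reduces to the rank condition \(\mathrm{rank}\left[ P\ \ Q\right] =\mathrm{dim}{\mathscr {H}}\). The following sketch (our own illustration) builds two orthogonal projections whose ranges span \(\mathbb {R}^5\), where every \(\alpha P+\beta Q\) with \(\alpha +\beta \ne 0\) is invertible, and two whose ranges lie in a common hyperplane, where no linear combination is invertible.

```python
import numpy as np
from scipy.linalg import orth

rng = np.random.default_rng(5)

def orth_proj(U):
    Q = orth(U)
    return Q @ Q.T

def invertible(M, tol=1e-8):
    return np.linalg.matrix_rank(M, tol) == M.shape[0]

n = 5
P = orth_proj(rng.standard_normal((n, 3)))
Q = orth_proj(rng.standard_normal((n, 2)))

# generically R(P) + R(Q) = H, so every alpha*P + beta*Q with alpha + beta != 0 is invertible
print(np.linalg.matrix_rank(np.hstack([P, Q])) == n)                              # True
print(all(invertible(a * P + b * Q) for (a, b) in [(1, 1), (2, -3), (0.3, 5)]))   # True

# ranges contained in the hyperplane e5^perp: R(P) + R(Q) != H, never invertible
U = np.vstack([rng.standard_normal((n - 1, 3)), np.zeros((1, 3))])
V = np.vstack([rng.standard_normal((n - 1, 2)), np.zeros((1, 2))])
P2, Q2 = orth_proj(U), orth_proj(V)
print(any(invertible(a * P2 + b * Q2) for (a, b) in [(1, 1), (2, -3), (0.3, 5)])) # False
```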

If we compare Theorem 3.18 from [17] and our Theorem 3.19, it is evident that the condition \({\mathscr {R}}(P)\cap {\mathscr {R}}(Q(I-P))=\{0\}\) is superfluous. In the following lemma we will give an explanation for that:

Lemma 3.10

Let \(P,Q\in {\mathscr {B}}({\mathscr {H}})\) be orthogonal projections on a Hilbert space \({\mathscr {H}}\). Then

$$ {\mathscr {R}}(P)\cap {\mathscr {R}}(Q(I-P))=\{0\}. $$

Proof

First, let us observe that

$$ {\mathscr {R}}(P)\cap {\mathscr {R}}(Q(I-P))={\mathscr {R}}(P)\cap {\mathscr {R}}(Q(I-P))\cap {\mathscr {R}}(Q). $$

So, it is sufficient to prove that \({\mathscr {R}}(P)\cap {\mathscr {R}}(Q(I-P))\cap {\mathscr {R}}(Q)=\{0\}\). It can be easily checked that

$$ {\mathscr {R}}(Q(I-P))^\bot \cap {\mathscr {R}}(Q)={\mathscr {N}}((I-P)Q)\cap {\mathscr {R}}(Q)={\mathscr {R}}(P)\cap {\mathscr {R}}(Q), $$

implying that \({\mathscr {R}}(P)\cap {\mathscr {R}}(Q)\subseteq {\mathscr {R}}(Q(I-P))^\bot \), i.e., \({\mathscr {R}}(P)\cap {\mathscr {R}}(Q)\cap {\mathscr {R}}(Q(I-P))=\{0\}\). \(\Box \)

\((\mathbf{4})\) Now we will consider the invertibility of the linear combination \(\alpha A + \beta B\) for given operators \(A,B\in {\mathscr {B}}({\mathscr {H}})\) in two special cases: when \({\mathscr {R}}(A)\cap {\mathscr {R}}(B)=\{0\}\) and when \(\overline{{\mathscr {R}}\left( AP_{{\mathscr {N}}(B)}\right) }=\overline{{\mathscr {R}}(A)}\). In both cases, besides giving necessary and sufficient conditions for the invertibility of \(\alpha A + \beta B\), we will conclude that the invertibility of the linear combination \(\alpha A + \beta B\) is independent of the choice of the scalars \(\alpha ,\beta \in \mathbb {C}\setminus \lbrace 0\rbrace \).

In the special case when \(A,B\in {\mathscr {B}}({\mathscr {H}})\) are such that \({\mathscr {R}}(A)\cap {\mathscr {R}}(B)=\{0\}\) using Theorem 3.12 we get the following:

Theorem 3.20

Let \(A,B\in {\mathscr {B}}({\mathscr {H}})\) be given operators and \(\alpha ,\beta \in \mathbb {C}\setminus \lbrace 0 \rbrace \). If \({\mathscr {R}}(A)\cap {\mathscr {R}}(B)=\{0\}\), then the operator \(\alpha A + \beta B\) is invertible if and only if

$$\begin{aligned} {\mathscr {R}}(A)\oplus {\mathscr {R}}(B)={\mathscr {H}},\ {\mathscr {N}}(A)\oplus {\mathscr {N}}(B)={\mathscr {H}}. \end{aligned}$$
(3.35)

Proof

Suppose that \(\alpha A + \beta B\) is invertible. By Theorem 3.12, we have that \({\mathscr {R}}(A)\oplus {\mathscr {R}}(B)={\mathscr {H}}\) which by Theorem 2.3 [22] gives that \({\mathscr {R}}(A)\) and \({\mathscr {R}}(B)\) are closed. Now \({\mathscr {R}}(A)\cap {\mathscr {R}}(B)=\{0\}\) together with the fact that \({\mathscr {R}}(A)\) is closed implies that \({\mathscr {T}}={\mathscr {N}}\left( P_{{\mathscr {R}}(A)^\bot } B\right) \cap {\mathscr {N}}(B) ^\bot =\{0\}\) which by the condition \(\mathrm (iii)\) from Theorem 3.12 gives that \({\mathscr {S}}={\mathscr {R}}\left( AP_{{\mathscr {N}}(B)}\right) ^\bot \cap \overline{{\mathscr {R}}(A)}=\{0\}\). Hence, \({\mathscr {R}}(A)={\mathscr {R}}(AP_{{\mathscr {N}}(B)})\) which implies that \({\mathscr {N}}(A)^\bot \subseteq {\mathscr {N}}(B)+{\mathscr {N}}(A)\). So, \({\mathscr {H}}={\mathscr {N}}(B)+{\mathscr {N}}(A)\). By the condition \(\mathrm{(i)}\) of Theorem 3.12, we have that \({\mathscr {N}}(A)\cap {\mathscr {N}}(B) = \lbrace 0 \rbrace \), so \({\mathscr {H}}={\mathscr {N}}(B)\oplus {\mathscr {N}}(A)\). On the other hand suppose that (3.35) holds. Evidently, \({\mathscr {R}}(A)\) and \({\mathscr {R}}(B)\) are closed and the first condition from Theorem 3.12 is satisfied. Also, \({\mathscr {R}}(A)=A({\mathscr {H}})=A({\mathscr {N}}(A)\oplus {\mathscr {N}}(B))=A({\mathscr {N}}(B))={\mathscr {R}}(AP_{{\mathscr {N}}(B)})\), so \(\mathrm (ii)\) of Theorem 3.12 is satisfied. To conclude that \(\mathrm{(iii)}\) of Theorem 3.12 is true, simply notice that \({\mathscr {T}}={\mathscr {S}}=\{0\}\).\(\Box \)
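A concrete finite-dimensional instance of Theorem 3.20 (our own example, not from the original text): the operators below have \({\mathscr {R}}(A)\cap {\mathscr {R}}(B)=\{0\}\); for the first pair both direct sum conditions in (3.35) hold and every linear combination is invertible, while for the second pair the kernels fail to be complementary and no linear combination is invertible.

```python
import numpy as np

def invertible(M, tol=1e-10):
    return np.linalg.matrix_rank(M, tol) == M.shape[0]

# R(A) = span{e1,e2}, N(A) = span{e3,e4};  R(B) = span{e3,e4}, N(B) = span{e1,e2}
A = np.array([[1., 1., 0., 0.],
              [0., 1., 0., 0.],
              [0., 0., 0., 0.],
              [0., 0., 0., 0.]])
B = np.array([[0., 0., 0., 0.],
              [0., 0., 0., 0.],
              [0., 0., 2., 1.],
              [0., 0., 0., 1.]])
# R(A) ⊕ R(B) = H and N(A) ⊕ N(B) = H: every alpha*A + beta*B is invertible
print(all(invertible(a * A + b * B) for (a, b) in [(1, 1), (2, -5), (0.1, 3)]))   # True

# same ranges, but N(B2) = span{e1, e3}, so N(A) ⊕ N(B2) != H: never invertible
B2 = np.array([[0., 0., 0., 0.],
               [0., 0., 0., 0.],
               [0., 2., 0., 1.],
               [0., 0., 0., 1.]])
print(any(invertible(a * A + b * B2) for (a, b) in [(1, 1), (2, -5), (0.1, 3)]))  # False
```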

Similarly, we get the following:

Theorem 3.21

Let \(A,B\in {\mathscr {B}}({\mathscr {H}})\) be given operators and \(\alpha ,\beta \in \mathbb {C}\setminus \lbrace 0 \rbrace \). If \(\overline{{\mathscr {R}}\left( AP_{{\mathscr {N}}(B)}\right) } =\overline{{\mathscr {R}}(A)}\), then the operator \(\alpha A + \beta B\) is invertible if and only if the following conditions hold:

\(\mathrm (i)\) :

\({\mathscr {N}}(A)\cap {\mathscr {N}}(B) = \lbrace 0 \rbrace \), \({\mathscr {R}}(A)\oplus {\mathscr {R}}(B)={\mathscr {H}}\),

\(\mathrm (ii)\) :

\(AP_{{\mathscr {N}}(B)}\) has closed range.

Proof

If \(\overline{{\mathscr {R}}\left( AP_{{\mathscr {N}}(B)}\right) }=\overline{{\mathscr {R}}(A)}\), then for \({\mathscr {S}}\) defined in Theorem 3.11 we have that \({\mathscr {S}}=\{0\}\). So the condition \((\mathrm {iii})\) from Theorem 3.11 is satisfied if and only if \({\mathscr {T}}=\{0\}\), i.e., \(\overline{{\mathscr {R}}(A)}\cap {\mathscr {R}}(B)=\{0\}\). Now, the proof follows directly from Theorem 3.11. \(\Box \)

Corollary 3.5

Let \(A,B\in {\mathscr {B}}({\mathscr {H}})\) be given operators. If one of the conditions \({\mathscr {R}}(A)\cap {\mathscr {R}}(B)=\{0\}\) and \(\overline{{\mathscr {R}}\left( AP_{{\mathscr {N}}(B)}\right) }=\overline{{\mathscr {R}}(A)}\) holds, then the invertibility of the linear combination \(\alpha A + \beta B\) is independent of the choice of the scalars \(\alpha ,\beta \in \mathbb {C}\setminus \lbrace 0 \rbrace \).

\((\mathbf{5})\) Now we will consider the case when either one of the operators \(A, B\in {\mathscr {B}}({\mathscr {H}})\) is injective.

Since the condition that \(\alpha A + \beta B\) is nonsingular is symmetrical in A and B, let us suppose that \(B\in {\mathscr {B}}({\mathscr {H}})\) is injective:

Theorem 3.22

Let \(A,B\in {\mathscr {B}}({\mathscr {H}})\) be given operators such that B is injective and \(\alpha ,\beta \in \mathbb {C}\setminus \lbrace 0 \rbrace \). Then \(\alpha A + \beta B\) is invertible if and only if the following conditions hold:

\(\mathrm (i)\) :

\({\mathscr {R}}(A)+{\mathscr {R}}(B)={\mathscr {H}}\),

\(\mathrm (ii)\) :

\(\left( \alpha A + \beta B\right) |_{B^{-1}\left( \overline{{\mathscr {R}}(A)}\right) }\) is an injective operator with range \(\overline{{\mathscr {R}}(A)}\).

Considering some special classes of operators we have seen that the invertibility of the linear combination \(\alpha A + \beta B\) is independent of the choice of the scalars \(\alpha ,\beta \in \mathbb {C}\setminus \lbrace 0 \rbrace \). Another instance of this phenomenon is provided by the following result.

Theorem 3.23

Let \(A,B\in {\mathscr {B}}({\mathscr {H}})\) be given operators and \(\alpha ,\beta \in \mathbb {C}\setminus \lbrace 0 \rbrace \). If there exists a closed subspace \({\mathscr {P}}\) such that \({\mathscr {H}}={\mathscr {N}}(B)\oplus {\mathscr {P}}\) and \(A|_{{\mathscr {P}}}=0\) or \(P_{\overline{{\mathscr {R}}(A)}}B=0\), then the invertibility of the linear combination \(\alpha A + \beta B\) is independent of the choice of the scalars.

Proof

Using the representations (3.28) and (3.29) of the operators A and B, and the representation (3.30) of \(\alpha A + \beta B\), the desired conclusion follows immediately from Theorem 3.11. \(\Box \)

3.4 Drazin Invertible Completion of an Upper Triangular Operator Matrix

In this section we will consider the existence of a Drazin invertible completion of an upper triangular operator matrix of the form

$$ \left[ \begin{array}{cc} A&{}?\\ 0&{}B\end{array}\right] : \left[ \begin{array}{cc} {\mathscr {H}}\\ {\mathscr {K}}\end{array}\right] \rightarrow \left[ \begin{array}{cc} {\mathscr {H}}\\ {\mathscr {K}}\end{array}\right] , $$

where \(A\in {\mathscr {B}}({\mathscr {H}})\) and \(B\in {\mathscr {B}}({\mathscr {K}})\) are given operators.

Throughout the section \({\mathscr {H}}, {\mathscr {K}}\) are infinite dimensional separable complex Hilbert spaces. For a given operator \(A\in \mathscr {B}(\mathscr {H},\mathscr {K})\), we set \(n(A)=\dim {\mathscr {N}}(A)\) and \(d(A)=\dim {\mathscr {R}}(A)^\bot \).

Let us recall that for \(A\in {\mathscr {B}}({\mathscr {H}})\), the smallest nonnegative integer k such that \({\mathscr {N}}(A^{k+1})={\mathscr {N}}(A^k)\) (resp. \({\mathscr {R}}(A^{k+1})={\mathscr {R}}(A^k)\)), if one exists, is called the ascent (resp. descent) of the operator A and is denoted by \(\mathrm {asc}(A)\) (resp. \(\mathrm {dsc}(A)\)); if there is no such integer k, the operator A is said to be of infinite ascent (resp. infinite descent), which is abbreviated by \(\mathrm {asc}(A)=\infty \) (resp. \(\mathrm {dsc}(A)=\infty \)). Also \(K(0,\delta )=\{\lambda \in \mathbb {C}: |\lambda |<\delta \}\) stands for the open disc with center 0 and radius \(\delta \).
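For matrices the ascent and descent can be computed by comparing nullities and ranks of successive powers (in finite dimensions they always coincide and are finite, so the sketch below only illustrates the definitions; the helper names are ours).

```python
import numpy as np

def ascent(A, kmax=20):
    # smallest k with N(A^{k+1}) = N(A^k); in finite dimensions compare nullities
    n = A.shape[0]
    nullity = lambda M: n - np.linalg.matrix_rank(M)
    P = np.eye(n)
    for k in range(kmax + 1):
        if nullity(A @ P) == nullity(P):   # P = A^k at this point
            return k
        P = A @ P
    return None                            # not reached within kmax

def descent(A, kmax=20):
    # smallest k with R(A^{k+1}) = R(A^k); compare ranks instead
    rank = np.linalg.matrix_rank
    P = np.eye(A.shape[0])
    for k in range(kmax + 1):
        if rank(A @ P) == rank(P):
            return k
        P = A @ P
    return None

J = np.diag(np.ones(2), k=1)               # 3x3 nilpotent Jordan block, J^3 = 0
print(ascent(J), descent(J))               # 3 3
M = np.array([[2., 0., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])               # invertible part plus a 2x2 nilpotent block
print(ascent(M), descent(M))               # 2 2
```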

An operator \(A\in {\mathscr {B}}({\mathscr {H}})\) is left Drazin invertible if \(\mathrm {asc}(A)<\infty \) and \({\mathscr {R}}(A^{\mathrm {asc}(A)+1})\) is closed while \(A\in {\mathscr {B}}({\mathscr {H}})\) is right Drazin invertible if \(\mathrm {dsc}(A)<\infty \) and \({\mathscr {R}}(A^{\mathrm {dsc}(A)})\) is closed.

The question of existence of Drazin invertible completions of the upper-triangular operator matrix

$$ M_C=\left[ \begin{array}{cc} A&{}C\\ 0&{}B\end{array}\right] : \left[ \begin{array}{cc} {\mathscr {H}}\\ {\mathscr {K}}\end{array}\right] \rightarrow \left[ \begin{array}{cc} {\mathscr {H}}\\ {\mathscr {K}}\end{array}\right] , $$

was addressed in [24], where some sufficient conditions were given, but the proof of the result presented there is not correct, as explained in [25].

Theorem 3.24

([24]) Let \({\mathscr {H}}\) and \({\mathscr {K}}\) be separable Hilbert spaces and \(A\in {\mathscr {B}}({\mathscr {H}})\) and \(B\in {\mathscr {B}}({\mathscr {K}})\) be given operators such that

\(\mathrm (i)\) :

A is left Drazin invertible,

\(\mathrm (ii)\) :

B is right Drazin invertible,

\(\mathrm (iii)\) :

There exists a constant \(\delta >0\) such that \(d(A-\lambda )=n(B-\lambda )\), for every \(\lambda \in K(0,\delta )\setminus \{0\}\).

Then there exists an operator \(C\in {\mathscr {B}}({\mathscr {K}},{\mathscr {H}})\) such that \(M_C\) is Drazin invertible.

In order to give a correct proof of Theorem 3.24, we will first list some auxiliary results:

Two completely different proofs of the following lemma, which will be used extensively throughout this section, can be found in [26, 27]:

Lemma 3.11

For a Banach space \({\mathscr {X}}\), a given nonnegative integer m and \(A\in {\mathscr {B}}({\mathscr {X}})\), the following conditions are equivalent:

\(\mathrm (i)\) :

\(\mathrm {dsc}(A)\le m<\infty \),

\(\mathrm (ii)\) :

\({\mathscr {N}}(A^m) + {\mathscr {R}}(A^n) ={\mathscr {X}}\), for every \(n\in \mathbb {N}\),

\(\mathrm (iii)\) :

\({\mathscr {N}}(A^m) + {\mathscr {R}}(A^n) ={\mathscr {X}}\), for some \(n\in \mathbb {N}\).
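A quick finite-dimensional check of Lemma 3.11 (illustrative only, with our own helper names): for the matrix A below, whose descent equals 2, the sum \({\mathscr {N}}(A^m)+{\mathscr {R}}(A^n)\) fills the whole space for every n exactly when \(m\ge 2\).

```python
import numpy as np

def subspace_sum_is_whole(U, V):
    # does col(U) + col(V) equal the whole space?
    return np.linalg.matrix_rank(np.hstack([U, V])) == U.shape[0]

def null_basis(M):
    # orthonormal basis of N(M) via the SVD
    u, s, vh = np.linalg.svd(M)
    r = int(np.sum(s > 1e-10))
    return vh[r:].T

A = np.array([[2., 0., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])      # dsc(A) = 2 (a 2x2 nilpotent block at 0)

for m in (1, 2, 3):
    checks = [subspace_sum_is_whole(null_basis(np.linalg.matrix_power(A, m)),
                                    np.linalg.matrix_power(A, n)) for n in (1, 2, 3, 4)]
    print(m, checks)    # m = 1: all False;  m = 2, 3: all True, matching dsc(A) = 2
```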

We will also need the following result which is proved in [27, 28].

Lemma 3.12

Let \(A\in {\mathscr {B}}({\mathscr {X}})\). We have the following

(1):

If \(\mathrm {dsc}(A)= m<\infty \), then there exists a constant \(\delta >0\) such that for every \(\lambda \in K(0,\delta )\setminus \{0\}\):    \(\mathrm (i)\) \(\mathrm {dsc}(A-\lambda )=d(A-\lambda )=0\),    \(\mathrm (ii)\) \(n(A-\lambda )=\mathrm{dim}{\mathscr {N}}(A)\cap {\mathscr {R}}(A^m)\).

(2):

If \(\mathrm {asc}(A)= m<\infty \) and \({\mathscr {R}}(A^{m+k})\) is closed for some \(k\ge 1\), then there exists a constant \(\delta >0\) such that for every \(\lambda \in K(0,\delta )\setminus \{0\}\):    \(\mathrm (i)\) \(\mathrm {asc}(A-\lambda )=n(A-\lambda )=0,\)    \(\mathrm (ii)\) \(d(A-\lambda )=\mathrm{dim}\left( {\mathscr {R}}(A^m)/{\mathscr {R}}(A^{m+1})\right) =\mathrm{dim}\left( {\mathscr {X}}/({\mathscr {R}}(A)+{\mathscr {N}}(A^m))\right) \).

The following technical lemma will be used multiple times throughout this section.

Lemma 3.13

Suppose \(B\in {\mathscr {B}}({\mathscr {K}})\) and p is a positive integer such that \({\mathscr {R}}(B^p)\) is closed. If B is represented by

$$\begin{aligned} B=\left[ \begin{array}{cc} 0&{}B_1\\ 0&{}B_2\end{array}\right] : \left[ \begin{array}{cc} {\mathscr {N}}(B)\cap {\mathscr {R}}(B^p)\\ ({\mathscr {N}}(B)\cap {\mathscr {R}}(B^p))^\bot \end{array}\right] \rightarrow \left[ \begin{array}{cc} {\mathscr {N}}(B)\cap {\mathscr {R}}(B^p)\\ ({\mathscr {N}}(B)\cap {\mathscr {R}}(B^p))^\bot \end{array}\right] , \end{aligned}$$
(3.36)

then \(B_1\) and \(B_2\) must satisfy the following three conditions:

\(\mathrm (i)\) :

The restriction of \(B_1B_2^{p-1}\) to \({\mathscr {N}}(B_2^p)\) is onto (equivalently: the restriction of \(B_1\) to the subspace \({\mathscr {R}}(B_2^{p-1})\cap {\mathscr {N}}(B_2)\) is onto)

\(\mathrm (ii)\) :

\({\mathscr {R}}(B_2^p)\subseteq {\mathscr {R}}(B^p)\),

\(\mathrm (iii)\) :

\({\mathscr {R}}(B_2^p)\cap {\mathscr {N}}(B_1)\cap {\mathscr {N}}(B_2)=\{0\}\) (equivalently: the restriction of \(B_1\) to the subspace \({\mathscr {R}}(B_2^{p})\cap {\mathscr {N}}(B_2)\) is injective).

Proof

Put \({\mathscr {S}}:= {\mathscr {N}}(B)\cap {\mathscr {R}}(B^p)\). To see that \(\mathrm (i)\) is true, notice that if \(y\in {\mathscr {S}}\) then \(\left[ \begin{array}{c} y \\ 0 \end{array}\right] = \left[ \begin{array}{c} B_1B_2^{p-1} x\\ B_2^px \end{array}\right] \) for some \(x\in {\mathscr {S}}^\bot \). To see that \(\mathrm (ii)\) is true, notice that for any \(x\in {\mathscr {S}}^\bot \) we have \(\left[ \begin{array}{c} 0\\ B_2^px \end{array}\right] = \left[ \begin{array}{c} B_1B_2^{p-1} x\\ B_2^px \end{array}\right] - \left[ \begin{array}{c} B_1B_2^{p-1} x\\ 0 \end{array}\right] \), and that by \(\mathrm (i)\) we know that \(\left[ \begin{array}{c} B_1B_2^{p-1} x\\ 0 \end{array}\right] \in {\mathscr {R}}(B^p)\). Finally, to show \(\mathrm{(iii)}\), notice that if \(y\in {\mathscr {R}}(B_2^p)\cap {\mathscr {N}}(B_1)\cap {\mathscr {N}}(B_2)\) then \(y\in {\mathscr {S}}\) by \(\mathrm (ii)\), and also \(y\in {\mathscr {S}}^\bot \), so \(y=0\).\(\Box \)

The following is a key lemma in the proof of our Theorem 3.24. Suppose that \(A\in {\mathscr {B}}({\mathscr {H}})\) is a left Drazin invertible operator, \(B\in {\mathscr {B}}({\mathscr {K}})\) is an operator with finite descent and suppose in addition that there exists a constant \(\delta >0\) such that \(d(A-\lambda )=n(B-\lambda )\), for every \(\lambda \in K(0,\delta )\setminus \{0\}\). Note that if p is any integer with \(p\ge \max \{\mathrm {asc}(A),\mathrm {dsc}(B)\}\), then \({\mathscr {R}}(A)+{\mathscr {N}}(A^p)=A^{-\mathrm {asc}(A)}[{\mathscr {R}}(A^{\mathrm {asc}(A)+1})]\) is a closed subspace of codimension equal to the dimension of the subspace \({\mathscr {N}}(B)\cap {\mathscr {R}}(B^p)\), by Lemma 3.12. Thus we can fix an invertible operator \(J\in {\mathscr {B}}(\overline{{\mathscr {N}}(B)\cap {\mathscr {R}}(B^p)},({\mathscr {R}}(A)+{\mathscr {N}}(A^p))^\bot )\). Indeed, if \({\mathscr {N}}(B)\cap {\mathscr {R}}(B^p)\) is closed then this is clear. If it is not, then it must be infinite dimensional and so must be the closed subspace \(({\mathscr {R}}(A)+{\mathscr {N}}(A^p))^\bot \). But then \(\overline{{\mathscr {N}}(B)\cap {\mathscr {R}}(B^p)}\) and \(({\mathscr {R}}(A)+{\mathscr {N}}(A^p))^\bot \) are both infinite dimensional separable Hilbert spaces and as such are isomorphic to one another.

Lemma 3.14

Let \(A\in {\mathscr {B}}({\mathscr {H}})\), \(B\in {\mathscr {B}}({\mathscr {K}})\) be given operators such that

\(\mathrm (i)\) :

A is left Drazin invertible,

\(\mathrm (ii)\) :

\(\mathrm{dsc}(B)<\infty \),

\(\mathrm (iii)\) :

There exists a constant \(\delta >0\) such that \(d(A-\lambda )=n(B-\lambda )\), for every \(\lambda \in K(0,\delta )\setminus \{0\}\).

Let \(C\in {\mathscr {B}}({\mathscr {K}},{\mathscr {H}})\) be given by

$$\begin{aligned} C=\left[ \begin{array}{cc} J&{}0\\ 0&{}0\end{array}\right] : \left[ \begin{array}{cc} \overline{{\mathscr {N}}(B)\cap {\mathscr {R}}(B^p)}\\ ({\mathscr {N}}(B)\cap {\mathscr {R}}(B^p))^\bot \end{array}\right] \rightarrow \left[ \begin{array}{cc} ({\mathscr {R}}(A)+{\mathscr {N}}(A^p))^\bot \\ {\mathscr {R}}(A)+{\mathscr {N}}(A^p)\end{array}\right] , \end{aligned}$$
(3.37)

where \(p\in \mathbb N\) is such that \(p\ge \max \{\mathrm {asc}(A),\mathrm {dsc}(B)\}\) and \(J\in {\mathscr {B}}(\overline{{\mathscr {N}}(B)\cap {\mathscr {R}}(B^p)},\) \(({\mathscr {R}}(A)+{\mathscr {N}}(A^p))^\bot )\) is any invertible operator. The following are equivalent:

\(\mathrm (i)\) :

\(\mathrm {dsc}(M_C)\le p\),

\(\mathrm (ii)\) :

for any \(x\in {\mathscr {H}}\) and \(y\in {\mathscr {K}}\), there exist \(x'\in {\mathscr {H}}\) and \(y'\in {\mathscr {K}}\) such that

$$\begin{aligned} A^px=A^{p+1}x'+A^{p}Cy', \end{aligned}$$
(3.38)

and

$$\begin{aligned} y-By'\in {\mathscr {N}}(C)\cap {\mathscr {N}}(CB)\cap ...\cap {\mathscr {N}}(CB^{p-1})\cap {\mathscr {N}}(B^p). \end{aligned}$$
(3.39)
\(\mathrm (iii)\) :

\({\mathscr {K}}={\mathscr {R}}(B)+{\mathscr {N}}(C)\cap {\mathscr {N}}(CB)\cap {\mathscr {N}}(CB^2)\cap \dots \cap {\mathscr {N}}(CB^{p-1})\cap {\mathscr {N}}(B^p)\).

Proof

\(\mathrm (i)\Leftrightarrow (ii)\) Since for any \(k\in \mathbb {N}\)

$$ M_C^k=\left[ \begin{array}{cc} A^k&{}A^{k-1}C+A^{k-2}CB+...+ACB^{k-2}+CB^{k-1}\\ 0&{}B^k\end{array}\right] : \left[ \begin{array}{cc} {\mathscr {H}}\\ {\mathscr {K}}\end{array}\right] \rightarrow \left[ \begin{array}{cc} {\mathscr {H}}\\ {\mathscr {K}}\end{array}\right] , $$

it follows that \(\mathrm {dsc}(M_C)\le p\) if and only if for any \(x\in {\mathscr {H}}\) and \(y\in {\mathscr {K}}\), there exist \(x'\in {\mathscr {H}}\) and \(y'\in {\mathscr {K}}\) such that

$$\begin{aligned}&A^px+A^{p-1}Cy+A^{p-2}CBy+...+ACB^{p-2}y+CB^{p-1}y=\nonumber \\&A^{p+1}x'+A^{p}Cy'+A^{p-1}CBy'+...+ACB^{p-1}y'+CB^{p}y'\\&\text {and } B^py=B^{p+1}y'.\nonumber \end{aligned}$$
(3.40)

The case \(p=1\) is evident, so suppose that \(p>1\). If we suppose that \(\mathrm {dsc}(M_C)\le p\), by the second equality in (3.40) we get that \(y-By'\in {\mathscr {N}}(B^p)\). Since \({\mathscr {R}}(C)\subseteq {\mathscr {R}}(A)^\bot \), by the first equality in (3.40) we get that \(y-By'\in {\mathscr {N}}(CB^{p-1})\) and

$$\begin{aligned} \begin{aligned}&A^px+A^{p-1}Cy+A^{p-2}CBy+...+ACB^{p-2}y=\\&A^{p+1}x'+A^{p}Cy'+A^{p-1}CBy'+...+ACB^{p-1}y'. \end{aligned} \end{aligned}$$
(3.41)

By (3.41), we have that

$$ A^{p-1}x+A^{p-2}Cy+...+CB^{p-2}y-(A^px'+A^{p-1}Cy'+...+CB^{p-1}y')\in {\mathscr {N}}(A)\subseteq {\mathscr {N}}(A^p) $$

which implies that \(CB^{p-2}y-CB^{p-1}y'\in {\mathscr {N}}(A^p)+{\mathscr {R}}(A)\) and hence, since \({\mathscr {R}}(C)\subseteq ({\mathscr {R}}(A)+{\mathscr {N}}(A^p))^\bot \), that \(y-By'\in {\mathscr {N}}(CB^{p-2})\). Continuing in the same manner, we get that (3.39) holds. Now, by (3.40) it follows that (3.38) is also satisfied.

If \(\mathrm (ii)\) holds, then evidently (3.40) is satisfied, i.e., \(\mathrm {dsc}(M_C)\le p\).

\(\mathrm (ii)\Rightarrow (iii)\) This is evident, since (3.39) gives \(y=By'+(y-By')\in {\mathscr {R}}(B)+{\mathscr {N}}(C)\cap {\mathscr {N}}(CB)\cap ...\cap {\mathscr {N}}(CB^{p-1})\cap {\mathscr {N}}(B^p)\) for every \(y\in {\mathscr {K}}\).

\(\mathrm (iii)\Rightarrow (ii)\) Let \(x\in {\mathscr {H}}\) and \(y\in {\mathscr {K}}\) be arbitrary. Then there exists \(y_0\in {\mathscr {K}}\) such that

$$ y-By_0\in {\mathscr {N}}(C)\cap {\mathscr {N}}(CB)\cap ...\cap {\mathscr {N}}(CB^{p-1})\cap {\mathscr {N}}(B^p). $$

Let \({\mathscr {S}}={\mathscr {R}}(A)+{\mathscr {N}}(A^p)\). By the definition of the operator C, for the given x there exists \(y_{00}\in \overline{{\mathscr {N}}(B)\cap {\mathscr {R}}(B^p)}\) such that \((I-P_{{\mathscr {S}}})x=Jy_{00}=Cy_{00}\). Since \({\mathscr {N}}(B)\) is closed, we have \(y_{00}\in {\mathscr {N}}(B)\), i.e., \(By_{00}=0\). Define \(y'=P_{{\mathscr {N}}(B)^\bot }y_0+y_{00}\). Then \(By'=By_0\) and, since \({\mathscr {N}}(B)^\bot \subseteq ({\mathscr {N}}(B)\cap {\mathscr {R}}(B^p))^\bot \subseteq {\mathscr {N}}(C)\), also \(Cy'=Cy_{00}\), which implies that

$$ y-By'\in {\mathscr {N}}(C)\cap {\mathscr {N}}(CB)\cap ...\cap {\mathscr {N}}(CB^{p-1})\cap {\mathscr {N}}(B^p) $$

and that

$$\begin{aligned} (I-P_{{\mathscr {S}}})x=Cy'. \end{aligned}$$
(3.42)

Now, \(A^px=A^pCy'+A^pP_{{\mathscr {S}}}x\). Since \(P_{{\mathscr {S}}}x\in {\mathscr {R}}(A)+{\mathscr {N}}(A^p)\) it follows that \(A^pP_{{\mathscr {S}}}x\in {\mathscr {R}}(A^{p+1})\) so there exists \(x'\in {\mathscr {H}}\) such that \(A^pP_{{\mathscr {S}}}x=A^{p+1}x'\). Now,

$$ A^px=A^{p+1}x'+A^{p}Cy'. $$

\(\Box \)
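The proof above rests on two elementary facts: the explicit block form of \(M_C^k\) and the observation that \(\mathrm {dsc}(M_C)\le p\) amounts to \({\mathscr {R}}(M_C^p)={\mathscr {R}}(M_C^{p+1})\). Both are easy to sanity-check numerically in finite dimensions. The sketch below uses hypothetical random blocks A, B, C, so it verifies only the block power formula and the rank test for descent, not the lemma itself (whose operator C is the special one from (3.37)).

```python
import numpy as np

def blocks_to_MC(A, C, B):
    # assemble the upper triangular operator matrix M_C = [[A, C], [0, B]]
    h, k = A.shape[0], B.shape[0]
    return np.block([[A, C], [np.zeros((k, h)), B]])

def offdiag_block(A, C, B, k):
    # the (1,2) block of M_C^k: A^{k-1}C + A^{k-2}CB + ... + ACB^{k-2} + CB^{k-1}
    return sum(np.linalg.matrix_power(A, k - 1 - j) @ C @ np.linalg.matrix_power(B, j)
               for j in range(k))

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((4, 4))
C = rng.standard_normal((3, 4))
M = blocks_to_MC(A, C, B)

k = 5
Mk = np.linalg.matrix_power(M, k)
assert np.allclose(Mk[:3, 3:], offdiag_block(A, C, B, k))      # block power formula
assert np.allclose(Mk[:3, :3], np.linalg.matrix_power(A, k))
assert np.allclose(Mk[3:, 3:], np.linalg.matrix_power(B, k))

def descent_at_most(M, p, tol=1e-9):
    # in finite dimensions: dsc(M) <= p  iff  rank(M^p) = rank(M^{p+1})
    return (np.linalg.matrix_rank(np.linalg.matrix_power(M, p), tol)
            == np.linalg.matrix_rank(np.linalg.matrix_power(M, p + 1), tol))

print(descent_at_most(M, 1))   # True almost surely, since a random M is invertible
```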

Now, we are ready to make clear which conditions on the operators A and B are necessary for the existence of some \(C\in {\mathscr {B}}({\mathscr {K}},{\mathscr {H}})\) such that the operator \(M_C\) is Drazin invertible. Combining Lemma 2.6 from [29], Lemma 3.12 and Theorem 3.24, we obtain the following result:

Theorem 3.25

Let \(A\in {\mathscr {B}}({\mathscr {H}})\) and \(B\in {\mathscr {B}}({\mathscr {K}})\) be given operators. If there exists an operator \(C\in {\mathscr {B}}({\mathscr {K}},{\mathscr {H}})\) such that \(M_C\) is Drazin invertible, then the following hold:

\(\mathrm (i)\) :

\(\mathrm {asc}(A)<\infty \),

\(\mathrm (ii)\) :

\(\mathrm {dsc}(B)<\infty \),

\(\mathrm (iii)\) :

There exists a constant \(\delta >0\) such that \(A-\lambda \) is left invertible, \(B-\lambda \) is right invertible and

$$ d(A-\lambda )=n(B-\lambda )=\mathrm{dim}\,({\mathscr {N}}(B)\cap {\mathscr {R}}(B^{\mathrm {dsc}(B)})), $$

for every \(\lambda \in K(0,\delta )\setminus \{0\}\).

We will show that the three conditions above, together with the assumption that both subspaces \({\mathscr {R}}(A^{\mathrm {asc}(A)+1})\) and \({\mathscr {R}}(B^{\mathrm {dsc}(B)})\) are closed (which means that A is left Drazin invertible and B is right Drazin invertible), are in fact sufficient for the existence of a Drazin invertible completion of the operator matrix in question.

In [16], the authors correctly showed that \(\mathrm {asc}(M_C)<\infty \), for \(C\in {\mathscr {B}}({\mathscr {K}},{\mathscr {H}})\) given by the following:

$$\begin{aligned} C=\left[ \begin{array}{cc} J&{}0\\ 0&{}0\end{array}\right] : \left[ \begin{array}{cc} {\mathscr {N}}(B)\cap {\mathscr {R}}(B^p)\\ ({\mathscr {N}}(B)\cap {\mathscr {R}}(B^p))^\bot \end{array}\right] \rightarrow \left[ \begin{array}{cc} ({\mathscr {R}}(A)+{\mathscr {N}}(A^p))^\bot \\ {\mathscr {R}}(A)+{\mathscr {N}}(A^p)\end{array}\right] , \end{aligned}$$
(3.43)

where \(p\ge \mathrm {max}\{\mathrm {asc}(A),\mathrm {dsc}(B)\}\) and J is an invertible operator. However, we will show that the operator C as defined in (3.37), which differs from (3.43) only in that the first domain block is replaced by its closure, indeed does the trick, i.e., that \(M_C\) is Drazin invertible. To show this properly, we first give an equivalent description of exactly when the operator \(M_C\) is Drazin invertible for this particular choice of C.

Theorem 3.26

Let \(A\in {\mathscr {B}}({\mathscr {H}})\), \(B\in {\mathscr {B}}({\mathscr {K}})\) be given operators such that

\(\mathrm (i)\) :

A is left Drazin invertible,

\(\mathrm (ii)\) :

\(\mathrm{dsc}(B)<\infty \),

\(\mathrm (iii)\) :

There exists a constant \(\delta >0\) such that \(d(A-\lambda )=n(B-\lambda )\), for every \(\lambda \in K(0,\delta )\setminus \{0\}\).

Then \(M_C\) is Drazin invertible for \(C\in {\mathscr {B}}({\mathscr {K}},{\mathscr {H}})\) given by (3.37) if and only if

$$ {\mathscr {K}}={\mathscr {R}}(B)+{\mathscr {N}}(C)\cap {\mathscr {N}}(CB)\cap {\mathscr {N}}(CB^2)\cap \dots \cap {\mathscr {N}}(CB^{p-1})\cap {\mathscr {N}}(B^p). $$

Proof

In [23] it is proved that \(\mathrm {asc}(M_C)\le p\). Thus we can conclude that \(M_C\) is Drazin invertible if and only if \(\mathrm {dsc}(M_C)\le p\). Now the assertion follows by Lemma 3.14. \(\Box \)
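The criterion of Theorem 3.26 is a concrete subspace condition: the intersection \({\mathscr {N}}(C)\cap {\mathscr {N}}(CB)\cap \dots \cap {\mathscr {N}}(CB^{p-1})\cap {\mathscr {N}}(B^p)\) is the null space of a single stacked operator, and its sum with \({\mathscr {R}}(B)\) fills \({\mathscr {K}}\) exactly when a rank count says so. The following sketch only shows how one would evaluate this condition in a finite-dimensional model for hypothetical, arbitrarily chosen B and C; it is not the special completion (3.37), so no conclusion about Drazin invertibility is drawn from it.

```python
import numpy as np
from scipy.linalg import null_space

def completion_condition(B, C, p, tol=1e-9):
    # N(C) ∩ N(CB) ∩ ... ∩ N(CB^{p-1}) ∩ N(B^p) is the null space of the stacked map
    stacked = np.vstack([C @ np.linalg.matrix_power(B, j) for j in range(p)]
                        + [np.linalg.matrix_power(B, p)])
    N = null_space(stacked)            # orthonormal basis of the intersection
    k = B.shape[0]
    # test whether K = R(B) + (the intersection) via a rank computation
    return np.linalg.matrix_rank(np.hstack([B, N]), tol) == k

# hypothetical example: B is a 3x3 nilpotent Jordan block, so dsc(B) = 3
B = np.zeros((3, 3)); B[0, 1] = 1.0; B[1, 2] = 1.0
C1 = np.zeros((2, 3))                  # C = 0: condition reduces to K = R(B) + N(B^p)
C2 = np.zeros((2, 3)); C2[0, 0] = 1.0  # a different hypothetical C

print(completion_condition(B, C1, p=3), completion_condition(B, C2, p=3))   # True False
```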

Remark

If \(B\in {\mathscr {B}}({\mathscr {K}})\) is right Drazin invertible and is given by (3.36), where \(p= \mathrm {dsc}(B)\), and if \(C\in {\mathscr {B}}({\mathscr {K}},{\mathscr {H}})\) is given by (3.37), then

$$\begin{aligned} \begin{aligned} {\mathscr {N}}(C)\cap {\mathscr {N}}(CB)\cap {\mathscr {N}}(CB^2)\cap \dots \cap {\mathscr {N}}(CB^{p-1})\cap {\mathscr {N}}(B^p)= \\ {\mathscr {N}}(B_1)\cap {\mathscr {N}}(B_1B_2)\cap \dots \cap {\mathscr {N}}(B_1B_2^{p-1})\cap {\mathscr {N}}(B_2^p)\, . \end{aligned} \end{aligned}$$
(3.44)

Indeed, this is a consequence of the following equalities:

$${\mathscr {N}}(C)= [{\mathscr {N}}(B)\cap {\mathscr {R}}(B^p)]^\bot ,\ {\mathscr {N}}(CB^k)=[{\mathscr {N}}(B)\cap {\mathscr {R}}(B^p)]\oplus {\mathscr {N}}(B_1B_2^{k-1})\, ,$$

the latter of which follows from the representation

$$CB^k= \left[ \begin{array}{cc} 0&{}JB_1B_2^{k-1}\\ 0&{}0\end{array}\right] : \left[ \begin{array}{cc} {\mathscr {N}}(B)\cap {\mathscr {R}}(B^p)\\ ({\mathscr {N}}(B)\cap {\mathscr {R}}(B^p))^\bot \end{array}\right] \rightarrow \left[ \begin{array}{cc} ({\mathscr {R}}(A)+{\mathscr {N}}(A^p))^\bot \\ {\mathscr {R}}(A)+{\mathscr {N}}(A^p)\end{array}\right] \, .$$

Since we make use of Lemma 3.13 in the following theorem, in contrast to the previous auxiliary results we must here assume that B is right Drazin invertible.

Theorem 3.27

Let \(A\in {\mathscr {B}}({\mathscr {H}})\), \(B\in {\mathscr {B}}({\mathscr {K}})\) be given operators such that

\(\mathrm (i)\) :

A is left Drazin invertible,

\(\mathrm (ii)\) :

B is right Drazin invertible,

\(\mathrm (iii)\) :

There exists a constant \(\delta >0\) such that \(d(A-\lambda )=n(B-\lambda )\), for every \(\lambda \in K(0,\delta )\setminus \{0\}\).

Then \(M_C\) is Drazin invertible for \(C\in {\mathscr {B}}({\mathscr {K}},{\mathscr {H}})\) given by (3.37).

Proof

Let B be given by (3.36). Suppose first that \(p= 1\). By Theorem 3.26, to prove that \(M_C\) is Drazin invertible for \(C\in {\mathscr {B}}({\mathscr {K}},{\mathscr {H}})\) given by (3.37) it is sufficient to prove that \({\mathscr {K}}={\mathscr {R}}(B)+{\mathscr {N}}(C)\cap {\mathscr {N}}(B)\). Since \(\mathrm {dsc}(B)= 1\), by Lemma 3.11 it follows that \({\mathscr {K}}={\mathscr {R}}(B)+{\mathscr {N}}(B)\). Put \({\mathscr {S}}={\mathscr {N}}(B)\cap {\mathscr {R}}(B)\). As \({\mathscr {N}}(B)={\mathscr {S}}\oplus {\mathscr {N}}(B_1)\cap {\mathscr {N}}(B_2)\), and \({\mathscr {S}}\subseteq {\mathscr {R}}(B)\), it follows that \({\mathscr {K}}= {\mathscr {R}}(B)+{\mathscr {N}}(B_1)\cap {\mathscr {N}}(B_2)\). Since \({\mathscr {N}}(C)\cap {\mathscr {N}}(B)= {\mathscr {N}}(B_1)\cap {\mathscr {N}}(B_2)\), we have \({\mathscr {K}}={\mathscr {R}}(B)+{\mathscr {N}}(C)\cap {\mathscr {N}}(B)\).

Now, consider the case when \(p>1\). By Theorem 3.26, we have to prove that

$$ {\mathscr {K}}={\mathscr {R}}(B)+{\mathscr {N}}(C)\cap {\mathscr {N}}(CB)\cap {\mathscr {N}}(CB^2)\cap \dots \cap {\mathscr {N}}(CB^{p-1})\cap {\mathscr {N}}(B^p) $$

which is by (3.44) from the preceding remark equivalent with

$$ {\mathscr {K}}={\mathscr {R}}(B)+{\mathscr {N}}(B_1)\cap {\mathscr {N}}(B_1B_2)\cap \dots \cap {\mathscr {N}}(B_1B_2^{p-1})\cap {\mathscr {N}}(B_2^p)\, . $$

Since \({\mathscr {K}}={\mathscr {R}}(B)+{\mathscr {N}}(B^p)\), which is equivalent with

$$ {\mathscr {K}}={\mathscr {R}}(B)+{\mathscr {N}}(B_1B_2^{p-1})\cap {\mathscr {N}}(B_2^{p}), $$

it is sufficient to prove that

$$\begin{aligned} \begin{aligned}&{\mathscr {N}}(B_1B_2^{p-1})\cap {\mathscr {N}}(B_2^{p})\subseteq \\&{\mathscr {R}}(B)+{\mathscr {N}}(B_1)\cap {\mathscr {N}}(B_1B_2)\cap \dots \cap {\mathscr {N}}(B_1B_2^{p-1})\cap {\mathscr {N}}(B_2^p). \end{aligned} \end{aligned}$$
(3.45)

Take arbitrary \(x\in {\mathscr {N}}(B_1B_2^{p-1})\cap {\mathscr {N}}(B_2^{p})\). Now \(B_1B_2^{p-2}\in {\mathscr {B}}(({\mathscr {N}}(B)\cap {\mathscr {R}}(B^p))^\bot , {\mathscr {N}}(B)\cap {\mathscr {R}}(B^p))\) so \(B_1B_2^{p-2}x\in {\mathscr {N}}(B)\cap {\mathscr {R}}(B^p)\). Lemma 3.13 says that the operator \(B_1B_2^{p-1}\in {\mathscr {B}}(({\mathscr {N}}(B)\cap {\mathscr {R}}(B^p))^\bot ,{\mathscr {N}}(B)\cap {\mathscr {R}}(B^p))\) maps the subspace \({\mathscr {N}}(B_2^p)\) onto \({\mathscr {N}}(B)\cap {\mathscr {R}}(B^p)\). Hence there exists \(y\in {\mathscr {N}}(B_2^p)\) such that \(B_1B_2^{p-2}x=B_1B_2^{p-1}y\). Now, \(x-B_2y\in {\mathscr {N}}(B_1B_2^{p-2})\cap {\mathscr {N}}(B_1B_2^{p-1})\cap {\mathscr {N}}(B_2^{p})\), which together with \(\mathrm{(ii)}\) of Lemma 3.13 gives that \(x\in {\mathscr {R}}(B)+{\mathscr {N}}(B_1B_2^{p-2})\cap {\mathscr {N}}(B_1B_2^{p-1})\cap {\mathscr {N}}(B_2^{p})\). We have thus shown that \({\mathscr {N}}(B_1B_2^{p-1})\cap {\mathscr {N}}(B_2^{p})\subseteq {\mathscr {R}}(B)+{\mathscr {N}}(B_1B_2^{p-2})\cap {\mathscr {N}}(B_1B_2^{p-1})\cap {\mathscr {N}}(B_2^{p})\).

Continuing in the same manner, we successively obtain

$$\begin{aligned}&{\mathscr {N}}(B_1B_2^{p-2})\cap {\mathscr {N}}(B_1B_2^{p-1})\cap {\mathscr {N}}(B_2^{p})\subseteq \\&{\mathscr {R}}(B)+{\mathscr {N}}(B_1B_2^{p-3})\cap {\mathscr {N}}(B_1B_2^{p-2})\cap {\mathscr {N}}(B_1B_2^{p-1})\cap {\mathscr {N}}(B_2^{p}), \end{aligned}$$

..., and finally

$$\begin{aligned}&{\mathscr {N}}(B_1B_2)\cap \dots \cap {\mathscr {N}}(B_1B_2^{p-1})\cap {\mathscr {N}}(B_2^p) \subseteq \\&{\mathscr {R}}(B)+{\mathscr {N}}(B_1)\cap {\mathscr {N}}(B_1B_2)\cap \dots \cap {\mathscr {N}}(B_1B_2^{p-1})\cap {\mathscr {N}}(B_2^p). \end{aligned}$$

Taking into account all these inclusions, we immediately get (3.45). \(\Box \)
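Although the completion (3.37) degenerates to \(C=0\) in finite dimensions (where \(({\mathscr {R}}(A)+{\mathscr {N}}(A^p))^\bot =\{0\}\) and \({\mathscr {N}}(B)\cap {\mathscr {R}}(B^p)=\{0\}\) once \(p\) is at least the respective indices), the fact that the choice of C matters is already visible for matrices: a poorly chosen C can push the index of \(M_C\) above \(p=\max \{\mathrm {asc}(A),\mathrm {dsc}(B)\}\), while \(C=0\) never does. A minimal hypothetical check, not taken from the text:

```python
import numpy as np

def drazin_index(M, tol=1e-10):
    # smallest k with rank(M^k) = rank(M^{k+1}); equals asc(M) = dsc(M) for matrices
    n = M.shape[0]
    ranks = [np.linalg.matrix_rank(np.linalg.matrix_power(M, k), tol) for k in range(n + 2)]
    return next(k for k in range(n + 1) if ranks[k] == ranks[k + 1])

A = np.zeros((1, 1))          # asc(A) = 1
B = np.zeros((1, 1))          # dsc(B) = 1, so p = 1
M_bad  = np.block([[A, np.ones((1, 1))],  [np.zeros((1, 1)), B]])   # C = 1
M_good = np.block([[A, np.zeros((1, 1))], [np.zeros((1, 1)), B]])   # C = 0

print(drazin_index(M_bad), drazin_index(M_good))   # 2 1 : the bad completion exceeds p
```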

Open question: Can at least one of the conditions \(\mathrm (i)\) and \(\mathrm (ii)\) in Theorem 3.27 (if not both) be relaxed to the weaker requirements \(\mathrm {asc}(A)<\infty \) and \(\mathrm {dsc}(B)<\infty \), respectively?