1 Introduction

The set of all order-\(n\) complex (real) tensors is denoted by \(\mathbb {C}^{I_1\times \cdots \times I_n}\) (resp. \(\mathbb {R}^{I_1\times \cdots \times I_n}\)). An element of a tensor \(\mathcal {A}\) is denoted by \(\mathcal {A}_{i_1...i_n}\). The sum of tensors \(\mathcal {A}, ~\mathcal {B}\in \mathbb {C}^{I_1\times \cdots \times I_n \times J_1 \times \cdots \times J_n }\) is defined as:

$$\begin{aligned} (\mathcal {A} + \mathcal {B})_{i_1...i_nj_1...j_n} =\mathcal {A}_{{i_1...i_n}{j_1...j_n}} + \mathcal {B}_{{i_1...i_n}{j_1...j_n}}. \end{aligned}$$
(1.1)

However, there is no unique way to define the product of two tensors, and different tensor products have been investigated frequently (Sun et al. 2016). This paper concentrates on one such product, the Einstein product (see Einstein 1916; Lai et al. 2009). The Einstein product of \(\mathcal {A} \in \mathbb {C}^{I_1\times \cdots \times I_n \times K_1 \times \cdots \times K_n }\) and \(\mathcal {B} \in \mathbb {C}^{K_1\times \cdots \times K_n \times J_1 \times \cdots \times J_m}\) is denoted by \( \mathcal {A}{*_n}\mathcal {B}\) and is defined by:

$$\begin{aligned} (\mathcal {A}{*_n}\mathcal {B})_{i_1...i_nj_1...j_m} =\displaystyle \sum _{k_1,\ldots ,k_n}\mathcal {A}_{{i_1...i_n}{k_1...k_n}}\mathcal {B}_{{k_1...k_n}{j_1...j_m}}\in \mathbb {C}^{I_1\times \cdots \times I_n \times J_1 \times \cdots \times J_m }. \end{aligned}$$
(1.2)
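For readers who wish to experiment numerically, the contraction (1.2) coincides with `numpy.tensordot`. The following minimal sketch is our own illustration (the function name is ours, not taken from the cited works):

```python
import numpy as np

def einstein_product(A, B, n):
    """Einstein product A *_n B, as in (1.2): contract the trailing
    n modes of A with the leading n modes of B."""
    return np.tensordot(A, B, axes=n)

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3, 4, 5))  # shape I1 x I2 x K1 x K2 (n = 2)
B = rng.standard_normal((4, 5, 6))     # shape K1 x K2 x J1     (m = 1)
C = einstein_product(A, B, 2)          # shape I1 x I2 x J1
```

The resulting tensor `C` has shape \(I_1\times I_2\times J_1\), and each entry is the double sum over \(k_1,k_2\) from (1.2).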

Inverses and generalized inverses of matrices and tensors have significantly impacted many areas of theoretical and computational mathematics (Sun et al. 2018; Behera et al. 2019; Sahoo et al. 2020). Brazell et al. (2013) introduced the tensor inverse via the Einstein product. Then, Sun et al. (2016) and subsequently Behera and Mishra (2017) extended it to generalized inverses of tensors, since tensor decompositions are necessary for developing efficient algorithms for solving multilinear systems. Liang et al. (2019) discussed the Schur and \(\mathcal {L}\, \mathcal {U}\) decompositions of tensors via the Einstein product. Also, Ji and Wei (2017) discussed the weighted Moore–Penrose inverse and least-squares solutions for a multilinear system. Thereafter, the authors of Behera et al. (2018), Liang and Zheng (2019) added a few more representations through full-rank splittings of arbitrary tensors. Representations of the null space and range were discussed in Ji and Wei (2018), Stanimirović et al. (2018) via the Einstein product. Recently, Stanimirović et al. (2018) investigated conditions for the existence of tensor outer inverses using a specific algebraic approach.

It is well known that outer inverses of matrices (Drazin 2012) have been making their presence felt in many pure and applied areas of mathematics, e.g., in iterative methods for solving singular operator equations (Nashed and Chen 1993), stable approximations of ill-posed problems, rank-deficient generalized inverses (Nashed 1976; Zheng and Bapat 2004), and statistics (Chen and Chen 1985). Several characterizations of the outer inverse can be found in Chen and Chen (2000), Sheng and Chen (2007), Wei (1998), and Wei and Zhang (2004). As tensors are generalizations of matrices and vectors, the study of generalized tensor inversion initiates a multitude of different research topics. Many attractive mathematical properties of generalized tensor inverses motivate us to study outer inverses of tensors via the Einstein product. Well-known results on numerical methods for computing outer inverses are surveyed in the monographs (Wang et al. 2018; Wei et al. 2018). Additional results about the perturbation bounds for the Moore–Penrose inverse and the Core, Core-EP, CMP, and DMP inverses of tensors via the Einstein product can be found in the references (Ding and Wei 2016; Jin et al. 2017; Ma 2018; Ma et al. 2019; Miao et al. 2020; Wang et al. 2020; Du et al. 2019; Wang et al. 2020; Xie et al. 2019; Wang and Wei 2017).

A summary of the main facets of this discussion is given in the following bullet points.

  • Further elaboration on the reshape operation and characterizations of different generalized inverses of tensors.

  • A few representations of the outer inverse of tensors and its relationships with other generalized inverses, such as the group inverse, the Moore–Penrose inverse, the Drazin inverse, and the weighted Moore–Penrose inverse, are examined.

  • Computationally effective representations of generalized inverses of tensors are derived through the tensor QR decomposition, and an effective algorithm for computing outer inverses of tensors is presented.

  • An application of the outer inverse of tensors to colour image deblurring is demonstrated.

An important element of this study is to establish the conditions under which the Moore–Penrose inverse, the weighted Moore–Penrose inverse, the group inverse, and the Drazin inverse coincide. The main advantage of this study is that the algorithm developed for the outer inverse of a tensor can be easily extended to many important classes of generalized inverses.

The manuscript is organized by sections as follows. Section 2 describes preliminary information and provides an overview of notations and definitions. Section 3 presents our main results in two parts. In the first part, a few results on reshape operations along with some characterizations of outer inverses are established. The computation of the outer inverse and other generalized inverses through the tensor QR decomposition is discussed in the second part. A few numerical examples and one application to image reconstruction are discussed in Sect. 4. Finally, we conclude the work in Sect. 5.

2 Preliminaries

To increase the efficiency of the presentation, we use some additional notations, which will be used from here onwards:

$$\begin{aligned} \mathbf{N} (k) = N_1\times N_2 \times \cdots \times N_k,\ \ \mathbf{n} (k) = \left\{ n_1,n_2,\ldots , n_k | 1\le n_j \le N_j,~ j = 1, \ldots , k\right\} . \end{aligned}$$

Using the above notation, the range and null space of a tensor are defined (Ji and Wei 2018; Stanimirović et al. 2018) as follows:

$$\begin{aligned} \mathscr {R}(\mathcal {A}) = \left\{ \mathcal {A}{*_n}\mathcal {U}:~\mathcal {U}\in \mathbb {C}^\mathbf{N (n)}\right\} \text{ and } \mathscr {N}(\mathcal {A})=\left\{ \mathcal {V}:~\mathcal {A}{*_n}\mathcal {V}=\mathcal {O}\in \mathbb {C}^\mathbf{M (m)}\right\} , \end{aligned}$$

where \(\mathcal {A}\in \mathbb {C}^\mathbf{M (m) \times \mathbf{N} (n)}\) and \(\mathcal {O}\) denotes a zero tensor of proper dimensions. Using these facts about the range space of a tensor, Ji and Wei (2018) introduced the index of a tensor, denoted by \(\mathrm {ind}(\mathcal {A})\) and defined as the smallest non-negative integer m such that \(\mathscr {R}(\mathcal {A}^m)=\mathscr {R}(\mathcal {A}^{m+1}).\)

Next, we present the definition of the tensor Moore–Penrose inverse, which was introduced by the authors of Sun et al. (2016). Let \(\mathcal {A}\in \mathbb {C}^\mathbf{M (m) \times \mathbf{N} (n)}\). The tensor \(\mathcal {Y}\in \mathbb {C}^\mathbf{N (n) \times \mathbf{M} (m)}\) satisfying:

$$\begin{aligned} \begin{aligned}&(1)\quad \mathcal {A}*_{{n}}\mathcal {Y}*_{{m}}\mathcal {A} = \mathcal {A};\qquad \quad \, (2)\quad \mathcal {Y}*_{{m}}\mathcal {A}*_{{n}}\mathcal {Y} = \mathcal {Y};\\&(3)\quad (\mathcal {A}*_{{n}}\mathcal {Y})^{*} = \mathcal {A}*_{{n}}{\mathcal {Y}};\qquad (4)\quad (\mathcal {Y}*_{{m}}\mathcal {A})^{*} = \mathcal {Y}*_{{m}}\mathcal {A}, \end{aligned} \end{aligned}$$

is called the Moore–Penrose inverse of \(\mathcal {A}\) and is denoted by \(\mathcal {A}^\dagger \). Furthermore, we recall the definition of the outer inverse from Stanimirović et al. (2018).
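Anticipating the reshape map of Sect. 2.1, the tensor Moore–Penrose inverse can be computed in practice by matricizing, applying the matrix pseudoinverse, and reshaping back. The sketch below is our own illustration (the function name is ours):

```python
import numpy as np

def tensor_pinv(A, m):
    """Moore-Penrose inverse of a tensor A in C^{M(m) x N(n)},
    computed as rsh^{-1}(pinv(rsh(A)))."""
    M, N = A.shape[:m], A.shape[m:]
    Amat = A.reshape(np.prod(M), np.prod(N))    # rsh(A)
    return np.linalg.pinv(Amat).reshape(N + M)  # rsh^{-1} of the pinv

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 3, 4, 5))           # M(2) = 2x3, N(2) = 4x5
Y = tensor_pinv(A, 2)                           # shape 4x5x2x3
```

One can then verify numerically that `Y` satisfies the four Penrose equations above (with `np.tensordot` playing the role of the Einstein product).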

Definition 2.1

A tensor \(\mathcal {Y}\) represents the outer inverse of \(\mathcal {A}\in \mathbb {C}^\mathbf{M (m) \times \mathbf{N} (n)}\) with prescribed range space \({\mathscr {R}}(\mathcal {B})\) and null space \({\mathscr {N}}(\mathcal {C})\) if it satisfies:

$$\begin{aligned} \begin{aligned} \mathcal {Y}*_{{n}}\mathcal {A}*_{{n}}\mathcal {Y} = \mathcal {Y},~ {\mathscr {R}}(\mathcal {Y})={\mathscr {R}}(\mathcal {B}), \text{ and } {\mathscr {N}}(\mathcal {Y})={\mathscr {N}}({\mathcal {C}}), \end{aligned} \end{aligned}$$

and denoted by \(\mathcal {A}^{(2)}_{{\mathscr {R}}(\mathcal {B}),{\mathscr {N}}({\mathcal {C}})}\).

Let \(S\subseteq \{1,2,3,4\}\). The set of tensors satisfying the tensor equations indexed by the set S is denoted by \(\mathcal {A}\{S\}\). A tensor \(\mathcal {Y}\) from the set \(\mathcal {A}\{S\}\) satisfying \({\mathscr {R}}(\mathcal {Y})={\mathscr {R}}(\mathcal {B})\) (resp. \({\mathscr {N}}(\mathcal {Y}) = {\mathscr {N}}(\mathcal {C})\)) is denoted by \(\mathcal {A}^{(S)}_{{\mathscr {R}}(\mathcal {B}),*}\) (resp. \(\mathcal {A}^{(S)}_{*,{\mathscr {N}}(\mathcal {C})}\)). The weighted Moore–Penrose inverse of a tensor was introduced in Ji and Wei (2017).

Definition 2.2

(Ji and Wei 2017) Let \(\mathcal {A} \in \mathbb {C}^\mathbf{M (m) \times \mathbf{N} (n)} \) and \(\mathcal {Y}\in \mathbb {C}^\mathbf{N (n) \times \mathbf{M} (m)}\). Suppose \(\mathcal {M}\) and \(\mathcal {N}\) are Hermitian positive-definite tensors, such that \(\mathcal {M} \in \mathbb {C}^\mathbf{M (m) \times \mathbf{M} (m)}\) and \(\mathcal {N} \in \mathbb {C}^\mathbf{N (n) \times \mathbf{N} (n)}\). Then, the tensor \(\mathcal {Y} = \mathcal {A}_{\mathcal {M},\mathcal {N}}^{\dagger }\in \mathcal {A}\{1,2\} \) is called the weighted Moore–Penrose inverse of \(\mathcal {A}\) if it satisfies:

$$\begin{aligned} (\mathcal {M}*_m \mathcal {A}*_n\mathcal {Y})^* = \mathcal {M}*_m\mathcal {A}*_n\mathcal {Y},\quad (\mathcal {N}*_n \mathcal {Y}*_m\mathcal {A})^* =\mathcal {N}*_n \mathcal {Y}*_m\mathcal {A}. \end{aligned}$$
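In the matricized setting, Definition 2.2 can be checked numerically through the standard identity \(A^\dagger _{M,N}=N^{-1/2}(M^{1/2}AN^{-1/2})^{\dagger }M^{1/2}\) for matrices. The following sketch is our own illustration under that identity; all names here are ours, not from the cited works:

```python
import numpy as np

def hpd_sqrt(M):
    """Square root of a (real) symmetric positive-definite matrix via eigh."""
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(w)) @ V.T

def weighted_pinv(A, M, N):
    """Weighted Moore-Penrose inverse A^dagger_{M,N} (matrix case),
    via A^dagger_{M,N} = N^{-1/2} pinv(M^{1/2} A N^{-1/2}) M^{1/2}."""
    Ms, Ns = hpd_sqrt(M), hpd_sqrt(N)
    Nsi = np.linalg.inv(Ns)
    return Nsi @ np.linalg.pinv(Ms @ A @ Nsi) @ Ms

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((4, 4)); M = B @ B.T + np.eye(4)  # HPD weight
C = rng.standard_normal((3, 3)); N = C @ C.T + np.eye(3)  # HPD weight
X = weighted_pinv(A, M, N)
```

The tensor case follows by reshaping, exactly as for the unweighted Moore–Penrose inverse.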

Definition 2.3

(Ji and Wei 2018) Let \(\mathcal {A}\in \mathbb {C}^\mathbf{N (n) \times \mathbf{N} (n)}\) with \(\mathrm {ind}({\mathcal {A}})=k.\) A tensor \(\mathcal {Y}\in \mathbb {C}^\mathbf{N (n) \times \mathbf{N} (n)}\) is called the Drazin inverse of \(\mathcal {A}\) if it satisfies:

$$\begin{aligned} (1^k) \ \mathcal {A}^{k+1}{*_n}\mathcal {Y}=\mathcal {A}^k, \quad (2) \ \mathcal {Y}{*_n}\mathcal {A}{*_n}\mathcal {Y}=\mathcal {Y},\quad (5) \ \mathcal {A}{*_n}\mathcal {Y}=\mathcal {Y}{*_n}\mathcal {A}. \end{aligned}$$

When \(k=1\), \(\mathcal {Y}\) becomes the group inverse of \(\mathcal {A}\). The group and Drazin inverses of \(\mathcal {A}\) are denoted by \(\mathcal {A}^{\#}\) and \(\mathcal {A}^{\mathrm D}\), respectively.
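In the matricized setting, Definition 2.3 can be realized through the known representation \(A^{\mathrm D}=A^k(A^{2k+1})^{\dagger }A^k\), valid for any \(k\ge \mathrm {ind}(A)\). The sketch below is our own illustration of this formula, not an algorithm from the cited works:

```python
import numpy as np

def drazin(A, k):
    """Drazin inverse of a square matrix A with ind(A) <= k,
    via the representation A^D = A^k pinv(A^(2k+1)) A^k."""
    Ak = np.linalg.matrix_power(A, k)
    return Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak

A = np.array([[1.0, 1.0],
              [0.0, 0.0]])  # idempotent (A @ A == A), so ind(A) = 1
X = drazin(A, 1)            # the group inverse A^#; for an idempotent, A^# = A
```

For \(k=1\) this yields the group inverse; for an idempotent matrix the result is the matrix itself.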

Some preliminary results about the tensor range and null space can be collected from (Behera and Sahoo 2020; Ji and Wei 2018; Stanimirović et al. 2018).

Theorem 2.4

(Ji and Wei 2018, Theorem 3.2 (2)) Let \(\mathcal {A}\in \mathbb {C}^\mathbf{N (n)\times \mathbf{N} (n)}\). Then, for any positive integer p, \(\mathscr {N}(\mathcal {A}^p)=\mathscr {N}(\mathcal {A}^{p+1})\) is equivalent to:

$$\begin{aligned} \mathscr {R}(\mathcal {A}^p)\oplus \mathscr {N}(\mathcal {A}^{p})=\mathbb {C}^\mathbf{N (n)}, \end{aligned}$$
(2.1)

and \(\mathrm {ind}({\mathcal {A}})\) of a non-invertible tensor \(\mathcal {A}\) is the smallest positive integer p, such that (2.1) holds.

Lemma 2.5

(Stanimirović et al. 2018, Lemma 2.2) Let \(\mathcal {S}\in \mathbb {C}^\mathbf{K (k)\times \mathbf{M} (m)}\) and \(\mathcal {T}\in \mathbb {C}^\mathbf{K (k)\times \mathbf{N} (n)}.\) Then, \(\mathscr {R}(\mathcal {T})\subseteq \mathscr {R}(\mathcal {S})\) if and only if \(\mathcal {T}=\mathcal {S}{*_m}\mathcal {Z}\) for some tensor \(\mathcal {Z}\in \mathbb {C}^\mathbf{M (m)\times \mathbf{N} (n)}\).

The following results were discussed in the framework of Boolean tensors in Behera and Sahoo (2020). However, these two results hold for arbitrary-order tensors, and the proofs follow similarly.

Theorem 2.6

(Behera and Sahoo 2020, Theorem 3.3) Let \(\mathcal {A}\in \mathbb {C}^\mathbf{M (m) \times \mathbf{N} (n) }\), \(\mathcal {B}\in \mathbb {C}^\mathbf{N (n) \times \mathbf{S} (s) } \), \(\mathcal {C}\in \mathbb {C}^\mathbf{S (s) \times \mathbf{L} (l) }\) and \(\mathcal {D}\in \mathbb {C}^\mathbf{S (s) \times \mathbf{L} (l) }\) be tensors with \(\mathcal {A}{*_n}\mathcal {B}{*_s}\mathcal {C}=\mathcal {A}{*_n}\mathcal {B}{*_s}\mathcal {D}.\) If \({\mathscr {R}}(\mathcal {B}^{\mathrm T})={\mathscr {R}}(\mathcal {B}^T{*_n}\mathcal {A}^{\mathrm T}),\) then \(\mathcal {B}{*_s}\mathcal {C}=\mathcal {B}{*_s}\mathcal {D}.\)

Corollary 2.7

(Behera and Sahoo 2020, Corollary 3.2) Let \(\mathcal {A}\in \mathbb {C}^\mathbf{M (m) \times \mathbf{N} (n) }\), \(\mathcal {B}\in \mathbb {C}^\mathbf{N (n) \times \mathbf{L} (l) } \), \(\mathcal {C}\in \mathbb {C}^\mathbf{S (s) \times \mathbf{M} (m) }\), and \(\mathcal {D}\in \mathbb {C}^\mathbf{S (s) \times \mathbf{M} (m) }\) be tensors with \(\mathcal {C}{*_m}\mathcal {A}{*_n}\mathcal {B}=\mathcal {D}{*_m}\mathcal {A}{*_n}\mathcal {B}.\) If \({\mathscr {R}}(\mathcal {A})={\mathscr {R}}(\mathcal {A}{*_n}\mathcal {B}),\) then \(\mathcal {C}{*_m}\mathcal {A}=\mathcal {D}{*_m}\mathcal {A}\).

2.1 Tensor decomposition and reshape rank

The reshape operation restructures the elements of a tensor into a matrix and the elements of a matrix into a tensor (Stanimirović et al. 2018). It is denoted by rsh and defined as follows.

Definition 2.8

(Definition 3.1, Stanimirović et al. 2018): Let \(\mathfrak {M}=\prod _{i=1}^m M_i\) and \(\mathfrak {N}=\prod _{j=1}^n N_j\). For \( \mathcal {A} \in \mathbb {C}^\mathbf{M (m) \times \mathbf{N} (n)}\) and \(A \in \mathbb {C}^{\mathfrak {M} \times \mathfrak {N}}\), there exists a unique bijective map, called the reshape map, denoted by \(\text{ rsh }:\mathbb {C}^\mathbf{M (m) \times \mathbf{N} (n)} \longrightarrow \mathbb {C}^{\mathfrak {M} \times \mathfrak {N}}\) and defined as:

$$\begin{aligned} \text{ rsh }(\mathcal {A}) = A = \text{ reshape }(\mathcal {A},\mathfrak {M}, \mathfrak {N})\in \mathbb {C}^{\mathfrak {M} \times \mathfrak {N}}. \end{aligned}$$
(2.2)

Moreover, the inverse reshape operation is defined by the following bijective map:

$$\begin{aligned} \text{ rsh }^{-1}(A) = \mathcal {A} = \text{ reshape }(A,\mathbf{M} (m), \mathbf{N} (n))\in \mathbb {C}^\mathbf{M (m) \times \mathbf{N} (n)}. \end{aligned}$$
(2.3)
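In numpy, the maps (2.2) and (2.3) are exactly `reshape` between the tensor shape \(\mathbf{M}(m)\times \mathbf{N}(n)\) and the matrix shape \(\mathfrak {M}\times \mathfrak {N}\). A brief round-trip check (our own illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
A_tensor = rng.standard_normal((2, 3, 4, 5))  # M(2) = 2x3, N(2) = 4x5
frakM, frakN = 2 * 3, 4 * 5

A_mat = A_tensor.reshape(frakM, frakN)        # rsh(A), as in (2.2)
A_back = A_mat.reshape(2, 3, 4, 5)            # rsh^{-1}(A), as in (2.3)
```

Since both directions use the same (row-major) ordering of entries, the round trip is the identity.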

We now state the following result from the above definition.

Lemma 2.9

Let \(\mathcal {A}\in \mathbb {C}^\mathbf{M (m)\times \mathbf{N} (n)}\) and let \(\mathfrak {{M}}=\prod _{i=1}^{m}M_i,~\mathfrak {N}=\prod _{j=1}^{n}N_j\). If \(A=\text{ rsh }(\mathcal {A})\in \mathbb C^{\mathfrak {M}\times \mathfrak {N}}\), then:

$$\begin{aligned} \mathscr {R}(\mathcal {A})=\{\text{ rsh }^{-1}(Y):~Y\in \mathscr {R}(A)\} \text{ and } \mathscr {N}(\mathcal {A})=\{\text{ rsh }^{-1}(Y):~Y\in \mathscr {N}(A)\}. \end{aligned}$$

In connection with the reshape of a tensor and the range and null spaces of a tensor, we define the notions of reshape range and reshape null space.

Definition 2.10

Let \(\mathcal {A}\in \mathbb {C}^\mathbf{M (m)\times \mathbf{N} (n)}\), \(\mathfrak {{M}}=\prod _{i=1}^{m}M_i,~\mathfrak {N}=\prod _{j=1}^{n}N_j\), and \(A=\text{ rsh }(\mathcal {A})\). The reshape range and reshape null space of \(\mathcal {A}\) are denoted by \(\text{ RSH }(\mathscr {R}(\mathcal {A}))\) and \(\text{ RSH }(\mathscr {N}(\mathcal {A}))\), respectively, and defined as:

$$\begin{aligned} \text{ RSH }(\mathscr {R}(\mathcal {A}))=\{\text{ rsh }(\mathcal {Y}):~\mathcal {Y}\in \mathscr {R}(\mathcal {A})\} \text{ and } \text{ RSH }(\mathscr {N}(\mathcal {A}))=\{\text{ rsh }(\mathcal {Y}):~\mathcal {Y}\in \mathscr {N}(\mathcal {A})\}. \end{aligned}$$
(2.4)

Definition 2.11

Let \(\mathcal {A}\in \mathbb {C}^\mathbf{M (m)\times \mathbf{N} (n)}\), \(\mathfrak {{M}}=\prod _{i=1}^{m}M_i,~\mathfrak {N}=\prod _{j=1}^{n}N_j\), and \(A=\text{ rsh }(\mathcal {A})\). The reshape dimension is denoted by rshdim, and defined as:

$$\begin{aligned} \text{ rshdim }(\mathscr {R}(\mathcal {A}))=\text{ dim }(\text{ RSH }(\mathscr {R}(\mathcal {A}))) \text{ and } \text{ rshdim }(\mathscr {N}(\mathcal {A}))=\text{ dim }(\text{ RSH }(\mathscr {N}(\mathcal {A}))). \end{aligned}$$

Now, the reshape dimension of an arbitrary subspace \(\Omega \subseteq \mathbb {C}^\mathbf{M (m)}\) can be defined as \(\text{ rshdim }(\Omega )=\text{ dim }(\text{ RSH }(\Omega ))\), where \(\text{ RSH }(\Omega )=\{\text{ rsh }(\mathcal {V}):~\mathcal {V}\in \Omega \}\).

The following results from (Behera et al. 2018; Liang and Zheng 2019; Stanimirović et al. 2018) will be useful to prove the main results.

Theorem 2.12

(Stanimirović et al. 2018, Theorem 3.1) Consider \(\mathcal {A}\in \mathbb {C}^\mathbf{M (m)\times \mathbf{N} (n)}\) and \(\mathcal {B}\in \mathbb {C}^\mathbf{N (n)\times \mathbf{L} (l)}\). Then:

  1. (a)

    \(\mathscr {R}(\mathcal {A}{*_n}\mathcal {B})=\mathscr {R}(\mathcal {A}) \Longleftrightarrow \text{ rshrank }(\mathcal {A}{*_n}\mathcal {B})=\text{ rshrank }(\mathcal {A})\);

  2. (b)

    \(\mathscr {N}(\mathcal {A}{*_n}\mathcal {B})=\mathscr {N}(\mathcal {B}) \Longleftrightarrow \text{ rshrank }(\mathcal {A}{*_n}\mathcal {B})=\text{ rshrank }(\mathcal {B})\),

where \(\mathrm {rshrank}(\cdot )\) denotes the reshaping rank of a tensor, as defined in Stanimirović et al. (2018).

Lemma 2.13

(Stanimirović et al. 2018) Let \(\mathcal {A} \in \mathbb {C}^\mathbf{M (m)\times \mathbf{N} (n)}\) and \(\mathcal {B} \in \mathbb {C}^\mathbf{N (n)\times \mathbf{L} (l)}\) be given tensors, and let \(\mathfrak {M}=\prod _{i=1}^m M_i\), \(\mathfrak {N}=\prod _{j=1}^n N_j\), and \(\mathfrak {L}=\prod _{k=1}^l L_k\). Then:

$$\begin{aligned} \mathrm {rsh}\left( \mathcal {A}*_{n}\mathcal {B}\right) =\mathrm {rsh}\left( \mathcal {A}\right) \mathrm {rsh}\left( \mathcal {B}\right) =AB, \end{aligned}$$
(2.5)

where \(A=\mathrm {rsh}\left( \mathcal {A}\right) \in \mathbb C^{\mathfrak {M}\times \mathfrak {N}},B=\mathrm {rsh}\left( \mathcal {B}\right) \in \mathbb C^{\mathfrak {N}\times \mathfrak {L}}\).
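The homomorphism property (2.5) is easy to confirm numerically: matricization turns the Einstein product into ordinary matrix multiplication. A short check (our own illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((2, 3, 4, 5))            # M(2) x N(2)
B = rng.standard_normal((4, 5, 6, 7))            # N(2) x L(2)

lhs = np.tensordot(A, B, axes=2).reshape(6, 42)  # rsh(A *_2 B)
rhs = A.reshape(6, 20) @ B.reshape(20, 42)       # rsh(A) rsh(B)
```

Both sides agree because `reshape` flattens the contracted pair of modes in the same (row-major) order on each factor.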

Theorem 2.14

(Liang and Zheng 2019, Theorem 3.6.) For \( \mathcal {A} \in \mathbb {C}^\mathbf{M (m) \times \mathbf{N} (n)} \), there is a left invertible tensor \( \mathcal {F}\in \mathbb {C}^\mathbf{M (m) \times \mathbf{R} (r)}\) and a right invertible tensor \( \mathcal {G}\in \mathbb {C}^\mathbf{R (r) \times \mathbf{N} (n)}\) which satisfy:

$$\begin{aligned} \text{ rshrank }(\mathcal {F}) =\text{ rshrank }(\mathcal {G})= \text{ rshrank }(\mathcal {A}) = \prod _{i=1}^r R_i, \text{ and } \mathcal {A} = \mathcal {F} {*_r}\mathcal {G}. \end{aligned}$$
(2.6)

The decomposition (2.6) is called a full rank decomposition (or full rank factorization) of the tensor \(\mathcal {A} \).
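Through the reshape map, a full-rank decomposition of a tensor reduces to a rank factorization of the matrix \(\text{ rsh }(\mathcal {A})\). The sketch below obtains one such factorization from a truncated SVD; it is our own construction, not the algorithm of Liang and Zheng (2019):

```python
import numpy as np

def full_rank_factors(Amat, tol=1e-10):
    """Rank factorization Amat = F @ G with F of full column rank
    and G of full row rank (a matricized full-rank decomposition)."""
    U, s, Vt = np.linalg.svd(Amat, full_matrices=False)
    r = int(np.sum(s > tol))   # numerical rank, i.e. the rshrank
    F = U[:, :r] * s[:r]       # full column rank factor
    G = Vt[:r, :]              # full row rank factor
    return F, G

rng = np.random.default_rng(5)
Amat = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 8))  # rank 2
F, G = full_rank_factors(Amat)
```

Reshaping `F` and `G` back yields the tensors \(\mathcal {F}\) and \(\mathcal {G}\) of (2.6).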

Theorem 2.15

(Behera et al. 2019, Theorem 3.24) Let \(\mathcal {F}\in \mathbb {C}^\mathbf{N (n) \times \mathbf{R} (r)}\) and \(\mathcal {G}\in \mathbb {C}^\mathbf{R (r) \times \mathbf{N} (n)}\) be two tensors, such that \(\mathcal {A} = \mathcal {F}{*_r}\mathcal {G}\) is a full-rank decomposition of the tensor \(\mathcal {A}\in \mathbb {C}^\mathbf{N (n) \times \mathbf{N} (n) }\). Then, the following statements are equivalent:

  1. (a)

    \(\mathcal {A}\) is group invertible;

  2. (b)

    \(\mathcal {G}{*_n}\mathcal {F}\) is invertible.

3 Main results

3.1 Further results on reshape operations

In view of Lemma 2.9 and Eq. (2.4), we have the following result.

Lemma 3.1

Let \(\mathcal {A}\in \mathbb {C}^\mathbf{M (m)\times \mathbf{N} (n)}\), \(\mathfrak {{M}}=\prod _{i=1}^{m}M_i,~\mathfrak {N}=\prod _{j=1}^{n}N_j\), and \(A=\text{ rsh }(\mathcal {A})\). Then:

$$\begin{aligned} \text{ RSH }(\mathscr {R}(\mathcal {A}))= & {} \mathscr {R}(A)=\mathscr {R}\left( \mathrm {rsh}(\mathcal {A})\right) \ \text{ and } \\ \text{ RSH }(\mathscr {N}(\mathcal {A}))= & {} \mathscr {N}(A)=\mathscr {N}\left( \mathrm {rsh}(\mathcal {A})\right) . \end{aligned}$$

Proof

$$\begin{aligned} \text{ RSH }(\mathscr {R}(\mathcal {A}))= & {} \{\text{ rsh }(\mathcal {Y}):~\mathcal {Y}\in \mathscr {R}(\mathcal {A})\}\\= & {} \{y:~\mathcal {Y}=\text{ rsh }^{-1}(z),~z\in \mathscr {R}(A)\}\\= & {} \{y:~y=z,~z\in \mathscr {R}(A)\}\\= & {} \{y:~y\in \mathscr {R}(A)\}=\mathscr {R}(A). \end{aligned}$$

The second part can be proved similarly. \(\square \)

Consider two tensor spaces \(\Omega _1\) and \(\Omega _2\), such that \(\Omega _1 \subseteq \Omega _2\). By the definition of the reshape operation, we obtain \(\text{ RSH }(\Omega _1)\subseteq \text{ RSH }(\Omega _2)\). Furthermore, if \(\text{ rshdim }(\Omega _1)=\text{ rshdim }(\Omega _2)\), that is, \(\text{ dim }(\text{ RSH }(\Omega _1))=\text{ dim }(\text{ RSH }(\Omega _2))\), then \(\text{ RSH }(\Omega _1)=\text{ RSH }(\Omega _2)\), and, therefore, \(\Omega _1=\Omega _2\). This is stated in the next result.

Theorem 3.2

Let \(\Omega _1\) and \(\Omega _2\) be two subspaces of a tensor space \(\mathbb {C}^\mathbf{M (m)}\). If \(\Omega _1 \subseteq \Omega _2\) and \(\text{ rshdim }(\Omega _1)=\text{ rshdim }(\Omega _2)\), then \(\Omega _1=\Omega _2\).

Using Definition 2.11 and Lemma 3.1, one can obtain the following tensor generalization of the rank–nullity theorem.

Theorem 3.3

An arbitrary tensor \(\mathcal {A}\in \mathbb {C}^\mathbf{M (m)\times \mathbf{N} (n)}\) satisfies \(\text{ rshdim }(\mathscr {R}(\mathcal {A}))+\text{ rshdim }(\mathscr {N}(\mathcal {A}))=\prod _{j=1}^{n}N_j.\)
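Through Lemma 3.1, Theorem 3.3 is simply the matrix rank–nullity theorem applied to \(\text{ rsh }(\mathcal {A})\). A quick numerical illustration (our own):

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((2, 3, 4, 5))  # M(2) = 2x3, N(2) = 4x5
Amat = A.reshape(6, 20)                # rsh(A): frakM x frakN

rank = np.linalg.matrix_rank(Amat)     # rshdim of the range
nullity = 20 - rank                    # rshdim of the null space
```

The two reshape dimensions add up to \(\mathfrak {N}=\prod _j N_j=20\), as claimed.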

Lemma 3.4

Let \(\mathcal {A}\in \mathbb {C}^\mathbf{M (m)\times \mathbf{N} (n)}\) and \(\text{ rshrank }(\mathcal {A})=\prod _{i=1}^{r}R_i.\) Let \(\Omega _1\) be a subspace of \(\mathbb {C}^\mathbf{N (n)}\) with \(\text{ rshdim }(\Omega _1)=\prod _{i=1}^{s}S_i\le \prod _{i=1}^{r}R_i\), and let \(\Omega _2\) be a subspace of \(\mathbb {C}^\mathbf{M (m)}\) with \(\text{ rshdim }(\Omega _2)=\prod _{i=1}^{m}M_i-\prod _{i=1}^{s}S_i\). If \(\mathcal {A}\) has a \(\{2\}\)-inverse \(\mathcal {X}\), such that \(\mathscr {R}(\mathcal {X})=\Omega _1\) and \(\mathscr {N}(\mathcal {X})=\Omega _2\), then \(\mathcal {A}{*_n}\Omega _1\oplus \Omega _2=\mathbb {C}^\mathbf{M (m)}\).

Proof

Since \(\mathcal {X}\) is a \(\{2\}\)-inverse of \(\mathcal {A}\), \((\mathcal {A}{*_n}\mathcal {X})^2=\mathcal {A}{*_n}\mathcal {X}{*_m}\mathcal {A}{*_n}\mathcal {X}=\mathcal {A}{*_n}\mathcal {X}\). Thus, \(\mathcal {A}{*_n}\mathcal {X}\) is idempotent. Hence, \({\mathscr {R}}(\mathcal {A}{*_n}\mathcal {X})\) and \({\mathscr {N}}(\mathcal {A}{*_n}\mathcal {X})\) are complementary. Furthermore, \(\mathcal {A}{*_n}\mathcal {X}\) is the projection on \({\mathscr {R}}(\mathcal {A}{*_n}\mathcal {X})\) along \({\mathscr {N}}(\mathcal {A}{*_n}\mathcal {X})\). As \(\mathcal {X}\) is an outer inverse of \(\mathcal {A}\), it is easy to verify \({\mathscr {N}}(\mathcal {X})={\mathscr {N}}(\mathcal {A}{*_n}\mathcal {X})\). Therefore, \(\Omega _2={\mathscr {N}}(\mathcal {A}{*_n}\mathcal {X})\). The result follows from the fact that \(\mathcal {A}{*_n}\Omega _1=\mathcal {A}{*_n}{\mathscr {R}}(\mathcal {X})={\mathscr {R}}(\mathcal {A}{*_n}\mathcal {X})\). \(\square \)

Next, we consider characterizations of tensor generalized inverses with prescribed range and null space.

Theorem 3.5

Let \(\mathcal {A} \in \mathbb {C}^\mathbf{M (m) \times \mathbf{N} (n) }, \mathcal {U} \in \mathbb {C}^\mathbf{N (n) \times \mathbf{S} (s) }, \mathcal {V} \in \mathbb {C}^\mathbf{L (l) \times \mathbf{M} (m) }\), and \(\mathrm {rshrank}(\mathcal {A})=\prod _{i=1}^r R_i\). Define:

$$\begin{aligned} \mathcal {X} = \mathcal {U}{*_s}(\mathcal {V}{*_m}\mathcal {A}{*_n}\mathcal {U})^{(1)} {*_l}\mathcal {V}, \end{aligned}$$

where \((\mathcal {V}{*_m}\mathcal {A}{*_n}\mathcal {U})^{(1)}\) is a \(\{1\}\)-inverse of \(\mathcal {V}{*_m}\mathcal {A}{*_n}\mathcal {U}\). Then, the following statements hold:

  1. (a)

    \(\mathcal {X} \in \mathcal {A}\{1\} \Longleftrightarrow \text{ rshrank }(\mathcal {V}{*_m}\mathcal {A}{*_n}\mathcal {U}) = \prod _{i=1}^r R_i\).

  2. (b)

    \(\mathcal {X} \in \mathcal {A}\{2\}\) and \({\mathscr {R}}(\mathcal {X}) = {\mathscr {R}}(\mathcal {U}) \Longleftrightarrow \text{ rshrank }(\mathcal {V}{*_m}\mathcal {A}{*_n}\mathcal {U})=\text{ rshrank }(\mathcal {U})\).

  3. (c)

\(\mathcal {X} \in \mathcal {A}\{2\}\) and \({\mathscr {N}}(\mathcal {X}) = {\mathscr {N}}(\mathcal {V}) \Longleftrightarrow \text{ rshrank }(\mathcal {V}{*_m}\mathcal {A}{*_n}\mathcal {U})=\text{ rshrank }(\mathcal {V})\).

  4. (d)

\(\mathcal {X} \in \mathcal {A}_{{\mathscr {R}}(\mathcal {U}),{\mathscr {N}}(\mathcal {V})}^{(1,2)}\) if and only if \(\text{ rshrank }(\mathcal {V}{*_m}\mathcal {A}{*_n}\mathcal {U})=\text{ rshrank }(\mathcal {U})=\text{ rshrank }(\mathcal {V})=\prod _{i=1}^r R_i\).

Proof

(a) Let \(\text{ rshrank }(\mathcal {V}{*_m}\mathcal {A}{*_n}\mathcal {U}) = \prod _{i=1}^r R_i\). Since:

$$\begin{aligned} \prod _{i=1}^r R_i = \text{ rshrank }(\mathcal {V}{*_m}\mathcal {A}{*_n}\mathcal {U}) \le \text{ rshrank }(\mathcal {A}{*_n}\mathcal {U}) \le \text{ rshrank }(\mathcal {A}) = \prod _{i=1}^r R_i, \end{aligned}$$

we get \(\text{ rshrank }(\mathcal {A}{*_n}\mathcal {U}) = \text{ rshrank }(\mathcal {A})\). Using Theorem 2.12, we have \({\mathscr {R}}(\mathcal {A}{*_n}\mathcal {U}) = {\mathscr {R}}(\mathcal {A})\), which further implies \(\mathcal {A} = \mathcal {A}{*_n}\mathcal {U}{*_s}\mathcal {Y}\) for appropriate \(\mathcal {Y}\). Now:

$$\begin{aligned} \mathcal {V}{*_m}\mathcal {A}{*_n}\mathcal {X}{*_m}\mathcal {A} = \mathcal {V}{*_m}\mathcal {A}{*_n}\mathcal {U}{*_s}(\mathcal {V}{*_m}\mathcal {A}{*_n}\mathcal {U})^{(1)}{*_l}\mathcal {V}{*_m}\mathcal {A}{*_n}\mathcal {U}{*_s}\mathcal {Y} = \mathcal {V}{*_m}\mathcal {A}{*_n}\mathcal {U}{*_s}\mathcal {Y} = \mathcal {V}{*_m}\mathcal {A}. \end{aligned}$$

We can easily show \(\text{ rshrank }(\mathcal {A}^T)=\text{ rshrank }(\mathcal {A}^T{*_m}\mathcal {V}^T)\). Again by Theorem 2.12, we obtain \({\mathscr {R}}(\mathcal {A}^T{*_m}\mathcal {V}^T) = {\mathscr {R}}(\mathcal {A}^T)\). Therefore, by Theorem 2.6, \(\mathcal {A}{*_n}\mathcal {X}{*_m}\mathcal {A} =\mathcal {A}\). For the converse, let \(\mathcal {X} \in \mathcal {A}\{1\}\). Since:

$$\begin{aligned} \mathcal {A} = \mathcal {A}{*_n}\mathcal {X}{*_m}\mathcal {A}{*_n}\mathcal {X}{*_m}\mathcal {A} = \mathcal {A}{*_n}{\mathcal {X}}{*_m}\mathcal {A}{*_n}\mathcal {U}{*_s}(\mathcal {V}{*_m}\mathcal {A}{*_n}\mathcal {U})^{(1)}{*_l}\mathcal {V}{*_m}\mathcal {A}, \end{aligned}$$

we have \(\text{ rshrank }(\mathcal {V}{*_m}\mathcal {A}{*_n}\mathcal {U}) = \text{ rshrank }(\mathcal {A}) = \prod _{i=1}^r R_i\).

(b) Consider \(\text{ rshrank }(\mathcal {V}{*_m}\mathcal {A}{*_n}\mathcal {U})=\text{ rshrank }(\mathcal {U})\). Since \({\mathscr {R}}(\mathcal {X})\subseteq {\mathscr {R}}(\mathcal {U})\) and:

$$\begin{aligned} \text{ rshrank }(\mathcal {X})= & {} \text{ rshrank }(\mathcal {U}{*_s}(\mathcal {V}{*_m}\mathcal {A}{*_n}\mathcal {U})^{(1)}{*_l}\mathcal {V}) \\\ge & {} \text{ rshrank }(\mathcal {V}{*_m}\mathcal {A}{*_n}\mathcal {U}{*_s}(\mathcal {V}{*_m}\mathcal {A}{*_n}\mathcal {U})^{(1)}{*_l}\mathcal {V}{*_m}\mathcal {A}{*_n}\mathcal {U})\\= & {} \text{ rshrank }(\mathcal {V}{*_m}\mathcal {A}{*_n}\mathcal {U})=\text{ rshrank }(\mathcal {U}). \end{aligned}$$

Therefore, \(\mathscr {R}(\mathcal {X})=\mathscr {R}(\mathcal {U})\). Now:

$$\begin{aligned} \mathcal {X}{*_m}\mathcal {A}{*_n}\mathcal {X}{*_m}\mathcal {A}{*_n}\mathcal {U}= & {} \mathcal {U}{*_s}(\mathcal {V}{*_m}\mathcal {A}{*_n}\mathcal {U})^{(1)}{*_l}\mathcal {V}{*_m}\mathcal {A}{*_n}\mathcal {U} {*_s}(\mathcal {V}{*_m}\mathcal {A}{*_n}\mathcal {U})^{(1)}{*_l}\mathcal {V}{*_m}\mathcal {A}{*_n}\mathcal {U}\\= & {} \mathcal {U}{*_s}(\mathcal {V}{*_m}\mathcal {A}{*_n}\mathcal {U})^{(1)}{*_l}\mathcal {V}{*_m}\mathcal {A}{*_n}\mathcal {U} =\mathcal {X}{*_m}\mathcal {A}{*_n}\mathcal {U}. \end{aligned}$$

Starting from \(\mathscr {R}(\mathcal {X}{*_m}\mathcal {A}{*_n}\mathcal {U})\subseteq \mathscr {R}(\mathcal {X})\), the following can be concluded:

$$\begin{aligned} \text{ rshrank }(\mathcal {X}{*_m}\mathcal {A}{*_n}\mathcal {U})= & {} \text{ rshrank }(\mathcal {U}{*_s}(\mathcal {V}{*_m}\mathcal {A}{*_n}\mathcal {U})^{(1)}{*_l}\mathcal {V}{*_m}\mathcal {A}{*_n}\mathcal {U}) \\\ge & {} \text{ rshrank }(\mathcal {V}{*_m}\mathcal {A}{*_n}\mathcal {U})=\text{ rshrank }(\mathcal {U})=\text{ rshrank }(\mathcal {X}). \end{aligned}$$

Thus, by Corollary 2.7, we have \( \mathcal {X}{*_m}\mathcal {A}{*_n}\mathcal {X}= \mathcal {X}\). Conversely, let \(\mathcal {X} \in \mathcal {A}\{2\}\) with \({\mathscr {R}}(\mathcal {X})={\mathscr {R}}(\mathcal {U})\), so that \(\text{ rshrank }(\mathcal {U}) = \text{ rshrank }(\mathcal {X})\). Since:

$$\begin{aligned} \mathcal {X} =\mathcal {X}{*_m}\mathcal {A}{*_n}\mathcal {X} = \mathcal {U}{*_s}(\mathcal {V}{*_m}\mathcal {A}{*_n}\mathcal {U})^{(1)}{*_l}\mathcal {V}{*_m}\mathcal {A}{*_n}\mathcal {U} {*_s}(\mathcal {V}{*_m}\mathcal {A}{*_n}\mathcal {U})^{(1)}{*_l}\mathcal {V}, \end{aligned}$$

one can conclude

$$\begin{aligned} \text{ rshrank }(\mathcal {X}) \le \text{ rshrank }(\mathcal {V}{*_m}\mathcal {A}{*_n}\mathcal {U}) \le \text{ rshrank }(\mathcal {U}) = \text{ rshrank }(\mathcal {X}). \end{aligned}$$

This completes this part of the proof.

(c) Part (c) follows using similar principles.

(d) It follows from (a), (b), and (c). \(\square \)
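Through the reshape map, Theorem 3.5 reduces to the classical matrix setting. The following sketch (our own illustration, with the Moore–Penrose inverse taken as a concrete \(\{1\}\)-inverse) builds \(X=U(VAU)^{(1)}V\) and checks the outer-inverse properties:

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((6, 5))        # rsh of A
U = rng.standard_normal((5, 3))        # rsh of U (full column rank)
V = rng.standard_normal((3, 6))        # rsh of V (full row rank)

# X = U (V A U)^(1) V with pinv as the chosen {1}-inverse
X = U @ np.linalg.pinv(V @ A @ U) @ V
```

When \(\text{ rank }(VAU)=\text{ rank }(U)=\text{ rank }(V)\), as holds generically here, `X` is the outer inverse with range \(\mathscr {R}(U)\) and null space \(\mathscr {N}(V)\).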

Lemma 3.6

Let \(\mathcal {A}\in \mathbb {C}^\mathbf{M (m)\times \mathbf{N} (n)}\) and \(\text{ rshrank }(\mathcal {A})=\prod _{i=1}^r R_i\). Assume the existence of a tensor \(\mathcal {G}\in \mathbb {C}^\mathbf{N (n)\times \mathbf{M} (m)} \) satisfying:

$$\begin{aligned} \text{ rshdim }(\mathscr {R}(\mathcal {G}))=\prod _{i=1}^s S_i\le \prod _{i=1}^r R_i,\ \ \text{ rshdim }(\mathscr {N}(\mathcal {G}))=\prod _{i=1}^m M_i-\prod _{i=1}^s S_i. \end{aligned}$$

If \(\mathcal {A}\) has a \(\{2\}\)-inverse \(\mathcal {A}_{\mathscr {R}(\mathcal {G}),\mathscr {N}(\mathcal {G})}^{(2)}\), then:

  1. (a)

    \(\text{ ind }(\mathcal {A}{*_n}\mathcal {G})=\text{ ind }(\mathcal {G}{*_m}\mathcal {A})=1\);

  2. (b)

    \(\mathcal {A}_{\mathscr {R}(\mathcal {G}),\mathscr {N}(\mathcal {G})}^{(2)}=\mathcal {G}{*_m}(\mathcal {A}{*_n}\mathcal {G})^{\#}=(\mathcal {G}{*_m}\mathcal {A})^{\#}{*_n}\mathcal {G}\).

Proof

It is not difficult to show \(\mathscr {R}(\mathcal {A}{*_n}\mathcal {G})=\mathcal {A}{*_n}\mathscr {R}(\mathcal {G})\) and \(\mathscr {N}(\mathcal {G})\subseteq \mathscr {N}(\mathcal {A}{*_n}\mathcal {G})\). Thus by Lemma 3.4, we conclude:

$$\begin{aligned} \text{ rshdim }\mathscr {R}(\mathcal {A}{*_n}\mathcal {G})= & {} \text{ rshdim }(\mathcal {A}{*_n}\mathscr {R}(\mathcal {G}))\\= & {} \prod _{i=1}^m M_i-\left( \prod _{i=1}^m M_i-\prod _{i=1}^s S_i\right) =\prod _{i=1}^s S_i. \end{aligned}$$

Now, \(\text{ rshdim }\,\mathscr {N}(\mathcal {A}{*_n}\mathcal {G})=\prod _{i=1}^m M_i-\text{ rshdim }\,\mathscr {R}(\mathcal {A}{*_n}\mathcal {G})=\prod _{i=1}^m M_i-\prod _{i=1}^s S_i=\text{ rshdim }\,(\mathscr {N}(\mathcal {G})).\) Thus, \(\mathscr {N}(\mathcal {A}{*_n}\mathcal {G})=\mathscr {N}(\mathcal {G})\), and hence, \(\mathscr {R}(\mathcal {A}{*_n}\mathcal {G})\oplus \mathscr {N}(\mathcal {A}{*_n}\mathcal {G})=\mathcal {A}{*_n}\mathscr {R}(\mathcal {G})\oplus \mathscr {N}(\mathcal {G})=\mathbb {C}^\mathbf{M (m)}\). Therefore, by Theorem 2.4, we obtain \(\text{ ind }(\mathcal {A}{*_n}\mathcal {G})=1\). To find the outer inverse, let us define \(\mathcal {X}:=\mathcal {G}{*_m}(\mathcal {A}{*_n}\mathcal {G})^{\#}.\) Now:

$$\begin{aligned} \mathcal {X}{*_m}\mathcal {A}{*_n}\mathcal {X} =\mathcal {G}{*_m}(\mathcal {A}{*_n}\mathcal {G})^{\#}{*_m}(\mathcal {A}{*_n}\mathcal {G}){*_m}(\mathcal {A}{*_n}\mathcal {G})^{\#}=\mathcal {G}{*_m}(\mathcal {A}{*_n}\mathcal {G})^{\#}=\mathcal {X}, \end{aligned}$$

\(\mathscr {R}(\mathcal {X})=\mathscr {R}(\mathcal {G}{*_m}(\mathcal {A}{*_n}\mathcal {G})^{\#}) \subseteq \mathscr {R}(\mathcal {G})\), and \(\mathscr {N}(\mathcal {X})=\mathscr {N}\left( \mathcal {G}{*_m}(\mathcal {A}{*_n}\mathcal {G})^{\#}\right) \supseteq \mathscr {N}\left( (\mathcal {A}{*_n}\mathcal {G})^{\#}\right) =\mathscr {N}\left( \mathcal {A}{*_n}\mathcal {G}\right) \supseteq \mathscr {N}(\mathcal {G})\). Since

$$\begin{aligned} \text{ rshrank }(\mathcal {X})=\text{ rshrank }(\mathcal {G}{*_m}(\mathcal {A}{*_n}\mathcal {G})^{\#})=\text{ rank }({{G}}({A}{G})^{\#})\le \text{ rank }(G)=\text{ rshrank }(\mathcal {G}), \end{aligned}$$

it is possible to verify:

$$\begin{aligned} \text{ rshrank }(\mathcal {X})= & {} \text{ rshrank }(\mathcal {G}{*_m}(\mathcal {A}{*_n}\mathcal {G})^{\#})=\text{ rank }({G}({A}{G})^{\#})\\\ge & {} \text{ rank }(AG({A}{G})^{\#})=\text{ rank }(AG)=\text{ dim }(\mathscr {R}(AG))\\= & {} \text{ rshdim }(\mathscr {R}(\mathcal {A}{*_n}\mathcal {G}))=\prod _{i=1}^s S_i=\text{ rshdim }(\mathscr {R}(\mathcal {G}))=\text{ rshrank }(\mathcal {G}). \end{aligned}$$

Therefore, \(\text{ rshrank }(\mathcal {X})=\text{ rshrank }(\mathcal {G})\). Using Theorem 2.12, we conclude \(\mathscr {R}(\mathcal {X})=\mathscr {R}(\mathcal {G}{*_m}(\mathcal {A}{*_n}\mathcal {G})^{\#})=\mathscr {R}(\mathcal {G})\). As \(\mathcal {S}\subseteq \mathscr {N}(\mathcal {X})\) and:

$$\begin{aligned} \text{ rshdim }(\mathscr {N}(\mathcal {X}))=\prod _{i=1}^m M_i-\text{ rshdim }(\mathscr {R}(\mathcal {X}))=\prod _{i=1}^m M_i-\prod _{i=1}^s S_i=\text{ rshdim }(\mathcal {S}), \end{aligned}$$

it follows that \(\mathscr {N}(\mathcal {X})=\mathcal {S}\). Hence, \(\mathcal {A}_{\mathscr {R}(\mathcal {G}),\mathscr {N}(\mathcal {G})}^{(2)}=\mathcal {G}{*_m}(\mathcal {A}{*_n}\mathcal {G})^{\#}\).

The dual statements \(\text{ ind }(\mathcal {G}{*_m}\mathcal {A})=1\) and \(\mathcal {A}_{\mathscr {R}(\mathcal {G}),\mathscr {N}(\mathcal {G})}^{(2)}=(\mathcal {G}{*_m}\mathcal {A})^{\#}{*_n}\mathcal {G}\) can be verified similarly. \(\square \)
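The construction above can be checked numerically. The following minimal sketch (our illustration, not part of the original development) uses NumPy: the helper `ein` realizes the Einstein product via `np.tensordot`, and the group inverse of the matricized \(\mathcal {A}{*_n}\mathcal {G}\) is computed through a full-rank factorization \(B=FG\), for which \(B^{\#}=F((GF)^{-1})^2G\) when \(\mathrm {ind}(B)=1\). Shapes, seeds, and the SVD tolerance are assumptions made for the example.

```python
import numpy as np

ein = lambda A, B, k: np.tensordot(A, B, axes=k)   # Einstein product *_k

def group_inverse(B):
    """Group inverse of a square matrix of index 1, via a full-rank
    factorization B = F G: then B^# = F ((G F)^{-1})^2 G."""
    U, s, Vt = np.linalg.svd(B)
    r = int(np.sum(s > 1e-12 * s[0]))       # numerical rank cutoff
    Ff, Gf = U[:, :r], s[:r, None] * Vt[:r]
    M = np.linalg.inv(Gf @ Ff)              # invertible exactly when ind(B) = 1
    return Ff @ M @ M @ Gf

rng = np.random.default_rng(6)
A = rng.standard_normal((2, 2, 2, 2))               # A in R^{M(2) x N(2)}
G = (rng.standard_normal((4, 2)) @
     rng.standard_normal((2, 4))).reshape(2, 2, 2, 2)   # rshrank(G) = 2

AG = ein(A, G, 2).reshape(4, 4)                     # matricized A *_2 G
X = ein(G, group_inverse(AG).reshape(2, 2, 2, 2), 2)   # G *_2 (A *_2 G)^#
assert np.allclose(ein(ein(X, A, 2), X, 2), X)      # X is a {2}-inverse of A
```

Under the row-major matricization, Einstein products reduce to ordinary matrix products, so the tensor identities of the lemma can be tested at the matrix level.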

Theorem 3.7

(a). The following representations are valid for \(\mathcal {A}\in \mathbb {C}^\mathbf{M (m)\times \mathbf{N} (n)}\):

  1. (i)

    \(\mathcal {A}^\dagger =\mathcal {A}^{(2)}_{\mathscr {R}(\mathcal {A}^*),{\mathscr {N}}(\mathcal {A}^*)}\)

  2. (ii)

    \(\mathcal {A}^\dagger _{\mathcal {M},\mathcal {N}}=\mathcal {A}^{(2)}_{\mathscr {R}(\mathcal {A}^{\sharp }),{\mathscr {N}}(\mathcal {A}^{\sharp })}\), where \(\mathcal {A}^{\sharp }=\mathcal {N}^{-1}{*_m}\mathcal {A}^*{*_n}\mathcal {M}\) and \(\mathcal {M}\) and \(\mathcal {N}\) are Hermitian positive definite tensors.

(b). For \(\mathcal {A}\in \mathbb {C}^\mathbf{N (n)\times \mathbf{N} (n)}\), the following holds:

  1. (i)

    \(\mathcal {A}^{\#}=\mathcal {A}^{(2)}_{\mathscr {R}(\mathcal {A}),{\mathscr {N}}(\mathcal {A})}\) if \(\mathrm {ind}(\mathcal {A})=1\).

  2. (ii)

    \(\mathcal {A}^{{\mathrm D}}=\mathcal {A}^{(2)}_{\mathscr {R}(\mathcal {A}^k),{\mathscr {N}}(\mathcal {A}^k)}\) if \(\mathrm {ind}(\mathcal {A})=k\).

Proof

  1. a(i)

It is obvious that \(\mathcal {A}^\dagger =\mathcal {A}^{(2)}\). Now, let \(\mathcal {X}\in {\mathscr {N}}(\mathcal {A}^\dagger )\). This yields \(\mathcal {A}^\dagger {*_m}\mathcal {X}=\mathcal {O}\). Pre-multiplying by \(\mathcal {A}\) and applying the Moore–Penrose inverse properties, we obtain \((\mathcal {A}^\dagger )^*{*_m}\mathcal {A}^*{*_m}\mathcal {X}=\mathcal {O}\). Again, pre-multiplying the last equality by \(\mathcal {A}^*\), we get \((\mathcal {A}{*_n}\mathcal {A}^\dagger {*_m}\mathcal {A})^*{*_m}\mathcal {X}=\mathcal {O}\). Thus, \(\mathcal {A}^*{*_m}\mathcal {X}=\mathcal {O}\), and hence, \({\mathscr {N}}(\mathcal {A}^\dagger )\subseteq {\mathscr {N}}(\mathcal {A}^*)\). Similarly, we can show \({\mathscr {N}}(\mathcal {A}^*)\subseteq {\mathscr {N}}(\mathcal {A}^\dagger )\). Therefore, \({\mathscr {N}}(\mathcal {A}^\dagger )={\mathscr {N}}(\mathcal {A}^*)\). As \(\mathcal {A}^*=\mathcal {A}^\dagger {*_m}\mathcal {A}{*_n}\mathcal {A}^*\) and \(\mathcal {A}^\dagger =\mathcal {A}^*{*_m}(\mathcal {A}^\dagger )^*{*_n}\mathcal {A}^\dagger \), it follows from the definition of the range space that \(\mathscr {R}(\mathcal {A}^\dagger )=\mathscr {R}(\mathcal {A}^*)\).

  2. a(ii)

Clearly, \(\mathcal {A}^\dagger _{\mathcal {M},\mathcal {N}}=\mathcal {A}^{(2)}\). Now, let \(\mathcal {X}\in {\mathscr {N}}(\mathcal {A}^\dagger _{\mathcal {M},\mathcal {N}})\). This yields \(\mathcal {A}^\dagger _{\mathcal {M},\mathcal {N}}{*_m}\mathcal {X}=\mathcal {O}\). Pre-multiplying by \(\mathcal {M}{*_m}\mathcal {A}\) and applying the weighted Moore–Penrose inverse definition, we obtain \((\mathcal {A}^\dagger _{\mathcal {M},\mathcal {N}})^*{*_m}\mathcal {A}^*{*_m}\mathcal {M}{*_m}\mathcal {X}=\mathcal {O}\). Again, pre-multiplying by \(\mathcal {A}^*\), we get \(\mathcal {A}^*{*_m}\mathcal {M}{*_m}\mathcal {X}=\mathcal {O}\). Furthermore, pre-multiplying by \(\mathcal {N}^{-1}\), we have \(\mathcal {A}^{\sharp }{*_m}\mathcal {X}=\mathcal {O}\). Thus, \({\mathscr {N}}(\mathcal {A}^\dagger _{\mathcal {M},\mathcal {N}})\subseteq {\mathscr {N}}(\mathcal {A}^{\sharp })\). Similarly, we can show \({\mathscr {N}}(\mathcal {A}^{\sharp })\subseteq {\mathscr {N}}(\mathcal {A}^\dagger _{\mathcal {M},\mathcal {N}})\). Therefore, \({\mathscr {N}}(\mathcal {A}^\dagger _{\mathcal {M},\mathcal {N}})={\mathscr {N}}(\mathcal {A}^{\sharp }).\) Let \(\mathcal {A}^\dagger _{\mathcal {M},\mathcal {N}}=\mathcal {X}\). Now:

    $$\begin{aligned} \mathcal {N}{*_n}\mathcal {X}= & {} \mathcal {N}{*_n}\mathcal {X}{*_m}\mathcal {A}{*_n}\mathcal {X}=\mathcal {A}^*{*_m}\mathcal {X}^*{*_n}\mathcal {N}{*_n}\mathcal {X}\\= & {} \mathcal {A}^*{*_m}\mathcal {X}^*{*_n}\mathcal {A}^*{*_m}\mathcal {M}^*{*_m}\mathcal {M}^{-1}{*_m}\mathcal {X}^*{*_n}\mathcal {N}{*_n}\mathcal {X}\\= & {} \mathcal {A}^*{*_m}\mathcal {M}{*_m}\mathcal {Z}, \text{ where } \mathcal {Z}=\mathcal {A}{*_n}\mathcal {X}{*_m}\mathcal {M}^{-1}{*_m}\mathcal {X}^*{*_n}\mathcal {N}{*_n}\mathcal {X}. \end{aligned}$$

Thus, \({\mathscr {R}}(\mathcal {A}^\dagger _{\mathcal {M},\mathcal {N}})\subseteq {\mathscr {R}}(\mathcal {A}^{\sharp })\). Similarly, we can show the reverse relation.

  1. b(i)

Starting from \(\mathcal {A}^{\#}=\mathcal {A}^{(2)}\) and using \(\mathcal {A}=\mathcal {A}^{\#}{*_n}\mathcal {A}^2\) and \(\mathcal {A}^{\#}=\mathcal {A}{*_n}(\mathcal {A}^{\#})^2\), one obtains \(\mathscr {R}(\mathcal {A})=\mathscr {R}(\mathcal {A}^{\#})\). Now, let \(\mathcal {X}\in {\mathscr {N}}(\mathcal {A})\). This implies \(\mathcal {A}{*_n}\mathcal {X}=\mathcal {O}\). Pre-multiplying by \((\mathcal {A}^{\#})^2\), we conclude \(\mathcal {A}^{\#}{*_n}\mathcal {X}=\mathcal {O}\). Thus, \({\mathscr {N}}(\mathcal {A})\subseteq {\mathscr {N}}(\mathcal {A}^{\#})\). Similarly, we can show \({\mathscr {N}}(\mathcal {A}^{\#})\subseteq {\mathscr {N}}(\mathcal {A})\). Hence, \({\mathscr {N}}(\mathcal {A}^{\#})={\mathscr {N}}(\mathcal {A})\).

  2. b(ii)

Since \(\mathcal {A}^k=\mathcal {A}^{D}{*_n}\mathcal {A}^{k+1}\) and \(\mathcal {A}^{D}=\mathcal {A}^k{*_n}(\mathcal {A}^{D})^{k+1}\), we have \(\mathscr {R}(\mathcal {A}^k)=\mathscr {R}(\mathcal {A}^{D})\). The null space condition follows along the same lines as b(i). \(\square \)

Lemma 3.8

Let \(\mathcal {A}\in \mathbb {C}^\mathbf{M (m)\times \mathbf{N} (n)}\) have the full-rank decomposition \(\mathcal {A}=\mathcal {F}{*_l}\mathcal {G}\), where \(\mathcal {F}\in \mathbb {C}^\mathbf{M (m)\times \mathbf{L} (l)}\) and \(\mathcal {G}\in \mathbb {C}^\mathbf{L (l)\times \mathbf{N} (n)}\). Then, it follows that \(\mathscr {R}(\mathcal {A})=\mathscr {R}(\mathcal {F})\) and \(\mathscr {N}(\mathcal {A})=\mathscr {N}(\mathcal {G})\).

Proof

From the definitions of the range and null space, it is trivial that \(\mathscr {R}(\mathcal {A})\subseteq \mathscr {R}(\mathcal {F})\) and \(\mathscr {N}(\mathcal {G})\subseteq \mathscr {N}(\mathcal {A})\). Since \(\mathcal {A} = \mathcal {F}{*_l}\mathcal {G}\) is a full-rank decomposition of \(\mathcal {A}\), it follows that \(\mathcal {G}\) is right invertible and \(\mathcal {F}\) is left invertible. Let \(\mathcal {G}_R^{-1}\) be the right inverse of \(\mathcal {G}\) and \(\mathcal {F}_L^{-1}\) be the left inverse of \(\mathcal {F}\). Pre-multiplying \(\mathcal {A}\) by \(\mathcal {F}_L^{-1}\), we obtain \(\mathcal {G}=\mathcal {F}_L^{-1}{*_m}\mathcal {A}.\) Thus, \(\mathscr {N}(\mathcal {A})\subseteq \mathscr {N}(\mathcal {G}).\) Post-multiplying \(\mathcal {A}\) by \(\mathcal {G}_R^{-1}\), we get \(\mathcal {F} = \mathcal {A}{*_n}\mathcal {G}_R^{-1}\). This implies \(\mathscr {R}(\mathcal {F})\subseteq \mathscr {R}(\mathcal {A})\) and finalizes the proof. \(\square \)
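In practice, a full-rank factorization of this kind can be computed by matricizing the tensor and factorizing the resulting matrix. The following sketch (a minimal NumPy illustration with a single middle mode, i.e., \(l=1\); the function name, SVD-based factorization, and tolerance are our choices, not prescribed by the lemma) returns factors with \(\mathcal {A}=\mathcal {F}{*_1}\mathcal {G}\):

```python
import numpy as np

def full_rank_factorization(A, m):
    """Full-rank factorization A = F *_1 G of a tensor A whose first m
    modes are the 'row' modes: matricize, take a rank-revealing SVD,
    and fold the factors back.  Returns (F, G, rank)."""
    row_shape, col_shape = A.shape[:m], A.shape[m:]
    M = A.reshape(int(np.prod(row_shape)), int(np.prod(col_shape)))
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    r = int(np.sum(s > 1e-12 * s[0]))      # numerical reshaping rank
    F = U[:, :r].reshape(*row_shape, r)    # order-(m+1) left factor
    G = (s[:r, None] * Vt[:r]).reshape(r, *col_shape)
    return F, G, r

# Example: an order-4 tensor of reshaping rank 2
rng = np.random.default_rng(0)
A = np.tensordot(rng.standard_normal((3, 2, 2)),
                 rng.standard_normal((2, 2, 2)), axes=1)
F, G, r = full_rank_factorization(A, m=2)
assert np.allclose(np.tensordot(F, G, axes=1), A)   # A = F *_1 G
```

The matricization identifies the Einstein product with matrix multiplication, which is how the left/right invertibility of the factors used in the proof can be checked numerically.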

Theorem 3.9

Let \(\mathcal {A}\in \mathbb {C}^\mathbf{M (m)\times \mathbf{N} (n)}\) with \(\text{ rshrank }(\mathcal {A})=\prod _{i=1}^r R_i\). Suppose there exists a tensor \(\mathcal {W}\in \mathbb {C}^\mathbf{N (n)\times \mathbf{M} (m)} \), such that \(\text{ rshdim }(\mathscr {R}(\mathcal {W}))=\prod _{i=1}^s S_i\le \prod _{i=1}^r R_i\) and \(\text{ rshdim }(\mathscr {N}(\mathcal {W}))=\prod _{i=1}^m M_i-\prod _{i=1}^s S_i\). Let \(\mathcal {W}=\mathcal {F}{*_s}\mathcal {G}\) be a full-rank factorization of \(\mathcal {W}\). If \(\mathcal {A}\) has a \(\{2\}\)-inverse \(\mathcal {A}_{\mathscr {R}(\mathcal {F}),\mathscr {N}(\mathcal {G})}^{(2)}\), then:

  1. (a)

    \(\mathcal {G}{*_m}\mathcal {A}{*_n}\mathcal {F}\) is an invertible tensor;

  2. (b)

    \(\mathcal {A}_{\mathscr {R}(\mathcal {W}),\mathscr {N}(\mathcal {W})}^{(2)}=\mathcal {F}{*_s}(\mathcal {G}{*_m}\mathcal {A}{*_n}\mathcal {F})^{-1}{*_s}\mathcal {G}=\mathcal {A}_{\mathscr {R}(\mathcal {F}),\mathscr {N}(\mathcal {G})}^{(2)}\).

Proof

Let \(\mathcal {W}=\mathcal {F}{*_s}\mathcal {G}\) be a full-rank factorization of \(\mathcal {W}\). An application of Lemma 3.8 gives:

$$\begin{aligned} \mathscr {R}(\mathcal {W})=\mathscr {R}(\mathcal {F}) \text{ and } {\mathscr {N}}(\mathcal {W})={\mathscr {N}}(\mathcal {G}). \end{aligned}$$
(3.1)

Now, \(\mathscr {R}(\mathcal {A}{*_n}\mathcal {F})=\mathcal {A}{*_n}\mathscr {R}(\mathcal {F})=\mathcal {A}{*_n}\mathscr {R}(\mathcal {W})\). Therefore, by Lemma 3.4, we obtain:

$$\begin{aligned} \text{ rshdim }(\mathscr {R}(\mathcal {A}{*_n}\mathcal {F}))=\prod _{i=1}^m M_i-\left( \prod _{i=1}^m M_i-\prod _{i=1}^s S_i\right) =\prod _{i=1}^s S_i. \end{aligned}$$

Thus, \(\text{ rshrank }(\mathcal {A}{*_n}\mathcal {F})=\mathbf{S} (s)=\text{ rshrank }(\mathcal {W})=\text{ rshrank }(\mathcal {G})\). A similar argument leads to \(\text{ rshrank }(\mathcal {A}{*_n}\mathcal {W})=\prod _{i=1}^s S_i\). This implies that \(\mathcal {A}{*_n}\mathcal {W}=\mathcal {A}{*_n}\mathcal {F}{*_s}\mathcal {G}\) is a full-rank decomposition of \(\mathcal {A}{*_n}\mathcal {W}\). The non-singularity of \(\mathcal {G}{*_m}\mathcal {A}{*_n}\mathcal {F}\) follows from Lemma 3.6 and Theorem 2.15. Since

$$\begin{aligned} \mathcal {F}{*_s}(\mathcal {G}{*_m}\mathcal {A}{*_n}\mathcal {F})^{-1}{*_s}\mathcal {G}{*_m}\mathcal {A}{*_n}\mathcal {F}{*_s}(\mathcal {G}{*_m}\mathcal {A}{*_n}\mathcal {F})^{-1}{*_s}\mathcal {G}=\mathcal {F}{*_s}(\mathcal {G}{*_m}\mathcal {A}{*_n}\mathcal {F})^{-1}{*_s}\mathcal {G}, \end{aligned}$$

it follows that \(\mathcal {F}{*_s}(\mathcal {G}{*_m}\mathcal {A}{*_n}\mathcal {F})^{-1}{*_s}\mathcal {G}\) is a \(\{2\}\)-inverse of \(\mathcal {A}\). From the range and null space definition:

$$\begin{aligned} \mathscr {R}\left( \mathcal {F}{*_s}(\mathcal {G}{*_m}\mathcal {A}{*_n}\mathcal {F})^{-1}{*_s}\mathcal {G}\right) \subseteq \mathscr {R}(\mathcal {F}) \text{ and } \mathscr {N}(\mathcal {G})\subseteq \mathscr {N}\left( \mathcal {F}{*_s}(\mathcal {G}{*_m}\mathcal {A}{*_n}\mathcal {F})^{-1}{*_s}\mathcal {G}\right) . \end{aligned}$$
(3.2)

Next, we will show the reverse relation. As

$$\begin{aligned} \mathcal {F}=\mathcal {F}{*_s}(\mathcal {G}{*_m}\mathcal {A}{*_n}\mathcal {F})^{-1}{*_s}(\mathcal {G}{*_m}\mathcal {A}{*_n}\mathcal {F})=\mathcal {F}{*_s}(\mathcal {G}{*_m}\mathcal {A}{*_n}\mathcal {F})^{-1}{*_s}\mathcal {G}{*_m}(\mathcal {A}{*_n}\mathcal {F}), \end{aligned}$$

we have:

$$\begin{aligned} \mathscr {R}(\mathcal {F})\subseteq \mathscr {R}\left( \mathcal {F}{*_s}(\mathcal {G}{*_m}\mathcal {A}{*_n}\mathcal {F})^{-1}{*_s}\mathcal {G}\right) . \end{aligned}$$
(3.3)

Now, let \(\mathcal {X}\in \mathscr {N}\left( \mathcal {F}{*_s}(\mathcal {G}{*_m}\mathcal {A}{*_n}\mathcal {F})^{-1}{*_s}\mathcal {G}\right) \). This implies \(\mathcal {F}{*_s}(\mathcal {G}{*_m}\mathcal {A}{*_n}\mathcal {F})^{-1}{*_s}\mathcal {G}{*_m}\mathcal {X}=\mathcal {O}\). Pre-multiplying the last equality by the left inverse of \(\mathcal {F}\), we obtain \((\mathcal {G}{*_m}\mathcal {A}{*_n}\mathcal {F})^{-1}{*_s}\mathcal {G}{*_m}\mathcal {X}=\mathcal {O}\). Again, pre-multiplication with \(\mathcal {G}{*_m}\mathcal {A}{*_n}\mathcal {F}\) leads to \(\mathcal {G}{*_m}\mathcal {X}=\mathcal {O}\). Thus:

$$\begin{aligned} \mathscr {N}{\left( \mathcal {F}{*_s}(\mathcal {G}{*_m}\mathcal {A}{*_n}\mathcal {F})^{-1}{*_s}\mathcal {G}\right) }\subseteq \mathscr {N}(\mathcal {G}). \end{aligned}$$
(3.4)

Using (3.1), (3.2), (3.3), and (3.4), we obtain:

$$\begin{aligned} \mathcal {A}_{\mathscr {R}(\mathcal {W}),\mathscr {N}(\mathcal {W})}^{(2)}=\mathcal {F}{*_s}(\mathcal {G}{*_m}\mathcal {A}{*_n}\mathcal {F})^{-1}{*_s}\mathcal {G}. \end{aligned}$$

To show the second part of (b), it is sufficient to show \({\mathscr {N}}(\mathcal {W})\subseteq {\mathscr {N}}(\mathcal {G})\). Let \(\mathcal {X}\in {\mathscr {N}}(\mathcal {W})\). Then, \(\mathcal {W}{*_m}\mathcal {X}=\mathcal {O}\), which in turn gives \({\mathcal {F}}{*_s}\mathcal {G}{*_m}\mathcal {X}=\mathcal {O}\). Pre-multiplying by the left inverse of \(\mathcal {F}\), we obtain \(\mathcal {G}{*_m}\mathcal {X}=\mathcal {O}\). Thus, \({\mathscr {N}}(\mathcal {W})\subseteq {\mathscr {N}}(\mathcal {G})\), and hence:

$$\begin{aligned} \mathcal {A}_{\mathscr {R}(\mathcal {W}),\mathscr {N}(\mathcal {W})}^{(2)}=\mathcal {F}{*_s}(\mathcal {G}{*_m}\mathcal {A}{*_n}\mathcal {F})^{-1}{*_s}\mathcal {G}= \mathcal {A}_{\mathscr {R}(\mathcal {F}),\mathscr {N}(\mathcal {G})}^{(2)}. \end{aligned}$$

The proof is completed. \(\square \)
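The representation in part (b) is directly computable. The sketch below (NumPy; the shapes, the seed, and the choice \(\prod _{i=1}^s S_i=2\) are illustrative assumptions, and we rely on the core \(\mathcal {G}{*_m}\mathcal {A}{*_n}\mathcal {F}\) being invertible, as part (a) requires) forms \(\mathcal {F}{*_s}(\mathcal {G}{*_m}\mathcal {A}{*_n}\mathcal {F})^{-1}{*_s}\mathcal {G}\) and checks the \(\{2\}\)-inverse property:

```python
import numpy as np

ein = lambda A, B, k: np.tensordot(A, B, axes=k)   # Einstein product *_k

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 2, 2, 2))   # A in R^{M(2) x N(2)}, M = N = (2,2)
F = rng.standard_normal((2, 2, 2))      # F in R^{N(2) x S(1)}, S = (2,)
G = rng.standard_normal((2, 2, 2))      # G in R^{S(1) x M(2)}

core = ein(ein(G, A, 2), F, 2)          # G *_2 A *_2 F, an s x s matrix
X = ein(ein(F, np.linalg.inv(core), 1), G, 1)   # F *_1 core^{-1} *_1 G

# X is a {2}-inverse of A: X *_2 A *_2 X = X
assert np.allclose(ein(ein(X, A, 2), X, 2), X)
```

Here the single middle mode of dimension 2 plays the role of \(\mathbf {S}(s)\); for generic Gaussian data the core is invertible, which mirrors the existence assumption of the theorem.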

Lemma 3.10

For a given tensor \(\mathcal {A}\in \mathbb C^\mathbf{M (m)\times \mathbf{N} (n)}\) and \(\mathcal {X}\in \mathcal {A}\{1\}\), we have \(\mathcal {X}\in \mathcal {A}\{1,2\}\) if and only if \(\text{ rshrank }(\mathcal {X})=\text{ rshrank }(\mathcal {A})\).

Proof

The "only if" part is trivial. For the converse, let \(\text{ rshrank }(\mathcal {X})=\text{ rshrank }(\mathcal {A})\). Clearly, \(\mathscr {R}(\mathcal {X}{*_m}\mathcal {A})\subseteq \mathscr {R}(\mathcal {X})\) and \(\text{ rshrank }(\mathcal {X}{*_m}\mathcal {A})\le \text{ rshrank }(\mathcal {X})\). Now:

$$\begin{aligned} \text{ rshrank }(\mathcal {X}{*_m}\mathcal {A})\ge \text{ rshrank }(\mathcal {A}{*_n}\mathcal {X}{*_m}\mathcal {A})= \text{ rshrank }(\mathcal {A})=\text{ rshrank }(\mathcal {X}). \end{aligned}$$

Therefore, by Theorem 2.12, \(\mathscr {R}(\mathcal {X}{*_m}\mathcal {A})=\mathscr {R}(\mathcal {X})\). This implies \(\mathcal {X}={\mathcal {X}{*_m}\mathcal {A}{*_n}\mathcal {Y}}\) for some \(\mathcal {Y}\). Multiplying by \(\mathcal {A}\) on the left and using \(\mathcal {A}{*_n}\mathcal {X}{*_m}\mathcal {A}=\mathcal {A}\), we get \(\mathcal {A}{*_n}\mathcal {X}=\mathcal {A}{*_n}\mathcal {Y}\). Now:

$$\begin{aligned} \mathcal {X}{*_m}\mathcal {A}{*_n}\mathcal {X}=\mathcal {X}{*_m}\mathcal {A}{*_n}\mathcal {Y}=\mathcal {X}. \end{aligned}$$

\(\square \)
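Lemma 3.10 can be illustrated numerically: the reshaping rank is the rank of a matricization, a \(\{1,2\}\)-inverse attains \(\text{ rshrank }(\mathcal {A})\), and a generic \(\{1\}\)-inverse exceeds it. The sketch below (NumPy) uses the standard matrix-level parametrization \(\mathcal {A}^\dagger +(\mathcal {I}-\mathcal {A}^\dagger {*_m}\mathcal {A}){*_n}\mathcal {W}\) of a family of \(\{1\}\)-inverses; the shapes, seed, and this parametrization are our assumptions for illustration.

```python
import numpy as np

def rshrank(T, m):
    """Reshaping rank: rank of the matricization with the first m modes as rows."""
    return np.linalg.matrix_rank(T.reshape(int(np.prod(T.shape[:m])), -1))

rng = np.random.default_rng(2)
# Order-4 tensor A in R^{(2,2) x (2,2)} with rshrank(A) = 2
Am = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 4))
A = Am.reshape(2, 2, 2, 2)

# X = A^dagger (folded back) is a {1,2}-inverse, so the ranks agree
X = np.linalg.pinv(Am).reshape(2, 2, 2, 2)
assert rshrank(X, 2) == rshrank(A, 2) == 2

# A generic {1}-inverse Y = A^dagger + (I - A^dagger A) W is not a
# {2}-inverse and has strictly larger reshaping rank
Y = (np.linalg.pinv(Am) + (np.eye(4) - np.linalg.pinv(Am) @ Am)
     @ rng.standard_normal((4, 4))).reshape(2, 2, 2, 2)
assert rshrank(Y, 2) > rshrank(A, 2)
```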

Corollary 3.11

Let \(\mathcal {A}\in \mathbb C^\mathbf{M (m)\times \mathbf{N} (n)}\). Then, any two of the following three statements imply the third:

  1. (a)

    \(\mathcal {X}\in \mathcal {A}\{1\}\);

  2. (b)

    \(\mathcal {X}\in \mathcal {A}\{1,2\}\);

  3. (c)

    \(\text{ rshrank }(\mathcal {X})=\text{ rshrank }(\mathcal {A})\).

Proposition 3.12

Let \(\mathcal {A}\in \mathbb C^\mathbf{M (m)\times \mathbf{N} (n)}\) satisfy \(\text{ rshrank }(\mathcal {A})=\prod _{i=1}^r R_i\), and let \(\mathcal {F}\in \mathbb C^\mathbf{N (n)\times \mathbf{S} (s)}\), \(\mathcal {G}\in \mathbb C^\mathbf{S (s)\times \mathbf{M} (m)}\). For \(0<\prod _{i=1}^s S_i\le \prod _{i=1}^r R_i\), the following representations of some outer generalized inverses hold:

Proof

(a). Let \(\mathcal {X}=\mathcal {F}{*_s}(\mathcal {G}{*_m}\mathcal {A}{*_n}\mathcal {F})^{-1}{*_s}\mathcal {G}\). Now:

$$\begin{aligned} \mathcal {X}{*_m}\mathcal {A}{*_n}\mathcal {X}= & {} \mathcal {F}{*_s}(\mathcal {G}{*_m}\mathcal {A}{*_n}\mathcal {F})^{-1}{*_s}\mathcal {G}{*_m}\mathcal {A}{*_n}\mathcal {F}{*_s}(\mathcal {G}{*_m}\mathcal {A}{*_n}\mathcal {F})^{-1}{*_s}\mathcal {G}\\= & {} \mathcal {F}{*_s}(\mathcal {G}{*_m}\mathcal {A}{*_n}\mathcal {F})^{-1}{*_s}\mathcal {G}=\mathcal {X} . \end{aligned}$$

Thus, \(\mathcal {X}\) is an outer inverse of \(\mathcal {A}\). Clearly, \(\mathscr {R}(\mathcal {X})\subseteq \mathscr {R}(\mathcal {F})\) and \({\mathscr {N}}(\mathcal {G})\subseteq {\mathscr {N}}(\mathcal {X})\). Let \(\mathcal {Y}\in {\mathscr {N}}(\mathcal {X})\). This gives \(\mathcal {F}{*_s}(\mathcal {G}{*_m}\mathcal {A}{*_n}\mathcal {F})^{-1}{*_s}\mathcal {G}{*_m}\mathcal {Y}=\mathcal {O}\). Multiplying on the left by \(\mathcal {G}{*_m}\mathcal {A}\), we obtain \(\mathcal {G}{*_m} \mathcal {Y}=\mathcal {O}\). Thus, \(\mathcal {Y}\in {\mathscr {N}}(\mathcal {G})\). Hence, \({\mathscr {N}}(\mathcal {G})={\mathscr {N}}(\mathcal {X})\). Moreover, \(\mathcal {F}=\mathcal {F}{*_s}(\mathcal {G}{*_m}\mathcal {A}{*_n}\mathcal {F})^{-1}{*_s}\mathcal {G}{*_m}\mathcal {A}{*_n}\mathcal {F}=\mathcal {X}{*_m}\mathcal {A}{*_n}\mathcal {F}\), so \(\mathscr {R}(\mathcal {F})\subseteq \mathscr {R}(\mathcal {X})\), and hence \(\mathscr {R}(\mathcal {F})=\mathscr {R}(\mathcal {X})\).

(b). Let \(\mathcal {X}=(\mathcal {G}{*_m}\mathcal {A})^\dagger {*_s}\mathcal {G}\). Observe:

$$\begin{aligned} \mathcal {X}{*_m}\mathcal {A}{*_n}\mathcal {X}= (\mathcal {G}{*_m}\mathcal {A})^\dagger {*_s}\mathcal {G}{*_m}\mathcal {A}{*_n}(\mathcal {G}{*_m}\mathcal {A})^\dagger {*_s}\mathcal {G}=(\mathcal {G}{*_m}\mathcal {A})^\dagger {*_s}\mathcal {G}=\mathcal {X} \end{aligned}$$

and

$$\begin{aligned} (\mathcal {X}{*_m}\mathcal {A})^*= (\mathcal {G}{*_m}\mathcal {A})^\dagger {*_s}\mathcal {G}{*_m}\mathcal {A}=\mathcal {X}{*_m}\mathcal {A}. \end{aligned}$$

Thus, the second part of (b) is verified.

Again, let \(\mathcal {X}=(\mathcal {G}{*_m}\mathcal {A})^*{*_s}\left( \mathcal {G}{*_m}\mathcal {A}{*_n}(\mathcal {G}{*_m}\mathcal {A})^*\right) ^{-1}{*_s}\mathcal {G}\). It can be easily verified that \( \mathcal {X}{*_m}\mathcal {A}{*_n}\mathcal {X}=\mathcal {X} \) and \( (\mathcal {X}{*_m}\mathcal {A})^*= \mathcal {X}{*_m}\mathcal {A}\). Also, it is trivial that \(\mathscr {R}(\mathcal {X})\subseteq \mathscr {R}((\mathcal {G}{*_m}\mathcal {A})^*)\) and \({\mathscr {N}}(\mathcal {G})\subseteq {\mathscr {N}}(\mathcal {X})\). Let \(\mathcal {Y}\in {\mathscr {N}}(\mathcal {X})\). This implies:

$$\begin{aligned} (\mathcal {G}{*_m}\mathcal {A})^*{*_s}\left( \mathcal {G}{*_m}\mathcal {A}{*_n}(\mathcal {G}{*_m}\mathcal {A})^*\right) ^{-1}{*_s}\mathcal {G}{*_m}\mathcal {Y}=\mathcal {O}. \end{aligned}$$

Pre-multiplying by \(\mathcal {G}{*_m}\mathcal {A}\), we get \(\mathcal {G}{*_m}\mathcal {Y}=\mathcal {O}\), which yields \({\mathscr {N}}(\mathcal {X})\subseteq {\mathscr {N}}(\mathcal {G})\). Thus, \({\mathscr {N}}(\mathcal {X})={\mathscr {N}}(\mathcal {G})\). Now, \((\mathcal {G}{*_m}\mathcal {A})^*=(\mathcal {G}{*_m}\mathcal {A})^*{*_s}\left( \mathcal {G}{*_m}\mathcal {A}{*_n}(\mathcal {G}{*_m}\mathcal {A})^*\right) ^{-1}{*_s}\mathcal {G}{*_m}\mathcal {A}{*_n}(\mathcal {G}{*_m}\mathcal {A})^*=\mathcal {X}{*_m}\mathcal {A}{*_n}(\mathcal {G}{*_m}\mathcal {A})^*\). Therefore, \(\mathscr {R}((\mathcal {G}{*_m}\mathcal {A})^*)\subseteq \mathscr {R}(\mathcal {X})\), and hence, \(\mathscr {R}((\mathcal {G}{*_m}\mathcal {A})^*)=\mathscr {R}(\mathcal {X})\).

(c) Similar to part (b).

(d) From Corollary 3.11, it is enough to show \(\text{ rshrank }(\mathcal {F}{*_s}(\mathcal {G}{*_m}\mathcal {A}{*_n}\mathcal {F})^{-1}{*_s}\mathcal {G})=\mathbf{R} (r)\). Clearly, \(\text{ rshrank }(\mathcal {F}{*_s}(\mathcal {G}{*_m}\mathcal {A}{*_n}\mathcal {F})^{-1}{*_s}\mathcal {G})\le \text{ rshrank }((\mathcal {G}{*_m}\mathcal {A}{*_n}\mathcal {F})^{-1})= \mathbf{R} (r)\). Now:

$$\begin{aligned} \text{ rshrank }(\mathcal {F}{*_s}(\mathcal {G}{*_m}\mathcal {A}{*_n}\mathcal {F})^{-1}{*_s}\mathcal {G})\ge & {} \text{ rshrank }(\mathcal {G}{*_m}\mathcal {A}{*_n}\mathcal {F}{*_s}(\mathcal {G}{*_m}\mathcal {A}{*_n}\mathcal {F})^{-1}{*_s}\mathcal {G}{*_m}\mathcal {A}{*_n}\mathcal {F}) \\\ge & {} \text{ rshrank }(\mathcal {G}{*_m}\mathcal {A}{*_n}\mathcal {F})=\mathbf{R} (r). \end{aligned}$$

Hence, the proof is complete. \(\square \)
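Part (b) admits a direct numerical check. In the sketch below (NumPy; the shapes and seed are illustrative, and the Moore–Penrose inverse of \(\mathcal {G}{*_m}\mathcal {A}\) is obtained through its matricization, an approach we assume here), the tensor \((\mathcal {G}{*_m}\mathcal {A})^\dagger {*_s}\mathcal {G}\) is verified to be a \(\{2,4\}\)-inverse of \(\mathcal {A}\):

```python
import numpy as np

ein = lambda A, B, k: np.tensordot(A, B, axes=k)   # Einstein product *_k

rng = np.random.default_rng(3)
A = rng.standard_normal((2, 2, 2, 2))   # A in R^{M(2) x N(2)}
G = rng.standard_normal((2, 2, 2))      # G in R^{S(1) x M(2)}, S = (2,)

GA = ein(G, A, 2)                        # G *_2 A, in R^{S(1) x N(2)}
# Moore-Penrose inverse of G *_2 A, computed through matricization
GA_pinv = np.linalg.pinv(GA.reshape(2, 4)).reshape(2, 2, 2)
X = ein(GA_pinv, G, 1)                   # X = (G *_2 A)^dagger *_1 G

# {2}-property: X *_2 A *_2 X = X
assert np.allclose(ein(ein(X, A, 2), X, 2), X)
# {4}-property: X *_2 A is Hermitian (symmetric here, as a 4x4 matricization)
XA = ein(X, A, 2).reshape(4, 4)
assert np.allclose(XA, XA.T)
```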

3.2 Generalized inversion arising from QR decomposition

Computation of generalized inverses based on full-rank factorizations has been studied by several authors in the recent literature (Behera et al. 2018, 2019; Liang and Zheng 2019). In this section, we extend the work of Stanimirović et al. (2012) to tensors via the Einstein product. In the next result, we employ the QR decomposition to obtain the outer inverse with specified range and null spaces.

Theorem 3.13

Let \(\mathcal {A}\in \mathbb C^\mathbf{M (m)\times \mathbf{N} (n)}, ~\mathcal {W}\in \mathbb C^\mathbf{N (n)\times \mathbf{M} (m)}\) with \({rshrank}(\mathcal {A})=\mathbf{R} (r)\) and \({rshrank}(\mathcal {W})=\mathbf{S} (s)\), where \(\mathbf{S} (s)\le \mathbf{R} (r)\). Suppose that the \(\mathcal {Q}{*_n}\mathcal {R}\) factorization of \(\mathcal {W}\) is given as:

$$\begin{aligned} \mathcal {W}{*_m}\mathcal {P}=\mathcal {Q}{*_n}\mathcal {R}, \end{aligned}$$
(3.5)

where \(\mathcal {P}\in \mathbb C^\mathbf{M (m)\times \mathbf{M} (m)}\) is a permutation tensor, \(\mathcal {Q}\in \mathbb C^\mathbf{N (n)\times \mathbf{N} (n)}\) satisfies \(\mathcal {Q}^*{*_n}\mathcal {Q}=\mathcal {I}\), and \(\mathcal {R}\in \mathbb C^\mathbf{N (n)\times \mathbf{M} (m)}\) with \({rshrank}(\mathcal {R})=\mathbf{S} (s)\). Assume that \(\mathcal {Q}\) and \(\mathcal {R}\) are partitioned in the form:

$$\begin{aligned} \mathcal {Q}= \left[ \begin{array}{cc} \mathcal {Q}_{1}&\mathcal {Q}_2 \end{array}\right] ,\quad \mathcal {R}= \left[ \begin{array}{cc} \mathcal {R}_{11} &{}\ \mathcal {R}_{12} \\ \mathcal {O} &{}\ \mathcal {O} \end{array}\right] = \left[ \begin{array}{c} \mathcal {R}_{1} \\ \mathcal {O} \end{array}\right] , \end{aligned}$$
(3.6)

where \(\mathcal {Q}_1\in \mathbb C^\mathbf{N (n)\times \mathbf{S} (s)}\) and \(\mathcal {R}_{11}\in \mathbb C^\mathbf{S (s)\times \mathbf{S} (s)}\) is nonsingular.

If \(\mathcal {A}\) has a \(\{2\}\)-inverse \(\mathcal {A}_{\mathscr {R}(\mathcal {W}),{\mathscr {N}}(\mathcal {W})}^{(2)}\), then:

  1. (a)

    \(\mathcal {R}_1{*_m}\mathcal {P}^*{*_m}\mathcal {A}{*_n}\mathcal {Q}_1\) is invertible;

  2. (b)

    \(\mathcal {A}_{\mathscr {R}(\mathcal {W}),{\mathscr {N}}(\mathcal {W})}^{(2)} = \mathcal {Q}_1{*_s}(\mathcal {R}_1{*_m}\mathcal {P}^*{*_m}\mathcal {A}{*_n}\mathcal {Q}_1)^{-1}{*_s}\mathcal {R}_1{*_m}\mathcal {P}^*\);

  3. (c)

    \(\mathcal {A}_{\mathscr {R}(\mathcal {W}),{\mathscr {N}}(\mathcal {W})}^{(2)} = \mathcal {A}_{\mathscr {R}(\mathcal {Q}_1),{\mathscr {N}}(\mathcal {R}_1{*_m}\mathcal {P}^*)}^{(2)}\);

  4. (d)

    \(\mathcal {A}_{\mathscr {R}(\mathcal {W}),{\mathscr {N}}(\mathcal {W})}^{(2)} = \mathcal {Q}_1{*_s}(\mathcal {Q}_1^*{*_n}\mathcal {W}{*_m}\mathcal {A}{*_n}\mathcal {Q}_1)^{-1}{*_s}\mathcal {Q}_1^*{*_n}\mathcal {W}\);

  5. (e)

    \(\mathcal {A}_{\mathscr {R}(\mathcal {W}),{\mathscr {N}}(\mathcal {W})}^{(2)}\in \mathcal {A}\{2\}_\mathbf{S (s)}\).

Proof

  1. (a)

Using the partition of \(\mathcal {Q}\) and \(\mathcal {R}\), we have \(\mathcal {W}=\mathcal {Q}{*_n}\mathcal {R}{*_m}\mathcal {P}^*=\mathcal {Q}_1{*_s}(\mathcal {R}_1{*_m}\mathcal {P}^*)\). It is easy to verify \(\text{ rshrank }(\mathcal {Q}_1)=\mathbf{S} (s)\) and \(\text{ rshrank }(\mathcal {R}_1)=\text{ rshrank }(\mathcal {R})=\mathbf{S} (s)\). Thus, \(\mathcal {W}=\mathcal {Q}_1{*_s}(\mathcal {R}_1{*_m}\mathcal {P}^*)\) is a full-rank factorization of \(\mathcal {W}\). By Theorem 3.9 (b), and the fact that \(\mathcal {A}_{\mathscr {R}(\mathcal {W}),{\mathscr {N}}(\mathcal {W})}^{(2)}\) is a \(\{2\}\)-inverse of \(\mathcal {A}\), we deduce the invertibility of \(\mathcal {R}_1{*_m}\mathcal {P}^*{*_m}\mathcal {A}{*_n}\mathcal {Q}_1\).

  2. (b), (c)

Using a similar technique as in part (b) of Theorem 3.9, parts (b) and (c) can be verified.

  3. (d)

    Since \(\mathcal {R}_1{*_m}\mathcal {P}^*=\mathcal {Q}_1^*{*_n}\mathcal {W}\), we have:

    $$\begin{aligned} \mathcal {Q}_1{*_s}(\mathcal {R}_1{*_m}\mathcal {P}^*{*_m}\mathcal {A}{*_n}\mathcal {Q}_1)^{-1}{*_s}\mathcal {R}_1{*_m}\mathcal {P}^*= \mathcal {Q}_1{*_s}(\mathcal {Q}_1^*{*_n}\mathcal {W}{*_m}\mathcal {A}{*_n}\mathcal {Q}_1)^{-1}{*_s}\mathcal {Q}_1^*{*_n}\mathcal {W}. \end{aligned}$$
    (3.7)

    Substituting Eq. (3.7) in part (b), we get: \(\mathcal {A}_{\mathscr {R}(\mathcal {W}),{\mathscr {N}}(\mathcal {W})}^{(2)} = \mathcal {Q}_1{*_s}(\mathcal {Q}_1^*{*_n}\mathcal {W}{*_m}\mathcal {A}{*_n}\mathcal {Q}_1)^{-1}{*_s}\mathcal {Q}_1^*{*_n}\mathcal {W}\).

  4. (e)

The proof of this part follows from part (a) of Proposition 3.12.

\(\square \)
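The QR-based representation of part (b) translates directly into code. In the sketch below (NumPy; shapes and seed are illustrative), \(\mathcal {W}\) is built with reshaping rank \(2\), and we take \(\mathcal {P}=\mathcal {I}\) because, for this generic \(\mathcal {W}\), the unpivoted QR of the matricization is already rank-revealing; in general a column-pivoted QR (e.g., `scipy.linalg.qr` with `pivoting=True`) should be used:

```python
import numpy as np

ein = lambda A, B, k: np.tensordot(A, B, axes=k)   # Einstein product *_k

rng = np.random.default_rng(4)
A = rng.standard_normal((2, 2, 2, 2))              # A in R^{M(2) x N(2)}
# W in R^{N(2) x M(2)} with rshrank(W) = 2, built from a rank-2 matricization
Wm = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 4))
W = Wm.reshape(2, 2, 2, 2)

# QR step on the matricization (P = I here; see the remark above)
Qm, Rm = np.linalg.qr(Wm)
s = 2
Q1 = Qm[:, :s].reshape(2, 2, s)                    # Q1 in R^{N(2) x S(1)}
R1 = Rm[:s, :].reshape(s, 2, 2)                    # R1 *_2 P^* in R^{S(1) x M(2)}
assert np.allclose(ein(Q1, R1, 1), W)              # full-rank factorization of W

# Outer inverse X = Q1 *_1 (R1 P^* *_2 A *_2 Q1)^{-1} *_1 R1 P^*  (part (b))
core = ein(ein(R1, A, 2), Q1, 2)                   # s x s core
X = ein(ein(Q1, np.linalg.inv(core), 1), R1, 1)
assert np.allclose(ein(ein(X, A, 2), X, 2), X)     # {2}-inverse property
```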

Suppose that the \(\mathcal {Q}{*_n}\mathcal {R}\) factorization of \(\mathcal {W}\) is of the form:

$$\begin{aligned} \mathcal {W}{*_m}\mathcal {P}=\mathcal {Q}{*_n}\mathcal {R}, \end{aligned}$$
(3.8)

where \(\mathcal {P}\), \(\mathcal {Q}\), and \(\mathcal {R}\) are the same as in Theorem 3.13. Then, using Proposition 3.12 and Theorem 3.7, we get the following characterization of various outer generalized inverses.

Corollary 3.14

Let \(\mathcal {A}\in \mathbb C^\mathbf{M (m)\times \mathbf{N} (n)}\) and \(\mathcal {W}\in \mathbb C^\mathbf{N (n)\times \mathbf{M} (m)}\) satisfy \(\text{ rshrank }(\mathcal {A})=\mathbf{R} (r)\) and \(\text{ rshrank }(\mathcal {W})=\mathbf{S} (s)\), where \(\mathbf{S} (s)\le \mathbf{R} (r)\). Suppose that the full-rank representation of \(\mathcal {W}\):

$$\begin{aligned} \mathcal {W}=\mathcal {Q}_1{*_s}(\mathcal {R}_1{*_m}\mathcal {P}^*), \end{aligned}$$
(3.9)

arises from its \(\mathcal {Q}{*_n}\mathcal {R}\) decomposition (3.8). Then, the following statements are valid:

$$\begin{aligned} \mathcal {A}_{\mathscr {R}(\mathcal {Q}_1),{\mathscr {N}}(\mathcal {R}_1{*_m}\mathcal {P}^*)}^{(2)} = \left\{ \begin{array}{ll} \mathcal {A}^\dagger , &{} \mathcal {W}=\mathcal {A}^*;\\ \mathcal {A}^\dagger _{\mathcal {M},\mathcal {N}}, &{} \mathcal {W}=\mathcal {A}^{\sharp };\\ \mathcal {A}^\#, &{} \mathcal {W}=\mathcal {A};\\ \mathcal {A}^D, &{} \mathcal {W}=\mathcal {A}^k,\ k\ge \mathrm {ind}(\mathcal {A}). \end{array} \right. \end{aligned}$$
(3.10)

Using the uniqueness of the outer inverse (with specified null and range space) along with Theorems 3.13 (b) and 3.9 (b), we conclude the following corollary.

Corollary 3.15

Let all assumptions of Theorem 3.13 be retained and \(\mathcal {W}=\mathcal {F}{*_s}\mathcal {G}\) be a full-rank decomposition of \(\mathcal {W}\). Then:

$$\begin{aligned} \mathcal {Q}_1{*_s}(\mathcal {R}_1{*_m}\mathcal {P}^*{*_m}\mathcal {A}{*_n}\mathcal {Q}_1)^{-1}{*_s}\mathcal {R}_1{*_m}\mathcal {P}^*=\mathcal {F}{*_s}(\mathcal {G}{*_m}\mathcal {A}{*_n}\mathcal {F})^{-1}{*_s}\mathcal {G}. \end{aligned}$$
(3.11)

Lemma 3.16

Consider \(\mathcal {A}\in \mathbb C^\mathbf{M (m)\times \mathbf{N} (n)}\) satisfying \(\text{ rshrank }(\mathcal {A})=\mathbf{S} (s)\). If \(\mathcal {A} = \mathcal {F}_1{*_s}\mathcal {G}_1 = \mathcal {F}_2{*_s}\mathcal {G}_2\) are two full-rank factorizations of \(\mathcal {A}\), then there exists an invertible tensor \(\mathcal {T}\in \mathbb C^\mathbf{S (s)\times \mathbf{S} (s)}\), such that \(\mathcal {F}_2 = \mathcal {F}_1{*_s}\mathcal {T}\) and \(\mathcal {G}_2 = \mathcal {T}^{-1}{*_s}\mathcal {G}_1\).

Proof

Let \(\mathcal {A} = \mathcal {F}_1{*_s}\mathcal {G}_1 = \mathcal {F}_2{*_s}\mathcal {G}_2\) be two full-rank factorizations of \(\mathcal {A}\). Then, \(\mathcal {F}_i^\dagger \) is a left inverse of \(\mathcal {F}_i\) and \(\mathcal {G}_i^\dagger \) is a right inverse of \(\mathcal {G}_i\), \(i=1,2.\) Since \(\mathcal {F}_2{*_s}\mathcal {G}_2=\mathcal {F}_1{*_s}\mathcal {G}_1\), pre-multiplying by \(\mathcal {F}_2^\dagger \), we obtain \(\mathcal {G}_2=\mathcal {F}_2^\dagger {*_m}\mathcal {F}_1{*_s}\mathcal {G}_1\). Now:

$$\begin{aligned} \mathbf{S} (s)=\text{ rshrank }(\mathcal {G}_2)=\text{ rshrank }(\mathcal {F}_2^{\dagger }{*_m}\mathcal {F}_1{*_s}\mathcal {G}_1)\le \text{ rshrank }(\mathcal {F}_2^{\dagger }{*_m}\mathcal {F}_1)\le \mathbf{S} (s). \end{aligned}$$

Therefore, \(\mathcal {F}_2^\dagger {*_m}\mathcal {F}_1\) is invertible. Similarly, we can show \(\mathcal {G}_1{*_n}\mathcal {G}_2^\dagger \) is invertible. Let \(\mathcal {S} =\mathcal {F}_2^\dagger {*_m}\mathcal {F}_1\) and \(\mathcal {T}=\mathcal {G}_1{*_n}\mathcal {G}_2^{\dagger }\). Then:

$$\begin{aligned} \mathcal {S}{*_s}\mathcal {T} = \mathcal {F}_2^\dagger {*_m}\mathcal {F}_1{*_s}\mathcal {G}_1{*_n}\mathcal {G}_2^\dagger =\mathcal {F}_2^\dagger {*_m}\mathcal {A}{*_n}\mathcal {G}_2^\dagger =\mathcal {F}_2^\dagger {*_m}\mathcal {F}_2{*_s}\mathcal {G}_2{*_n}\mathcal {G}_2^\dagger =\mathcal {I}. \end{aligned}$$

Thus, \(\mathcal {S}=\mathcal {T}^{-1}\) with \(\mathcal {G}_2 = \mathcal {S}{*_s}\mathcal {G}_1 =\mathcal {T}^{-1}{*_s}\mathcal {G}_1 \text{ and } \mathcal {F}_2 = \mathcal {F}_1{*_s}\mathcal {G}_1{*_n}\mathcal {G}_2^\dagger = \mathcal {F}_1{*_s}\mathcal {T}.\)\(\square \)

Corollary 3.17

Let \(\mathcal {W}=\mathcal {F}{*_s}\mathcal {G}\) be an arbitrary full-rank factorization of \(\mathcal {W}\) in accordance with the conditions of Theorem 3.9. If \(\mathcal {W}=\mathcal {Q}_1{*_s}(\mathcal {R}_1{*_m}\mathcal {P}^*)\) is another full-rank factorization of \(\mathcal {W}\) satisfying the assumptions of Theorem 3.13, then:

$$\begin{aligned} \mathcal {F}=\mathcal {Q}_1{*_s}\mathcal {T},\ \ \mathcal {G}=\mathcal {T}^{-1}{*_s}\mathcal {R}_1{*_m}\mathcal {P}^*, \end{aligned}$$
(3.12)

where \(\mathcal {T}=\mathcal {Q}_1^*{*_n}\mathcal {F}\).

In view of Corollary 3.15, if \(\mathcal {W}\) has two different full-rank representations, \(\mathcal {W}=\mathcal {F}{*_s}\mathcal {G}\) and \(\mathcal {W}=\mathcal {Q}_1{*_s}(\mathcal {R}_1{*_m}\mathcal {P}^*)\), then we have:

$$\begin{aligned} \mathcal {A}_{\mathscr {R}(\mathcal {W}),{\mathscr {N}}(\mathcal {W})}^{(2)}=\mathcal {Q}_1{*_s}(\mathcal {R}_1{*_m}\mathcal {P}^*{*_m}\mathcal {A}{*_n}\mathcal {Q}_1)^{-1}{*_s}\mathcal {R}_1{*_m}\mathcal {P}^*=\mathcal {F}{*_s}(\mathcal {G}{*_m}\mathcal {A}{*_n}\mathcal {F})^{-1}{*_s}\mathcal {G}. \end{aligned}$$

To avoid the high complexity involved in tensor multiplication while computing \(\mathcal {A}_{\mathscr {R}(\mathcal {W}),{\mathscr {N}}(\mathcal {W})}^{(2)}\), we can follow the following alternative approaches. For \(\mathfrak {M}<\mathfrak {N}\), we compute the tensor \(\mathcal {X}=(\mathcal {R}_1{*_m}\mathcal {P}^*{*_m}\mathcal {A}{*_n}\mathcal {Q}_1)^{-1}{*_s}\mathcal {R}_1{*_m}\mathcal {P}^*\) by solving the multilinear system:

$$\begin{aligned} \mathcal {R}_1{*_m}\mathcal {P}^*{*_m}\mathcal {A}{*_n}\mathcal {Q}_1{*_s}\mathcal {X} = \mathcal {R}_1{*_m}\mathcal {P}^*. \end{aligned}$$
(3.13)

Similarly, for \(\mathfrak {N}<\mathfrak {M}\), we solve the following multilinear system:

$$\begin{aligned} \mathcal {Q}_1^*{*_n}\mathcal {W}{*_m}\mathcal {A}{*_n}\mathcal {Q}_1{*_s}\mathcal {X} = \mathcal {Q}_1^*{*_n}\mathcal {W}. \end{aligned}$$
(3.14)

Obtaining \(\mathcal {X}\) from Eq. (3.13) or (3.14), we compute the outer inverse as follows:

$$\begin{aligned} \mathcal {A}_{\mathscr {R}(\mathcal {Q}_1),{\mathscr {N}}(\mathcal {R}_1{*_m}\mathcal {P}^*)}^{(2)}=\mathcal {A}_{\mathscr {R}(\mathcal {Q}_1),{\mathscr {N}}(\mathcal {Q}_1^*{*_n}\mathcal {W})}^{(2)} = \mathcal {Q}_1{*_s}\mathcal {X}. \end{aligned}$$
(3.15)
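A small sketch of this solve-based approach is given below (NumPy; the factors are generic stand-ins for \(\mathcal {Q}_1\) and \(\mathcal {R}_1{*_m}\mathcal {P}^*\), with shapes and seed chosen only to illustrate Eqs. (3.13) and (3.15)). The point is that `np.linalg.solve` on the matricized core replaces the explicit inverse:

```python
import numpy as np

ein = lambda A, B, k: np.tensordot(A, B, axes=k)   # Einstein product *_k

rng = np.random.default_rng(5)
A   = rng.standard_normal((2, 2, 2, 2))   # A in R^{M(2) x N(2)}
Q1  = rng.standard_normal((2, 2, 2))      # Q1 in R^{N(2) x S(1)}, S = (2,)
R1P = rng.standard_normal((2, 2, 2))      # R1 *_2 P^* in R^{S(1) x M(2)}

# Left-hand core and right-hand side of the multilinear system (3.13)
core = ein(ein(R1P, A, 2), Q1, 2)         # (R1 P^*) *_2 A *_2 Q1, s x s
rhs  = R1P.reshape(2, -1)                 # matricized right-hand side

# Solve core * X = R1 P^* instead of forming core^{-1} explicitly
Xsol = np.linalg.solve(core, rhs).reshape(2, 2, 2)
Xinv = ein(np.linalg.inv(core), R1P, 1)
assert np.allclose(Xsol, Xinv)            # same X, without the explicit inverse

outer = ein(Q1, Xsol, 1)                  # Q1 *_1 X, the outer inverse (3.15)
assert np.allclose(ein(ein(outer, A, 2), outer, 2), outer)
```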

Corollary 3.18

Let \(\mathcal {A}\in \mathbb {C}^\mathbf{M (m)\times \mathbf{N} (n)}\) satisfy \(\text{ rshrank }(\mathcal {A})=\prod _{i=1}^r R_i\), \(\prod _{i=1}^s S_i\le \prod _{i=1}^r R_i\), and let the tensors \(\mathcal {F},~\mathcal {G}\) be chosen as in Proposition 3.12. Suppose \(\mathcal {W}\in \mathbb {C}^\mathbf{N (n)\times \mathbf{M} (m)}\) is an arbitrary tensor with \(\text{ rshrank }(\mathcal {W})=\prod _{i=1}^s S_i\).

  1. (a)

    If (3.9) is a full-rank factorization of \(\mathcal {W}=(\mathcal {G}{*_m}\mathcal {A})^*{*_s}\mathcal {G}\), then:

    $$\begin{aligned} \mathcal {A}_{\mathscr {R}(\mathcal {Q}_1),{\mathscr {N}}(\mathcal {R}_1{*_m}\mathcal {P}^*)}^{(2,4)} = \mathcal {Q}_1{*_s}(\mathcal {R}_1{*_m}\mathcal {P}^*{*_m}\mathcal {A}{*_n}\mathcal {Q}_1)^{-1}{*_s}\mathcal {R}_1{*_m}\mathcal {P}^*=(\mathcal {G}{*_m}\mathcal {A})^\dagger {*_s}\mathcal {G}. \end{aligned}$$
    (3.16)
  (b)

    If (3.9) is a full-rank factorization of \(\mathcal {W}=\mathcal {F}{*_s}(\mathcal {A}{*_n}\mathcal {F})^* \), then:

    $$\begin{aligned} \mathcal {A}_{\mathscr {R}(\mathcal {Q}_1),{\mathscr {N}}(\mathcal {R}_1{*_m}\mathcal {P}^*)}^{(2,3)} = \mathcal {Q}_1{*_s}(\mathcal {R}_1{*_m}\mathcal {P}^*{*_m}\mathcal {A}{*_n}\mathcal {Q}_1)^{-1}{*_s}\mathcal {R}_1{*_m}\mathcal {P}^*=\mathcal {F}{*_s}(\mathcal {A}{*_n}\mathcal {F})^\dagger . \end{aligned}$$
    (3.17)

Proof

  (a)

    In view of Proposition 3.12 and Corollary 3.15, the result follows from:

    $$\begin{aligned} \mathcal {Q}_1{*_s}(\mathcal {R}_1{*_m}\mathcal {P}^*{*_m}\mathcal {A}{*_n}\mathcal {Q}_1)^{-1}{*_s}\mathcal {R}_1{*_m}\mathcal {P}^*= & {} (\mathcal {G}{*_m}\mathcal {A})^*{*_s}(\mathcal {G}{*_m}\mathcal {A}{*_n}(\mathcal {G}{*_m}\mathcal {A})^*)^{-1}{*_s}\mathcal {G}\\= & {} (\mathcal {G}{*_m}\mathcal {A})^\dagger {*_s}\mathcal {G}. \end{aligned}$$
  (b)

    This part of the proof can be verified similarly.

\(\square \)
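Part (a) of the corollary expresses the \(\{2,4\}\)-inverse as \((\mathcal {G}{*_m}\mathcal {A})^\dagger {*_s}\mathcal {G}\). In unfolded (matrix) form this is easy to confirm numerically; below is a sketch with random stand-ins for the unfolded \(\mathcal {A}\) and \(\mathcal {G}\) (not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((6, 4))          # stand-in for the unfolded A
G = rng.random((2, 6))          # stand-in for the unfolded G (full row rank a.s.)
X = np.linalg.pinv(G @ A) @ G   # candidate {2,4}-inverse of A
assert np.allclose(X @ A @ X, X)        # condition (2): outer inverse
assert np.allclose((X @ A).T, X @ A)    # condition (4): X A is symmetric
```

The checks hold by construction: `X @ A` equals `pinv(G @ A) @ (G @ A)`, which is a symmetric projector, and the outer-inverse identity follows from the Penrose property of the pseudoinverse.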

Utilizing the QR decomposition, the computation of the outer inverse \(\mathcal {A}_{\mathcal {T},\mathcal {S}}^{(2)}\) of a given tensor \(\mathcal {A}\) is elaborated in Algorithm 1.

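In matrix-unfolding language, the QR-based computation of Algorithm 1 can be prototyped roughly as follows. This is a sketch under stated assumptions: `Am` and `Wm` are the unfolded tensors, `r` is the known rank of \(\mathcal {W}\), SciPy's column-pivoted QR plays the role of the tensor QR decomposition, and the function name `outer_inverse` is ours:

```python
import numpy as np
from scipy.linalg import qr

def outer_inverse(Am, Wm, r):
    """Sketch: A^{(2)}_{R(W),N(W)} from a pivoted QR of the unfolded W."""
    Q, R, piv = qr(Wm, pivoting=True)        # Wm[:, piv] = Q @ R
    Q1, R1 = Q[:, :r], R[:r, :]
    P = np.eye(Wm.shape[1])[:, piv]          # permutation: Wm = Q1 @ R1 @ P.T
    core = R1 @ P.T @ Am @ Q1                # must be nonsingular for existence
    return Q1 @ np.linalg.solve(core, R1 @ P.T)

rng = np.random.default_rng(3)
Am = rng.random((6, 4))                        # stand-in for the unfolded A
Wm = rng.random((4, 2)) @ rng.random((2, 6))   # rank-2 stand-in for the unfolded W
X = outer_inverse(Am, Wm, r=2)
```

Column pivoting ensures that the truncated factors \(\mathcal {Q}_1,\mathcal {R}_1\) capture the rank-\(r\) part of \(\mathcal {W}\), so `X` satisfies both the outer-inverse identity and the range/null space conditions inherited from \(\mathcal {W}\).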

4 Numerical examples and applications

This section is aimed at the numerical verification of Algorithm 1. In addition, an application of tensor generalized inverses to image deblurring is presented.

4.1 Examples

Example 4.1

Let \(\mathcal {A} = (a_{ijkl})_{{1 \le i\le 3, 1\le j,k,l \le 2}} \in \mathbb {R}^{3\times 2\times 2\times 2}\) with:

$$\begin{aligned} a_{ij11} = \begin{pmatrix} 1 &{} 3\\ 1 &{} 4\\ 2 &{} 6\\ \end{pmatrix},~ a_{ij21} = \begin{pmatrix} 2 &{} 4\\ 3 &{} 5\\ 3 &{} 6\\ \end{pmatrix},~ a_{ij12} = \begin{pmatrix} 3 &{} 5\\ 4 &{} 6\\ 4 &{} 7\\ \end{pmatrix},~ a_{ij22} = \begin{pmatrix} 4 &{} 6\\ 6 &{} 7\\ 5 &{} 7\\ \end{pmatrix}. \end{aligned}$$

Choose the tensors \(\mathcal {F} = (f_{ijk}) \in \mathbb {R}^{2\times 2\times 2}\) and \(\mathcal {G} = (g_{ijk}) \in \mathbb {R}^{ 2\times 3\times 2}\), such that:

$$\begin{aligned}&f_{ij1} = \begin{pmatrix} 1 &{} 0\\ 2 &{} 0\\ \end{pmatrix},~ f_{ij2} = \begin{pmatrix} 1 &{} 0\\ 1 &{} 0\\ \end{pmatrix},~ g_{ij1} = \begin{pmatrix} 1 &{} 0 &{} 0\\ 0 &{} 0 &{} 1 \end{pmatrix},~ g_{ij2} = \begin{pmatrix} 1 &{} -4 &{} 1\\ -1 &{} 8 &{} -1 \end{pmatrix}. \end{aligned}$$

Consider the tensor \(\mathcal {W}\) satisfying \(\mathcal {W}=\mathcal {F}{*_1}\mathcal {G}\). The tensors \(\mathcal {Q}_1,\mathcal {R}_1\), and \(\mathcal {P}\), as defined in Eq. (3.6), are, respectively, given by:

$$\begin{aligned} \mathcal {Q}_1(:,:,1)= & {} \begin{pmatrix} \frac{1}{\sqrt{2}} &{} 0\\ -\frac{1}{\sqrt{2}}&{} 0\\ \end{pmatrix},~ \mathcal {Q}_1(:,:,2) = \begin{pmatrix} \frac{1}{\sqrt{2}} &{} 0\\ \frac{1}{\sqrt{2}} &{} 0\\ \end{pmatrix},~\\ \mathcal {R}_1(:,:,1)= & {} \begin{pmatrix} -\frac{1}{\sqrt{2}} &{} 0 &{} 0\\ \frac{3}{\sqrt{2}} &{} 0 &{} 1 \end{pmatrix},~ \mathcal {R}_1(:,:,2) = \begin{pmatrix} 0 &{} -\frac{1}{\sqrt{2}}\\ \sqrt{2} &{} \frac{1}{\sqrt{2}} \end{pmatrix}, \end{aligned}$$

and \(\mathcal {P}\) is the identity tensor of order \(3\times 2\times 3\times 2\). From Theorem 3.13 (b) and (c), \(\mathcal {A}^{(2)}_{\mathscr {R}(\mathcal {W}),{\mathscr {N}}(\mathcal {W})} = (x_{ijkl})\in \mathbb {R}^{2\times 2\times 3 \times 2}\) is equal to:

$$\begin{aligned} x_{ij11}= & {} \begin{pmatrix} 33/2 &{} 0\\ -25/2 &{} 0\\ \end{pmatrix},~ x_{ij21} = \begin{pmatrix} 0 &{} 0\\ 0 &{} 0\\ \end{pmatrix},~ x_{ij31} = \begin{pmatrix} 4 &{} 0\\ -3 &{} 0\\ \end{pmatrix},\\ x_{ij12}= & {} \begin{pmatrix} 25/2 &{} 0\\ -19/2 &{} 0\\ \end{pmatrix},~ x_{ij22} = \begin{pmatrix} -34 &{} 0\\ 26 &{} 0\\ \end{pmatrix}, x_{ij32} = \begin{pmatrix} 25/2 &{} 0\\ -19/2 &{} 0\\ \end{pmatrix}. \end{aligned}$$
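Because the Einstein product corresponds to ordinary matrix multiplication of row-major unfoldings, the outer-inverse identity \(\mathcal {X}{*_2}\mathcal {A}{*_2}\mathcal {X}=\mathcal {X}\) for this example can be verified with a few lines of NumPy (a sanity-check sketch, not part of the original example):

```python
import numpy as np

# Tensors of Example 4.1: A in R^{3x2x2x2}, X in R^{2x2x3x2}.
A = np.zeros((3, 2, 2, 2))
A[:, :, 0, 0] = [[1, 3], [1, 4], [2, 6]]
A[:, :, 1, 0] = [[2, 4], [3, 5], [3, 6]]
A[:, :, 0, 1] = [[3, 5], [4, 6], [4, 7]]
A[:, :, 1, 1] = [[4, 6], [6, 7], [5, 7]]
X = np.zeros((2, 2, 3, 2))
X[:, :, 0, 0] = [[33/2, 0], [-25/2, 0]]
X[:, :, 2, 0] = [[4, 0], [-3, 0]]
X[:, :, 0, 1] = [[25/2, 0], [-19/2, 0]]
X[:, :, 1, 1] = [[-34, 0], [26, 0]]
X[:, :, 2, 1] = [[25/2, 0], [-19/2, 0]]
# Row-major unfoldings: rows group the first two modes, columns the last two.
Am, Xm = A.reshape(6, 4), X.reshape(4, 6)
# The identity X *_2 A *_2 X = X becomes Xm @ Am @ Xm = Xm.
print(np.allclose(Xm @ Am @ Xm, Xm))  # True
```

The zero slice \(x_{ij21}\) is left implicit, since `X` is initialized to zeros.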

Example 4.2

Consider the tensor \(\mathcal {A}\) as in Example 4.1 and \(\mathcal {W} = \mathcal {A}^{\mathrm T}\in \mathbb {R}^{2\times 2\times 3 \times 2}\). The tensors \(\mathcal {Q}_1 \in \mathbb {R}^{2\times 2\times 3}, \mathcal {R}_1 \in \mathbb {R}^{3\times 3\times 2}\) and \(\mathcal {P} \in \mathbb {R}^{3\times 2\times 3 \times 2}\) satisfying (3.6) are given by:

$$\begin{aligned} \mathcal {Q}_1(:,:,1)= & {} \begin{pmatrix} -2057/4470 &{} -677/1261\\ -2057/4470 &{} -677/1261 \end{pmatrix},~ \mathcal {Q}_1(:,:,2) = \begin{pmatrix} 265/362 &{} -155/3792 \\ 91/907 &{} -265/394 \end{pmatrix},\\ \mathcal {Q}_1(:,:,3)= & {} \begin{pmatrix} -161/3323 &{} -719/1060 \\ 508/699 &{} 347/3581 \end{pmatrix},~\\ \mathcal {R}_1(:,:,1)= & {} \begin{pmatrix} -8827/677 &{} -3958/549 &{} -1927/375 \\ 0 &{} -3147/994 &{} -1916/1019 \\ 0 &{} 0 &{} -508/2097 \end{pmatrix},~\\ \mathcal {R}_1(:,:,2)= & {} \begin{pmatrix} -4244/465 &{} -3125/281 &{} -1719/241 \\ -1741/1060 &{} -1359/892 &{} -155/88 \\ -161/3323 &{} 161/3323 &{} -508/3495 \end{pmatrix}, \end{aligned}$$

and \(\mathcal {P}\) is the identity tensor of order \(3\times 2\times 3\times 2\). Using Theorem 3.13 (b) and (c), we obtain the outer inverse \(\mathcal {A}^{(2)}_{\mathscr {R}(\mathcal {W}),{\mathscr {N}}(\mathcal {W})} = (x_{ijkl}) \in \mathbb {R}^{2\times 2\times 3 \times 2}\), where:

$$\begin{aligned} x_{ij11}= & {} \begin{pmatrix} 11/76 &{} 2\\ -163/76 &{} -11/38 \end{pmatrix},~~ x_{ij21} = \begin{pmatrix} -23/76 &{} -5/4 \\ 99/76 &{} 27/76 \end{pmatrix},~~ x_{ij31} = \begin{pmatrix} 5/76 &{} 1\\ -81/76 &{} -5/38 \end{pmatrix},\\ x_{ij12}= & {} \begin{pmatrix} -1/76 &{} 0\\ 1/76 &{} 1/38 \end{pmatrix},~~ x_{ij22} = \begin{pmatrix} -7/76 &{} -1 \\ 83/76 &{} 7/38 \end{pmatrix},~~ x_{ij32} = \begin{pmatrix} 15/76 &{} 1/4\\ -15/76 &{} -11/76 \end{pmatrix}. \end{aligned}$$

By Corollary 3.14, we obtain the Moore–Penrose inverse \(\mathcal {A}^\dagger =\mathcal {A}^{(2)}_{\mathscr {R}(\mathcal {W}),{\mathscr {N}}(\mathcal {W})}\).
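Since the algebra of tensors under the Einstein product is isomorphic to matrix algebra under row-major unfolding, the tensor Moore–Penrose inverse corresponds to the matrix Moore–Penrose inverse of the unfolded \(\mathcal {A}\). A quick numerical check of the four Penrose conditions for the unfolding of the tensor \(\mathcal {A}\) from Example 4.1 (a sketch using `np.linalg.pinv`, not the exact entries reported above):

```python
import numpy as np

# Row-major unfolding of A from Example 4.1: rows indexed by (i,j), columns by (k,l).
Am = np.array([[1, 3, 2, 4],
               [3, 5, 4, 6],
               [1, 4, 3, 6],
               [4, 6, 5, 7],
               [2, 4, 3, 5],
               [6, 7, 6, 7]], dtype=float)
P = np.linalg.pinv(Am)
assert np.allclose(Am @ P @ Am, Am)      # (1)
assert np.allclose(P @ Am @ P, P)        # (2)
assert np.allclose((Am @ P).T, Am @ P)   # (3)
assert np.allclose((P @ Am).T, P @ Am)   # (4)
```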

Example 4.3

Consider \(\mathcal {A} = (a_{ijkl})_{{1 \le i,k\le 3, 1\le j,l \le 2}} \in \mathbb {R}^{3\times 2\times 3\times 2}\) defined as:

$$\begin{aligned} a_{ij11} = \begin{pmatrix} 1 &{} 0\\ 0 &{} 0\\ 0 &{} 1 \end{pmatrix},~ a_{ij21} = \begin{pmatrix} 2 &{} 1\\ 0 &{} 0\\ 1 &{} 0\\ \end{pmatrix},~ a_{ij31} = \begin{pmatrix} 0 &{} 0\\ 1 &{} 0\\ 0 &{} 0\\ \end{pmatrix},~\\ a_{ij12} = \begin{pmatrix} 0 &{} 0\\ 0 &{} 0\\ 0 &{} 0\\ \end{pmatrix}, a_{ij22} = \begin{pmatrix} 1 &{} 0\\ 1 &{} 2\\ 1 &{} 1\\ \end{pmatrix},~ a_{ij32} = \begin{pmatrix} 0 &{} 0\\ 0 &{} 0\\ 0 &{} 0\\ \end{pmatrix}, \end{aligned}$$

and \(\mathcal {W} = \mathcal {A}\). The tensors \(\mathcal {Q}_1 \in \mathbb {R}^{3 \times 2\times 2\times 2}, \mathcal {R}_1 \in \mathbb {R}^{2 \times 2\times 3\times 2}\) and \(\mathcal {P} \in \mathbb {R}^{3\times 2\times 3 \times 2}\) as defined in (3.6) are equal to:

$$\begin{aligned} \mathcal {Q}_1(:,:,1,1)= & {} \begin{pmatrix} -1189/3363 &{} 0 \\ -1189/3363 &{} -985/1393 \\ -1189/3363 &{} -1189/3363 \end{pmatrix},~ \mathcal {Q}_1(:,:,1,2)= \begin{pmatrix} 523/1704 &{} -433/1834 \\ -523/3692 &{} -523/1846\\ -1046/2769 &{} 194/249 \end{pmatrix},~\\ \mathcal {Q}_1(:,:,2,1)= & {} \begin{pmatrix} 1483/2015 &{} 529/1168 \\ -954/5617 &{} -1908/5617\\ 1590/5617 &{} -954/5617 \end{pmatrix},~ \mathcal {Q}_1(:,:,2,2) = \begin{pmatrix} -220/4599 &{} -220/4599 \\ -419/461 &{} 1760/4599 \\ 220/1533 &{} 220/4599 \end{pmatrix},\\ \mathcal {R}_1(:,:,1,1)= & {} \begin{pmatrix} -3363/1189 &{} 0\\ 0 &{} 0 \end{pmatrix},~ \mathcal {R}_1(:,:,2,1) = \begin{pmatrix} -1189/1121 &{} 0 \\ 1168/529 &{} 0 \end{pmatrix},\\ \mathcal {R}_1(:,:,3,1)= & {} \begin{pmatrix} -985/1393 &{} 568/523 \\ 3180/5617 &{} 0 \end{pmatrix},\\ \mathcal {R}_1(:,:,1,2)= & {} \begin{pmatrix} -1189/3363 &{} -523/3692 \\ -954/5617 &{} -419/461 \end{pmatrix},~ \mathcal {R}_1(:,:,2,2) = \begin{pmatrix} 0 &{} 0\\ 0 &{} 0\\ \end{pmatrix}, \mathcal {R}_1(:,:,3,2) = \begin{pmatrix} 0 &{} 0\\ 0 &{} 0\\ \end{pmatrix}, \end{aligned}$$

and \(\mathcal {P}\) is the identity tensor of order \(3\times 2\times 3\times 2\). From Theorem 3.13 (b) and (c), we obtain \(\mathcal {A}^{(2)}_{\mathscr {R}(\mathcal {W}),{\mathscr {N}}(\mathcal {W})}=(x_{ijkl})\in \mathbb {R}^{3\times 2\times 3 \times 2}\), where:

$$\begin{aligned} x_{ij11} = \begin{pmatrix} 1 &{} 0\\ 0 &{} 0\\ 0 &{} 1\\ \end{pmatrix},~ x_{ij21} = \begin{pmatrix} 0 &{} 1\\ 0 &{} 0\\ 1 &{} -2\\ \end{pmatrix},~ x_{ij31} = \begin{pmatrix} -2 &{} 0\\ 1 &{} 0\\ 0 &{} -2\\ \end{pmatrix},~\\ x_{ij12} = \begin{pmatrix} 0 &{} 0\\ 0 &{} 0\\ 0 &{} 0\\ \end{pmatrix},~ x_{ij22} = \begin{pmatrix} 1/2 &{} -3/4\\ -1/2 &{} 1/2\\ -1/2 &{} 2\\ \end{pmatrix},~ x_{ij32} = \begin{pmatrix} 0 &{} 0\\ 0 &{} 0\\ 0 &{} 0\\ \end{pmatrix}. \end{aligned}$$

Applying Corollary 3.14, one can observe that \(\mathcal {A}^{\#}=\mathcal {A}^{(2)}_{\mathscr {R}(\mathcal {W}),{\mathscr {N}}(\mathcal {W})}\).

4.2 Image deblurring

The discrete model for two-dimensional (2D) image blurring is represented as:

$$\begin{aligned} Ax = b, \end{aligned}$$
(4.1)

where A represents the blurring matrix of some specific structure; for instance, A may be a banded, Toeplitz, or block-Toeplitz matrix (Calvetti et al. 1999; Kilmer and Martin 2011). Here, x is the true image and b is the blurred image. In practice, b is corrupted by noise. Moreover, the blurring matrix A is usually ill-conditioned, so even if A is theoretically invertible, the computed solution x will be contaminated by the noise. The formulation in (4.1) is used in many image restoration problems. For more information on image deblurring and restoration, see Andrews and Hunt (1977), Hansen et al. (2006), and Lagendijk and Biemond (2012).

The three-dimensional (3D) colour image blurring problem, which often occurs in medical or geographic imaging, can be written as a tensor equation:

$$\begin{aligned} \mathcal {A}*_1\mathcal {X}=\mathcal {B}, \end{aligned}$$
(4.2)

where \(\mathcal {A}\) is the known blurring tensor, and \(\mathcal {X}\) and \(\mathcal {B}\) are tensors representing the true image and the blurred image (the latter often corrupted by noise), respectively. The main objective in image restoration is to recover a blur-free image, which requires an approximate solution of the multilinear system (4.2). To find the approximate solution of the ill-posed system (4.2), several iterative methods, such as tensor forms of the global GMRES, MINRES, and SYMMLQ, are presented in Huang et al. (2019). Furthermore, a few iterative methods (LSQR and LSMR) have been discussed by the authors of Huang and Ma (2020).

Fig. 1
figure 1

a and d true images, b and e blurred noisy images, c and f reconstructed images

In our test examples, we consider two \(512\times 512\times 3 \) colour images and form the blurred image \(\mathcal {B}\) by computing \(\mathcal {A}*_1\mathcal {X}\), where \(\mathcal {X}\) is the true image. Then, Gaussian noise was added to \(\mathcal {B}\) with a noise level of 0.1 percent. The original images and the blurred noisy images are shown in Fig. 1a, d and b, e, respectively. We reconstructed the true image directly by the least-squares solution \(\mathcal {A}^\dagger *_{1}\mathcal {B}\), where the Moore–Penrose inverse is computed using Algorithm 1 with \(\mathcal {W}=\mathcal {A}^{\mathrm T}\). The resulting reconstructed images are displayed in Fig. 1c, f.
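A minimal toy version of this reconstruction pipeline can be sketched as follows; this is an illustration only (a synthetic random "image", a small banded blurring matrix acting along the first mode via \(*_1\), and reconstruction through the pseudoinverse), not the \(512\times 512\times 3\) experiment described above:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32
# Small banded (Toeplitz-like) blurring matrix; diagonally dominant, hence invertible.
A = 0.8 * np.eye(n) + 0.1 * np.eye(n, k=1) + 0.1 * np.eye(n, k=-1)
X = rng.random((n, n, 3))                      # synthetic "true" colour image
B = np.tensordot(A, X, axes=1)                 # blurred image, i.e. A *_1 X
B = B + 0.001 * rng.standard_normal(B.shape)   # additive Gaussian noise
# Least-squares reconstruction via the Moore-Penrose inverse: X_hat = A^dagger *_1 B.
X_hat = np.tensordot(np.linalg.pinv(A), B, axes=1)
rel_err = np.linalg.norm(X_hat - X) / np.linalg.norm(X)
```

With a well-conditioned blur and mild noise the relative error stays small; for ill-conditioned blurs, as discussed above, the noise is amplified and regularization or iterative methods become necessary.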

5 Conclusion

The tensor form of the QR decomposition was developed via the Einstein product. Several representations and characterizations of outer inverses with prescribed range and null space were developed and analysed. Appropriate algorithms were designed, and the computation of the outer inverse was explained with the help of the tensor QR decomposition. Subsequently, we discussed the relation of this outer inverse to other generalized inverses, such as the Moore–Penrose inverse, the group inverse, the Drazin inverse, and the weighted Moore–Penrose inverse. The generality of the developed algorithm is reflected in the fact that many other kinds of generalized inverses can be obtained as appropriate special cases. This gives us the flexibility to choose a generalized inverse depending on the application. In addition, we discussed an application to the reconstruction of blurred noisy images through the least-squares solution.