Abstract
In this paper, we introduce new representations and characterizations of the outer inverse of tensors through the QR decomposition. The derived representations can be used to generate corresponding representations of the main tensor generalized inverses. Some results on the reshape operation of a tensor are added to the existing theory. An effective algorithm for computing outer inverses of tensors is proposed and applied. The power of the proposed method is demonstrated by its application to 3D color image deblurring.
1 Introduction
The set of all order-n complex (real) tensors is denoted by \(\mathbb {C}^{I_1\times \cdots \times I_n}\) (resp. \(\mathbb {R}^{I_1\times \cdots \times I_n}\)). An element of a tensor is denoted by \(\mathcal {A}_{i_1...i_n}\). The sum of tensors \(\mathcal {A}, ~\mathcal {B}\in \mathbb {C}^{I_1\times \cdots \times I_n \times J_1 \times \cdots \times J_n }\) is defined as:
However, there is no unique way to define the product of two tensors. Different tensor products have been investigated frequently (Sun et al. 2016). This paper concentrates on one such product, the 'Einstein product' (see Einstein 1916; Lai et al. 2009). The Einstein product of \(\mathcal {A} \in \mathbb {C}^{I_1\times \cdots \times I_n \times K_1 \times \cdots \times K_n }\) and \(\mathcal {B} \in \mathbb {C}^{K_1\times \cdots \times K_n \times J_1 \times \cdots \times J_m}\) is denoted by \( \mathcal {A}{*_n}\mathcal {B}\) and is defined by:
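As a concrete illustration, the Einstein product can be realized numerically as a contraction of the last n modes of \(\mathcal {A}\) with the first n modes of \(\mathcal {B}\). The following NumPy sketch (the dimensions and the helper name `einstein_product` are illustrative choices, not taken from the paper) shows the case \(n=2\):

```python
import numpy as np

def einstein_product(A, B, n):
    """Einstein product A *_n B: contract the last n modes of A
    with the first n modes of B."""
    return np.tensordot(A, B, axes=n)

# Order-4 tensors: A in R^{2x3 x 3x2}, B in R^{3x2 x 2x2}
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3, 3, 2))
B = rng.standard_normal((3, 2, 2, 2))
C = einstein_product(A, B, 2)            # C has shape (2, 3, 2, 2)

# The same contraction written explicitly with einsum
C_check = np.einsum('ijkl,klmn->ijmn', A, B)
assert np.allclose(C, C_check)
```

The `np.tensordot(..., axes=n)` call sums over exactly the n shared modes, mirroring the summation in the definition above.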
Inverses and generalized inverses of matrices and tensors have significantly impacted many areas of theoretical and computational mathematics (Sun et al. 2018; Behera et al. 2019; Sahoo et al. 2020). Brazell et al. (2013) introduced the tensor inverse via the Einstein product. Then, Sun et al. (2016) and subsequently Behera and Mishra (2017) extended this notion to generalized inverses of tensors, since tensor decompositions are necessary to develop efficient algorithms for solving multilinear systems. Liang et al. (2019) discussed the Schur and the \(\mathcal {L}\, \mathcal {U}\) decompositions of tensors via the Einstein product. Also, Ji and Wei (2017) discussed the weighted Moore–Penrose inverse and least-squares solutions for a multilinear system. Thereafter, the authors of Behera et al. (2018) and Liang and Zheng (2019) added a few more representations through full-rank splittings of arbitrary tensors. Representations of the null space and range space via the Einstein product were discussed in Ji and Wei (2018) and Stanimirović et al. (2018). Recently, Stanimirović et al. (2018) investigated conditions for the existence of tensor outer inverses using a specific algebraic approach.
It is well known that outer inverses of matrices (Drazin 2012) have been making their presence felt in many pure and applied areas of mathematics, e.g., in iterative methods for solving singular operator equations (Nashed and Chen 1993), stable approximations of ill-posed problems and rank-deficient generalized inverses (Nashed 1976; Zheng and Bapat 2004), and statistics (Chen and Chen 1985). Several characterizations of the outer inverse can be found in Chen and Chen (2000), Sheng and Chen (2007), Wei (1998), and Wei and Zhang (2004). As tensors are generalizations of matrices and vectors, the study of generalized tensor inversion will initiate a multitude of different research topics. Many attractive mathematical properties of generalized tensor inverses motivate us to study outer inverses of tensors via the Einstein product. Well-known results on numerical methods for computing outer inverses are surveyed in the monographs (Wang et al. 2018; Wei et al. 2018). Additional results about the perturbation bounds for the Moore–Penrose inverse and about the Core, Core-EP, CMP, and DMP inverses of tensors via the Einstein product can be found in the references (Ding and Wei 2016; Jin et al. 2017; Ma 2018; Ma et al. 2019; Miao et al. 2020; Wang et al. 2020; Du et al. 2019; Xie et al. 2019; Wang and Wei 2017).
A summary of the main facets of this discussion is given in the following bullet points.
-
Further elaboration on reshape operation and characterizations of different generalized inverses of tensors.
-
A few representations of the outer inverse of tensors and its relationships with other generalized inverses, such as the group inverse, the Moore–Penrose inverse, the Drazin inverse, and the weighted Moore–Penrose inverse, are examined.
-
Computationally effective representations of generalized inverses of tensors are derived through the tensor QR decomposition, together with an effective algorithm for computing the outer inverses of tensors.
-
An application of the outer inverse of tensors is demonstrated in colour image deblurring.
An important element of this study is to establish the conditions under which the Moore–Penrose inverse, the weighted Moore–Penrose inverse, the group inverse, and the Drazin inverse coincide. The main advantage of this study is that the algorithm developed for the outer inverse of a tensor can be easily extended to many important classes of generalized inverses.
The global organization of the manuscript by sections can be described as follows. Section 2 describes preliminary information and provides an overview of notations and definitions. In Sect. 3, we discuss our main results in two parts. In the first part, a few results on reshape operations along with some characterizations of outer inverses are established. The computation of the outer inverse and other generalized inverses through the tensor QR decomposition is discussed in the second part. A few numerical examples and one application to image reconstruction are discussed in Sect. 4. Finally, we conclude the work in Sect. 5.
2 Preliminaries
To increase the efficiency of the presentation, we use some additional notations, which will be used from here onwards:
Using the above notation, the range and null space of a tensor are defined (Ji and Wei 2018; Stanimirović et al. 2018) as follows:
where \(\mathcal {A}\in \mathbb {C}^\mathbf{M (m) \times \mathbf{N} (n)}\) and \(\mathcal {O}\) denotes a zero tensor of proper dimensions. Using the facts about the range space of the tensor, Ji and Wei (2018) explored the index of a tensor, which is denoted by \(\mathrm {ind}(\mathcal {A})\), and it is defined as the smallest non-negative integer m, such that: \(\mathscr {R}(\mathcal {A}^m)=\mathscr {R}(\mathcal {A}^{m+1}).\)
Next, we present the definition of the tensor Moore–Penrose inverse, which was introduced in Sun et al. (2016). Let \(\mathcal {A}\in \mathbb {C}^\mathbf{N (n) \times \mathbf{N} (n)}\). The tensor \(\mathcal {Y}\in \mathbb {C}^\mathbf{N (n) \times \mathbf{N} (n)}\) satisfying:
is called the Moore–Penrose inverse of \(\mathcal {A}\) and is denoted by \(\mathcal {A}^\dagger \). Furthermore, we recall the definition of the outer inverse from Stanimirović et al. (2018).
Definition 2.1
A tensor \(\mathcal {Y}\) represents the outer inverse of \(\mathcal {A}\in \mathbb {C}^\mathbf{M (m) \times \mathbf{N} (n)}\) with prescribed range space \({\mathscr {R}}(\mathcal {B})\) and null space \({\mathscr {N}}(\mathcal {C})\) if it satisfies:
and it is denoted by \(\mathcal {A}^{(2)}_{{\mathscr {R}}(\mathcal {B}),{\mathscr {N}}({\mathcal {C}})}\).
Let \(S\subseteq \{1,2,3,4\}\). The set of tensors satisfying the tensor equations determined by the set S is denoted by \(\mathcal {A}\{S\}\). A tensor \(\mathcal {Y}\) from the set \(\mathcal {A}\{S\}\) satisfying \({\mathscr {R}}(\mathcal {Y})={\mathscr {R}}(\mathcal {B})\) (resp. \({\mathscr {N}}(\mathcal {Y}) = {\mathscr {N}}(\mathcal {C})\)) is denoted by \(\mathcal {A}^{(S)}_{{\mathscr {R}}(\mathcal {B}),*}\) (resp. \(\mathcal {A}^{(S)}_{*,{\mathscr {N}}(\mathcal {C})}\)). The weighted Moore–Penrose inverse of a tensor was introduced in Ji and Wei (2017).
Definition 2.2
(Ji and Wei 2017) Let \(\mathcal {A} \in \mathbb {C}^\mathbf{M (m) \times \mathbf{N} (n)} \) and \(\mathcal {Y}\in \mathbb {C}^\mathbf{N (n) \times \mathbf{M} (m)}\). Suppose \(\mathcal {M}\) and \(\mathcal {N}\) are Hermitian positive-definite tensors, such that \(\mathcal {M} \in \mathbb {C}^\mathbf{M (m) \times \mathbf{M} (m)}\) and \(\mathcal {N} \in \mathbb {C}^\mathbf{N (n) \times \mathbf{N} (n)}\). Then, the tensor \(\mathcal {Y} = \mathcal {A}_{\mathcal {M},\mathcal {N}}^{\dagger }\in \mathcal {A}\{1,2\} \) is called the weighted Moore–Penrose inverse of \(\mathcal {A}\) if it satisfies:
Definition 2.3
(Ji and Wei 2018) Let \(\mathcal {A}\in \mathbb {C}^\mathbf{N (n) \times \mathbf{N} (n)}\) with \(\mathrm {ind}({\mathcal {A}})=k.\) A tensor \(\mathcal {X}\in \mathbb {C}^\mathbf{N (n) \times \mathbf{N} (n)}\) is called the Drazin inverse of \(\mathcal {A}\) if it satisfies:
When \(k=1\), \(\mathcal {X}\) becomes the group inverse of \(\mathcal {A}\). The group and Drazin inverses of \(\mathcal {A}\) are denoted by \(\mathcal {A}^{\#}\) and \(\mathcal {A}^{\mathrm D}\), respectively.
Some preliminary results about the tensor range and null space can be collected from (Behera and Sahoo 2020; Ji and Wei 2018; Stanimirović et al. 2018).
Theorem 2.4
(Ji and Wei 2018, Theorem 3.2 (2)) Let \(\mathcal {A}\in \mathbb {C}^\mathbf{N (n)\times \mathbf{N} (n)}\). Then, for any positive integer p, \(\mathscr {N}(\mathcal {A}^p)=\mathscr {N}(\mathcal {A}^{p+1})\) is equivalent to:
and \(\mathrm {ind}({\mathcal {A}})\) of a non-invertible tensor \(\mathcal {A}\) is the smallest positive integer p, such that (2.1) holds.
Lemma 2.5
(Stanimirović et al. 2018, Lemma 2.2) Let \(\mathcal {S}\in \mathbb {C}^\mathbf{K (k)\times \mathbf{M} (m)}\), \(\mathcal {T}\in \mathbb {C}^\mathbf{K (k)\times \mathbf{N} (n)}.\) Then, \(\mathscr {R}(\mathcal {T})\subseteq \mathscr {R}(\mathcal {S})\) if and only if \(\mathcal {T}=\mathcal {S}{*_n}\mathcal {Z}\) for some tensor \(\mathcal {Z}\in \mathbb {C}^\mathbf{M (m)\times \mathbf{N} (n)}\).
The following results were discussed in the framework of Boolean tensors in Behera and Sahoo (2020). However, these two results hold for tensors of arbitrary order, and the proofs follow similarly.
Theorem 2.6
(Behera and Sahoo 2020, Theorem 3.3) Let \(\mathcal {A}\in \mathbb {C}^\mathbf{M (m) \times \mathbf{N} (n) }\), \(\mathcal {B}\in \mathbb {C}^\mathbf{N (n) \times \mathbf{S} (s) } \), \(\mathcal {C}\in \mathbb {C}^\mathbf{S (s) \times \mathbf{L} (l) }\) and \(\mathcal {D}\in \mathbb {C}^\mathbf{S (s) \times \mathbf{L} (l) }\) be tensors with \(\mathcal {A}{*_n}\mathcal {B}{*_s}\mathcal {C}=\mathcal {A}{*_n}\mathcal {B}{*_s}\mathcal {D}.\) If \({\mathscr {R}}(\mathcal {B}^{\mathrm T})={\mathscr {R}}(\mathcal {B}^{\mathrm T}{*_n}\mathcal {A}^{\mathrm T}),\) then \(\mathcal {B}{*_s}\mathcal {C}=\mathcal {B}{*_s}\mathcal {D}.\)
Corollary 2.7
(Behera and Sahoo 2020, Corollary 3.2) Let \(\mathcal {A}\in \mathbb {C}^\mathbf{M (m) \times \mathbf{N} (n) }\), \(\mathcal {B}\in \mathbb {C}^\mathbf{N (n) \times \mathbf{L} (l) } \), \(\mathcal {C}\in \mathbb {C}^\mathbf{S (s) \times \mathbf{M} (m) }\), and \(\mathcal {D}\in \mathbb {C}^\mathbf{S (s) \times \mathbf{M} (m) }\) be tensors with \(\mathcal {C}{*_m}\mathcal {A}{*_n}\mathcal {B}=\mathcal {D}{*_m}\mathcal {A}{*_n}\mathcal {B}.\) If \({\mathscr {R}}(\mathcal {A})={\mathscr {R}}(\mathcal {A}{*_n}\mathcal {B}),\) then \(\mathcal {C}{*_m}\mathcal {A}=\mathcal {D}{*_m}\mathcal {A}\).
2.1 Tensor decomposition and reshape rank
The reshape operation restructures the elements of a tensor into a matrix and the elements of a matrix into a tensor (Stanimirović et al. 2018). It is denoted by rsh and defined as follows.
Definition 2.8
(Stanimirović et al. 2018, Definition 3.1) Let \(\mathfrak {M}=\prod _{i=1}^m M_i\) and \(\mathfrak {N}=\prod _{j=1}^n N_j\). For \( \mathcal {A} \in \mathbb {C}^\mathbf{M (m) \times \mathbf{N} (n)}\) and \(A \in \mathbb {C}^{\mathfrak {M} \times \mathfrak {N}}\), there exists a unique bijective map, called the reshape map, denoted by \(\text{ rsh }:\mathbb {C}^\mathbf{M (m) \times \mathbf{N} (n)} \longrightarrow \mathbb {C}^{\mathfrak {M} \times \mathfrak {N}}\), and defined as:
Moreover, the inverse reshape operation is defined by the following bijective map:
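A minimal numerical sketch of rsh and its inverse, assuming the concrete index ordering given by NumPy's row-major reshape (any fixed bijection between multi-indices and flat indices serves equally well; the helper names are illustrative):

```python
import numpy as np

M_dims, N_dims = (2, 3), (4, 5)   # illustrative mode sizes M(2) and N(2)

def rsh(T, m):
    """Flatten the first m modes and the remaining modes of T into a matrix."""
    shape = T.shape
    return T.reshape(int(np.prod(shape[:m])), int(np.prod(shape[m:])))

def rsh_inv(A, M_dims, N_dims):
    """Inverse reshape: rebuild the tensor from its matrix image."""
    return A.reshape(*M_dims, *N_dims)

T = np.arange(2 * 3 * 4 * 5, dtype=float).reshape(2, 3, 4, 5)
A = rsh(T, 2)                                           # a 6 x 20 matrix
assert A.shape == (6, 20)
assert np.array_equal(rsh_inv(A, M_dims, N_dims), T)    # rsh^{-1}(rsh(T)) = T
```

The round-trip check at the end illustrates the bijectivity of the reshape map.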
We now state the following result from the above definition.
Lemma 2.9
Let \(\mathcal {A}\in \mathbb {C}^\mathbf{M (m)\times \mathbf{N} (n)}\) and let the integers \(\mathfrak {{M}}=\prod _{i=1}^{m}M_i,~\mathfrak {N}=\prod _{j=1}^{n}N_j\). If \(A=\text{ rsh }(\mathcal {A})\in \mathbb C^{\mathfrak {M}\times \mathfrak {N}}\) and \(Y=\text{ rsh }(\mathcal {Y})\), then:
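Lemma 2.9 allows generalized inverses of tensors to be computed at the matrix level. A hedged sketch of the Moore–Penrose instance, under the assumption (as in Stanimirović et al. 2018) that \(\text{rsh}(\mathcal {A}^{\dagger })=\text{rsh}(\mathcal {A})^{\dagger }\), using NumPy's row-major reshape as the concrete rsh:

```python
import numpy as np

def rsh(T, m):
    """Flatten the first m modes and the remaining modes into a matrix."""
    s = T.shape
    return T.reshape(int(np.prod(s[:m])), int(np.prod(s[m:])))

rng = np.random.default_rng(1)
A_tensor = rng.standard_normal((2, 3, 4, 5))   # A in R^{2x3 x 4x5}, m = n = 2

# Tensor Moore-Penrose inverse obtained through the matrix one:
A = rsh(A_tensor, 2)                  # 6 x 20 matrix
Y = np.linalg.pinv(A)                 # 20 x 6: rsh of the tensor MP inverse
Y_tensor = Y.reshape(4, 5, 2, 3)      # inverse reshape back to a tensor

# Check Penrose equation (1) in tensor form: A *_2 Y *_2 A = A
AYA = np.tensordot(np.tensordot(A_tensor, Y_tensor, axes=2), A_tensor, axes=2)
assert np.allclose(AYA, A_tensor)
```

The same route (reshape, apply the matrix operation, reshape back) underlies the algorithmic use of the lemma.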
In connection with the reshape of a tensor and its range and null spaces, we define the notions of reshape range and reshape null space.
Definition 2.10
Let \(\mathcal {A}\in \mathbb {C}^\mathbf{M (m)\times \mathbf{N} (n)}\), \(\mathfrak {{M}}=\prod _{i=1}^{m}M_i,~\mathfrak {N}=\prod _{j=1}^{n}N_j\), and \(A=\text{ rsh }(\mathcal {A})\). The reshape range and reshape null space of \(\mathcal {A}\) are denoted by \(\text{ RSH }(\mathscr {R}(\mathcal {A}))\) and \(\text{ RSH }(\mathscr {N}(\mathcal {A}))\), respectively, and are defined as:
Definition 2.11
Let \(\mathcal {A}\in \mathbb {C}^\mathbf{M (m)\times \mathbf{N} (n)}\), \(\mathfrak {{M}}=\prod _{i=1}^{m}M_i,~\mathfrak {N}=\prod _{j=1}^{n}N_j\), and \(A=\text{ rsh }(\mathcal {A})\). The reshape dimension is denoted by rshdim, and defined as:
Now, the reshape dimension of an arbitrary subspace \(\Omega \subseteq \mathbb {C}^\mathbf{M (m)}\) can be defined as \(\text{ rshdim }(\Omega )=\text{ dim }(\text{ RSH }(\Omega ))\), where \(\text{ RSH }(\Omega )=\{\text{ rsh }(\mathcal {V}):~\mathcal {V}\in \Omega \}\).
The following results from (Behera et al. 2018; Liang and Zheng 2019; Stanimirović et al. 2018) will be useful to prove the main results.
Theorem 2.12
(Stanimirović et al. 2018, Theorem 3.1) Consider \(\mathcal {A}\in \mathbb {C}^\mathbf{M (m)\times \mathbf{N} (n)}\) and \(\mathcal {B}\in \mathbb {C}^\mathbf{N (n)\times \mathbf{L} (l)}\). Then:
-
(a)
\(\mathscr {R}(\mathcal {A}{*_n}\mathcal {B})=\mathscr {R}(\mathcal {A}) \Longleftrightarrow \text{ rshrank }(\mathcal {A}{*_n}\mathcal {B})=\text{ rshrank }(\mathcal {A})\);
-
(b)
\(\mathscr {N}(\mathcal {A}{*_n}\mathcal {B})=\mathscr {N}(\mathcal {B}) \Longleftrightarrow \text{ rshrank }(\mathcal {A}{*_n}\mathcal {B})=\text{ rshrank }(\mathcal {B})\),
where \(\mathrm {rshrank}()\) denotes the reshape rank of a tensor, as defined in Stanimirović et al. (2018).
Lemma 2.13
(Stanimirović et al. 2018) Let \(\mathcal {A} \in \mathbb {C}^\mathbf{M (m)\times \mathbf{N} (n)}\) and \(\mathcal {B} \in \mathbb {C}^\mathbf{N (n)\times \mathbf{L} (l)}\) be given tensors, and let the integers \(\mathfrak {M}=\prod _{i=1}^m M_i\), \(\mathfrak {N}=\prod _{j=1}^n N_j\), and \(\mathfrak {L}=\prod _{k=1}^l L_k\). Then:
where \(A=\mathrm {rsh}\left( \mathcal {A}\right) \in \mathbb C^{\mathfrak {M}\times \mathfrak {N}},B=\mathrm {rsh}\left( \mathcal {B}\right) \in \mathbb C^{\mathfrak {N}\times \mathfrak {L}}\).
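Lemma 2.13 states that rsh is multiplicative: the reshape of an Einstein product is the ordinary matrix product of the reshapes. A quick numerical check of this identity (dimensions are illustrative, and NumPy's row-major reshape is assumed as the concrete rsh):

```python
import numpy as np

def rsh(T, m):
    """Flatten the first m modes and the remaining modes into a matrix."""
    s = T.shape
    return T.reshape(int(np.prod(s[:m])), int(np.prod(s[m:])))

rng = np.random.default_rng(2)
A = rng.standard_normal((2, 3, 4, 5))   # A in R^{M(2) x N(2)}
B = rng.standard_normal((4, 5, 3, 2))   # B in R^{N(2) x L(2)}

lhs = rsh(np.tensordot(A, B, axes=2), 2)   # rsh(A *_2 B), a 6 x 6 matrix
rhs = rsh(A, 2) @ rsh(B, 2)                # rsh(A) rsh(B)
assert np.allclose(lhs, rhs)
```

This homomorphism property is what lets every rank and factorization argument below be carried out on matrices.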
Theorem 2.14
(Liang and Zheng 2019, Theorem 3.6.) For \( \mathcal {A} \in \mathbb {C}^\mathbf{M (m) \times \mathbf{N} (n)} \), there is a left invertible tensor \( \mathcal {F}\in \mathbb {C}^\mathbf{M (m) \times \mathbf{R} (r)}\) and a right invertible tensor \( \mathcal {G}\in \mathbb {C}^\mathbf{R (r) \times \mathbf{N} (n)}\) which satisfy:
The decomposition (2.6) is called a full rank decomposition (or full rank factorization) of the tensor \(\mathcal {A} \).
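By Lemma 2.13, a full-rank factorization of \(\text{rsh}(\mathcal {A})\) reshapes back to a full-rank decomposition of \(\mathcal {A}\). The sketch below builds such a matrix-level factorization from a thin SVD; it is one possible construction, not the paper's algorithm, and the rank tolerance is an assumption:

```python
import numpy as np

def full_rank_factorization(A_mat, tol=1e-12):
    """Rank factorization A = F @ G with F of full column rank and
    G of full row rank, built from a thin SVD."""
    U, s, Vt = np.linalg.svd(A_mat, full_matrices=False)
    r = int(np.sum(s > tol * s[0]))
    F = U[:, :r] * s[:r]          # left factor: full column rank
    G = Vt[:r, :]                 # right factor: full row rank
    return F, G

# A rank-deficient matrix standing for rsh(A) of some tensor A
A = np.outer([1.0, 2.0, 3.0], [1.0, 0.0, 2.0, 1.0])   # rank 1, size 3 x 4
F, G = full_rank_factorization(A)
assert np.allclose(F @ G, A)
assert F.shape[1] == G.shape[0] == 1                  # factorization through the rank
```

Left invertibility of \(\mathcal {F}\) and right invertibility of \(\mathcal {G}\) correspond to the full column rank of `F` and full row rank of `G`.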
Theorem 2.15
(Behera et al. 2019, Theorem 3.24) Let \(\mathcal {F}\in \mathbb {C}^\mathbf{N (n) \times \mathbf{R} (r)}\) and \(\mathcal {G}\in \mathbb {C}^\mathbf{R (r) \times \mathbf{N} (n)}\) be two tensors, such that \(\mathcal {A} = \mathcal {F}{*_r}\mathcal {G}\) is a full-rank decomposition of the tensor \(\mathcal {A}\in \mathbb {C}^\mathbf{N (n) \times \mathbf{N} (n) }\). Then, the following statements are equivalent:
-
(a)
\(\mathcal {A}\) is group invertible;
-
(b)
\(\mathcal {G}{*_n}\mathcal {F}\) is invertible.
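A matrix-level illustration of Theorem 2.15, combined with Cline's classical formula \(A^{\#}=F(GF)^{-2}G\) from matrix theory (an imported fact, not a result of this paper); the example matrix plays the role of \(\text{rsh}(\mathcal {A})\):

```python
import numpy as np

def full_rank_factorization(A, tol=1e-12):
    """Rank factorization A = F @ G from a thin SVD (an illustrative helper)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    r = int(np.sum(s > tol * s[0]))
    return U[:, :r] * s[:r], Vt[:r, :]

# An index-1 singular matrix, standing for rsh(A)
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 3.0, 0.0],
              [0.0, 0.0, 0.0]])
F, G = full_rank_factorization(A)
GF = G @ F
assert np.linalg.matrix_rank(GF) == GF.shape[0]   # GF invertible => A is group invertible

# Cline's formula for the group inverse: A# = F (GF)^{-2} G
A_sharp = F @ np.linalg.matrix_power(np.linalg.inv(GF), 2) @ G

# A# satisfies the group-inverse equations
assert np.allclose(A @ A_sharp @ A, A)
assert np.allclose(A_sharp @ A @ A_sharp, A_sharp)
assert np.allclose(A @ A_sharp, A_sharp @ A)
```

For this diagonal example the formula recovers \(A^{\#}=\mathrm {diag}(1/2,\,1/3,\,0)\), as expected.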
3 Main results
3.1 Further results on reshape operations
In view of Lemma 2.9 and Eq. (2.4), we have the following result.
Lemma 3.1
Let \(\mathcal {A}\in \mathbb {C}^\mathbf{M (m)\times \mathbf{N} (n)}\), \(\mathfrak {{M}}=\prod _{i=1}^{m}M_i,~\mathfrak {N}=\prod _{j=1}^{n}N_j\), and \(A=\text{ rsh }(\mathcal {A})\). Then:
Proof
The second part can be proved similarly. \(\square \)
Consider two tensor spaces \(\Omega _1\) and \(\Omega _2\), such that \(\Omega _1 \subseteq \Omega _2\). Using the definition of the reshape operation, we obtain \(\text{ RSH }(\Omega _1)\subseteq \text{ RSH }(\Omega _2)\). Furthermore, if \(\text{ dim }(\text{ RSH }(\Omega _1))=\text{ dim }(\text{ RSH }(\Omega _2))\), it follows that \(\text{ RSH }(\Omega _1)=\text{ RSH }(\Omega _2)\), and therefore \(\Omega _1=\Omega _2\). This is stated in the next result.
Theorem 3.2
Let \(\Omega _1\) and \(\Omega _2\) be two subspaces of a tensor space \(\mathbb {C}^\mathbf{M (m)}\). If \(\Omega _1 \subseteq \Omega _2\) and \(\text{ rshdim }(\Omega _1)=\text{ rshdim }(\Omega _2)\), then \(\Omega _1=\Omega _2\).
Using Definition 2.11 and Lemma 3.1, one can obtain the following tensor generalization of the rank–nullity theorem.
Theorem 3.3
An arbitrary tensor \(\mathcal {A}\in \mathbb {C}^\mathbf{M (m)\times \mathbf{N} (n)}\) satisfies \(\text{ rshdim }(\mathscr {R}(\mathcal {A}))+\text{ rshdim }(\mathscr {N}(\mathcal {A}))=\mathbf{N} (n).\)
Lemma 3.4
Let \(\mathcal {A}\in \mathbb {C}^\mathbf{M (m)\times \mathbf{N} (n)}\) and \(\text{ rshrank }(\mathcal {A})=\mathbf {R} (r)=\prod _{i=1}^{r}R_i.\) Let \(\Omega _1\) be a subspace of \(\mathbb {C}^\mathbf{N (n)}\) with \(\text{ rshdim }(\Omega _1)=\mathbf{S} (s)\le \mathbf{R} (r)\), and \(\Omega _2\) be a subspace of \(\mathbb {C}^\mathbf{M (m)}\) with \(\text{ rshdim }(\Omega _2)=\mathbf{M} (m)-\mathbf{S} (s)\). If \(\mathcal {A}\) has a \(\{2\}\)-inverse \(\mathcal {X}\), such that \(\mathscr {R}(\mathcal {X})=\Omega _1\) and \(\mathscr {N}(\mathcal {X})=\Omega _2\), then \(\mathcal {A}{*_n}\Omega _1\oplus \Omega _2=\mathbb {C}^\mathbf{M (m)}\).
Proof
Since \(\mathcal {X}\) is a \(\{2\}\)-inverse of \(\mathcal {A}\), \((\mathcal {A}{*_n}\mathcal {X})^2=\mathcal {A}{*_n}\mathcal {X}{*_m}\mathcal {A}{*_n}\mathcal {X}=\mathcal {A}{*_n}\mathcal {X}\). Thus, \(\mathcal {A}{*_n}\mathcal {X}\) is idempotent. Hence, \({\mathscr {R}}(\mathcal {A}{*_n}\mathcal {X})\) and \({\mathscr {N}}(\mathcal {A}{*_n}\mathcal {X})\) are complementary. Furthermore, \(\mathcal {A}{*_n}\mathcal {X}\) is the projection on \({\mathscr {R}}(\mathcal {A}{*_n}\mathcal {X})\) along \({\mathscr {N}}(\mathcal {A}{*_n}\mathcal {X})\). As \(\mathcal {X}\) is an outer inverse of \(\mathcal {A}\), it is easy to verify \({\mathscr {N}}(\mathcal {X})={\mathscr {N}}(\mathcal {A}{*_n}\mathcal {X})\). Therefore, \(\Omega _2={\mathscr {N}}(\mathcal {A}{*_n}\mathcal {X})\). The result follows from the fact that \(\mathcal {A}{*_n}\Omega _1=\mathcal {A}{*_n}{\mathscr {R}}(\mathcal {X})={\mathscr {R}}(\mathcal {A}{*_n}\mathcal {X})\). \(\square \)
Next, we consider characterizations of tensor generalized inverses with specified range and null space.
Theorem 3.5
Let \(\mathcal {A} \in \mathbb {C}^\mathbf{M (m) \times \mathbf{N} (n) }, \mathcal {U} \in \mathbb {C}^\mathbf{N (n) \times \mathbf{S} (s) }, \mathcal {V} \in \mathbb {C}^\mathbf{L (l) \times \mathbf{M} (m) }\), and \(\mathrm {rshrank}(\mathcal {A})=\mathbf{R} (r)\). Suppose that
where \((\mathcal {V}{*_m}\mathcal {A}{*_n}\mathcal {U})^{(1)}\) is a \(\{1\}\)-inverse of \(\mathcal {V}{*_m}\mathcal {A}{*_n}\mathcal {U}\). Then, the following statements hold:
-
(a)
\(\mathcal {X} \in \mathcal {A}\{1\} \Longleftrightarrow \text{ rshrank }(\mathcal {V}{*_m}\mathcal {A}{*_n}\mathcal {U}) = \prod _{i=1}^r R_i\).
-
(b)
\(\mathcal {X} \in \mathcal {A}\{2\}\) and \({\mathscr {R}}(\mathcal {X}) = {\mathscr {R}}(\mathcal {U}) \Longleftrightarrow \text{ rshrank }(\mathcal {V}{*_m}\mathcal {A}{*_n}\mathcal {U})=\text{ rshrank }(\mathcal {U})\).
-
(c)
\(\mathcal {X} \in \mathcal {A}\{2\}\) and \({\mathscr {N}}(\mathcal {X}) = {\mathscr {N}}(\mathcal {V}) \Longleftrightarrow \text{ rshrank }(\mathcal {V}{*_m}\mathcal {A}{*_n}\mathcal {U})=\text{ rshrank }(\mathcal {U})=\text{ rshrank }(\mathcal {V})=\prod _{i=1}^r R_i\).
-
(d)
\(\mathcal {X} \in \mathcal {A}_{{\mathscr {R}}(\mathcal {U}),{\mathscr {N}}(\mathcal {V})}^{(1,2)}\) if and only if \(\text{ rshrank }(\mathcal {V}{*_m}\mathcal {A}{*_n}\mathcal {U})=\text{ rshrank }(\mathcal {U})=\text{ rshrank }(\mathcal {V})=\prod _{i=1}^r R_i\).
Proof
(a) Let \(\text{ rshrank }(\mathcal {V}{*_m}\mathcal {A}{*_n}\mathcal {U}) = \prod _{i=1}^r R_i\). Since:
we get \(\text{ rshrank }(\mathcal {A}{*_n}\mathcal {U}) = \text{ rshrank }(\mathcal {A})\). Using Theorem 2.12, we have \({\mathscr {R}}(\mathcal {A}{*_n}\mathcal {U}) = {\mathscr {R}}(\mathcal {A})\), which further implies \(\mathcal {A} = \mathcal {A}{*_n}\mathcal {U}{*_s}\mathcal {Y}\) for appropriate \(\mathcal {Y}\). Now:
We can easily show \(\text{ rshrank }(\mathcal {A}^{\mathrm T})=\text{ rshrank }(\mathcal {A}^{\mathrm T}{*_m}\mathcal {V}^{\mathrm T})\). Again, by using Theorem 2.12, we obtain \({\mathscr {R}}(\mathcal {A}^{\mathrm T}{*_m}\mathcal {V}^{\mathrm T}) = {\mathscr {R}}(\mathcal {A}^{\mathrm T})\). Therefore, by Theorem 2.6, \(\mathcal {A}{*_n}\mathcal {X}{*_m}\mathcal {A} =\mathcal {A}\). To show the converse part, let \(\mathcal {X} \in \mathcal {A}\{1\}\). Since:
we have \(\text{ rshrank }(\mathcal {V}{*_m}\mathcal {A}{*_n}\mathcal {U}) = \text{ rshrank }(\mathcal {A}) = \mathbf{R} (r)\).
(b) Consider \(\text{ rshrank }(\mathcal {V}{*_m}\mathcal {A}{*_n}\mathcal {U})=\text{ rshrank }(\mathcal {U})\). Since \({\mathscr {R}}(\mathcal {X})\subseteq {\mathscr {R}}(\mathcal {U})\) and:
Therefore, \(\mathscr {R}(\mathcal {X})=\mathscr {R}(\mathcal {U})\). Now:
Starting from \(\mathscr {R}(\mathcal {X}{*_m}\mathcal {A}{*_n}\mathcal {U})\subseteq \mathscr {R}(\mathcal {X})\), the following can be concluded:
Thus, by Corollary 2.7, we have \( \mathcal {X}{*_m}\mathcal {A}{*_n}\mathcal {X}= \mathcal {X}\). Conversely, let \(\mathcal {X} \in \mathcal {A}\{2\}\) and \(\text{ rshrank }(\mathcal {U}) = \text{ rshrank }(\mathcal {X})\). Since:
one can conclude
This completes this part of the proof.
(c) Part (c) follows using similar principles.
(d) It follows from (a), (b), and (c). \(\square \)
Lemma 3.6
Let \(\mathcal {A}\in \mathbb {C}^\mathbf{M (m)\times \mathbf{N} (n)}\) and \(\text{ rshrank }(\mathcal {A})=\prod _{i=1}^r R_i\). Assume the existence of a tensor \(\mathcal {G}\in \mathbb {C}^\mathbf{N (n)\times \mathbf{M} (m)} \) satisfying:
If \(\mathcal {A}\) has a \(\{2\}\)-inverse \(\mathcal {A}_{\mathscr {R}(\mathcal {G}),\mathscr {N}(\mathcal {G})}^{(2)}\), then:
-
(a)
\(\text{ ind }(\mathcal {A}{*_n}\mathcal {G})=\text{ ind }(\mathcal {G}{*_m}\mathcal {A})=1\);
-
(b)
\(\mathcal {A}_{\mathscr {R}(\mathcal {G}),\mathscr {N}(\mathcal {G})}^{(2)}=\mathcal {G}{*_m}(\mathcal {A}{*_n}\mathcal {G})^{\#}=(\mathcal {G}{*_m}\mathcal {A})^{\#}{*_n}\mathcal {G}\).
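A matrix-level sanity check of Lemma 3.6 (b). The group inverse below is computed through a rank factorization and Cline's formula \(M^{\#}=F(GF)^{-2}G\), a classical matrix identity assumed here; the rank-1 choice of the matrix standing for \(\mathcal {G}\) is purely illustrative:

```python
import numpy as np

def group_inverse(M, tol=1e-12):
    """Group inverse of an index-1 matrix via a rank factorization M = F G
    and Cline's formula M# = F (G F)^{-2} G (classical matrix theory)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    r = int(np.sum(s > tol * max(s[0], 1.0)))
    F, G = U[:, :r] * s[:r], Vt[:r, :]
    return F @ np.linalg.matrix_power(np.linalg.inv(G @ F), 2) @ G

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))                                # stands for rsh(A)
G = np.outer(rng.standard_normal(4), rng.standard_normal(4))   # rank-1 stand-in for G

X1 = G @ group_inverse(A @ G)      # G (A G)^#
X2 = group_inverse(G @ A) @ G      # (G A)^# G
assert np.allclose(X1, X2)         # the two representations agree
assert np.allclose(X1 @ A @ X1, X1)   # X is a {2}-inverse of A
```

Both representations produce the same outer inverse, matching the equality asserted in part (b).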
Proof
It is not difficult to show \(\mathscr {R}(\mathcal {A}{*_n}\mathcal {G})=\mathcal {A}{*_n}\mathscr {R}(\mathcal {G})\) and \(\mathscr {N}(\mathcal {G})\subseteq \mathscr {N}(\mathcal {A}{*_n}\mathcal {G})\). Thus by Lemma 3.4, we conclude:
Now, \(\text{ rshdim }\,\mathscr {N}(\mathcal {A}{*_n}\mathcal {G})=\prod _{i=1}^m M_i-\text{ rshdim }\,\mathscr {R}(\mathcal {A}{*_n}\mathcal {G})=\prod _{i=1}^m M_i-\prod _{i=1}^s S_i=\text{ rshdim }\,(\mathscr {N}(\mathcal {G})).\) Thus, \(\mathscr {N}(\mathcal {A}{*_n}\mathcal {G})=\mathscr {N}(\mathcal {G})\), and hence, \(\mathscr {R}(\mathcal {A}{*_n}\mathcal {G})\oplus \mathscr {N}(\mathcal {A}{*_n}\mathcal {G})=\mathcal {A}{*_n}\mathscr {R}(\mathcal {G})\oplus \mathscr {N}(\mathcal {G})=\mathbb {C}^\mathbf{M (m)}\). Therefore, by Theorem 2.4, we obtain \(\text{ ind }(\mathcal {A}{*_n}\mathcal {G})=1\). To find the outer inverse, let us denote \(\mathcal {X}:=\mathcal {G}{*_m}(\mathcal {A}{*_n}\mathcal {G})^{\#}.\) Now:
\(\mathscr {R}(\mathcal {X})=\mathscr {R}(\mathcal {G}{*_m}(\mathcal {A}{*_n}\mathcal {G})^{\#}) \subseteq \mathscr {R}(\mathcal {G})\), and \(\mathscr {N}(\mathcal {X})=\mathscr {N}\left( \mathcal {G}{*_m}(\mathcal {A}{*_n}\mathcal {G})^{\#}\right) \supseteq \mathscr {N}\left( (\mathcal {A}{*_n}\mathcal {G})^{\#}\right) =\mathscr {N}\left( \mathcal {A}{*_n}\mathcal {G}\right) \supseteq \mathscr {N}(\mathcal {G})\). Since
it is possible to verify:
Therefore, \(\text{ rshrank }(\mathcal {X})=\text{ rshrank }(\mathcal {G})\). Using Theorem 2.12, we conclude \(\mathscr {R}(\mathcal {X})=\mathscr {R}(\mathcal {G}{*_m}(\mathcal {A}{*_n}\mathcal {G})^{\#})=\mathscr {R}(\mathcal {G})\). As \(\mathscr {N}(\mathcal {G})\subseteq \mathscr {N}(\mathcal {X})\) and:
it follows that \(\mathscr {N}(\mathcal {X})=\mathscr {N}(\mathcal {G})\). Hence, \(\mathcal {A}_{\mathscr {R}(\mathcal {G}),\mathscr {N}(\mathcal {G})}^{(2)}=\mathcal {G}{*_m}(\mathcal {A}{*_n}\mathcal {G})^{\#}\).
The dual statements \(\text{ ind }(\mathcal {G}{*_m}\mathcal {A})=1\) and \(\mathcal {A}_{\mathscr {R}(\mathcal {G}),\mathscr {N}(\mathcal {G})}^{(2)}=(\mathcal {G}{*_m}\mathcal {A})^{\#}{*_n}\mathcal {G}\) can be verified similarly. \(\square \)
Theorem 3.7
(a) The following representations are valid for \(\mathcal {A}\in \mathbb {C}^\mathbf{M (m)\times \mathbf{N} (n)}\):
-
(i)
\(\mathcal {A}^\dagger =\mathcal {A}^{(2)}_{\mathscr {R}(\mathcal {A}^*),{\mathscr {N}}(\mathcal {A}^*)}\);
-
(ii)
\(\mathcal {A}^\dagger _{\mathcal {M},\mathcal {N}}=\mathcal {A}^{(2)}_{\mathscr {R}(\mathcal {A}^{\sharp }),{\mathscr {N}}(\mathcal {A}^{\sharp })}\), where \(\mathcal {A}^{\sharp }=\mathcal {N}^{-1}{*_m}\mathcal {A}^*{*_n}\mathcal {M}\) and \(\mathcal {M}\) and \(\mathcal {N}\) are Hermitian positive definite tensors.
(b) For \(\mathcal {A}\in \mathbb {C}^\mathbf{N (n)\times \mathbf{N} (n)}\), the following holds:
-
(i)
\(\mathcal {A}^{\#}=\mathcal {A}^{(2)}_{\mathscr {R}(\mathcal {A}),{\mathscr {N}}(\mathcal {A})}\) if \(\mathrm {ind}(\mathcal {A})=1\).
-
(ii)
\(\mathcal {A}^{{\mathrm D}}=\mathcal {A}^{(2)}_{\mathscr {R}(\mathcal {A}^k),{\mathscr {N}}(\mathcal {A}^k)}\) if \(\mathrm {ind}(\mathcal {A})=k\).
Proof
-
a(i)
It is obvious that \(\mathcal {A}^\dagger \in \mathcal {A}\{2\}\). Now, let \(\mathcal {X}\!\in \! {\mathscr {N}}(\mathcal {A}^\dagger )\). This yields \(\mathcal {A}^\dagger {*_m}\mathcal {X}\!=\!\mathcal {O}\). Pre-multiplying by \(\mathcal {A}\) and applying the Moore–Penrose inverse properties, we obtain \((\mathcal {A}^\dagger )^*{*_m}\mathcal {A}^*{*_m}\mathcal {X}=\mathcal {O}\). Again, pre-multiplying the last equality by \(\mathcal {A}^*\), we get \((\mathcal {A}{*_n}\mathcal {A}^\dagger {*_m}\mathcal {A})^*{*_m}\mathcal {X}=\mathcal {O}\). Thus, \(\mathcal {A}^*{*_m}\mathcal {X}=\mathcal {O}\), and hence, \({\mathscr {N}}(\mathcal {A}^\dagger )\subseteq {\mathscr {N}}(\mathcal {A}^*)\). Similarly, we can show \({\mathscr {N}}(\mathcal {A}^*)\subseteq {\mathscr {N}}(\mathcal {A}^\dagger )\). Therefore, \({\mathscr {N}}(\mathcal {A}^\dagger )={\mathscr {N}}(\mathcal {A}^*)\). As \(\mathcal {A}^*=\mathcal {A}^\dagger {*_m}\mathcal {A}{*_n}\mathcal {A}^*\) and \(\mathcal {A}^\dagger =\mathcal {A}^*{*_m}(\mathcal {A}^\dagger )^*{*_n}\mathcal {A}^\dagger \), it follows from the definition of the range space that \(\mathscr {R}(\mathcal {A}^\dagger )=\mathscr {R}(\mathcal {A}^*)\).
-
a(ii)
Clearly, \(\mathcal {A}^\dagger _{\mathcal {M},\mathcal {N}}\in \mathcal {A}\{2\}\). Now, let \(\mathcal {X}\in {\mathscr {N}}(\mathcal {A}^\dagger _{\mathcal {M},\mathcal {N}})\). This yields \(\mathcal {A}^\dagger _{\mathcal {M},\mathcal {N}}{*_m}\mathcal {X}=\mathcal {O}\). Pre-multiplying by \(\mathcal {M}{*_m}\mathcal {A}\) and applying the weighted Moore–Penrose inverse definition, we obtain \((\mathcal {A}^\dagger )^*{*_m}\mathcal {A}^*{*_m}\mathcal {M}{*_m}\mathcal {X}=\mathcal {O}\). Again, pre-multiplying by \(\mathcal {A}^*\), we get \(\mathcal {A}^*{*_m}\mathcal {M}{*_m}\mathcal {X}=\mathcal {O}\). Furthermore, pre-multiplying by \(\mathcal {N}^{-1}\), we have \(\mathcal {A}^{\sharp }{*_m}\mathcal {X}=\mathcal {O}\). Thus, \({\mathscr {N}}(\mathcal {A}^\dagger _{\mathcal {M},\mathcal {N}})\subseteq {\mathscr {N}}(\mathcal {A}^{\sharp })\). Similarly, we can show \({\mathscr {N}}(\mathcal {A}^{\sharp })\subseteq {\mathscr {N}}(\mathcal {A}^\dagger _{\mathcal {M},\mathcal {N}})\). Therefore, \({\mathscr {N}}(\mathcal {A}^\dagger _{\mathcal {M},\mathcal {N}})={\mathscr {N}}(\mathcal {A}^{\sharp }).\) Now, denote \(\mathcal {X}=\mathcal {A}^\dagger _{\mathcal {M},\mathcal {N}}\). Then:
$$\begin{aligned} \mathcal {N}{*_n}\mathcal {X}= & {} \mathcal {N}{*_n}\mathcal {X}{*_m}\mathcal {A}{*_n}\mathcal {X}=\mathcal {A}^*{*_m}\mathcal {X}^*{*_n}\mathcal {N}{*_n}\mathcal {X}\\= & {} \mathcal {A}^*{*_m}\mathcal {X}^*{*_n}\mathcal {A}^*{*_m}\mathcal {M}^*{*_m}\mathcal {M}^{-1}{*_m}\mathcal {X}^*{*_n}\mathcal {N}{*_n}\mathcal {X}\\= & {} \mathcal {A}^*{*_m}\mathcal {M}{*_m}\mathcal {Z}, \text{ where } \mathcal {Z}=\mathcal {A}{*_n}\mathcal {X}{*_m}\mathcal {M}^{-1}{*_m}\mathcal {X}^*{*_n}\mathcal {N}{*_n}\mathcal {X}. \end{aligned}$$
Thus, \({\mathscr {R}}(\mathcal {A}^\dagger _{\mathcal {M},\mathcal {N}})\subseteq {\mathscr {R}}(\mathcal {A}^{\sharp })\). Similarly, we can show the reverse relation.
-
b(i)
Starting from \(\mathcal {A}^{\#}\in \mathcal {A}\{2\}\) and using \(\mathcal {A}=\mathcal {A}^{\#}{*_n}\mathcal {A}^2\) and \(\mathcal {A}^{\#}=\mathcal {A}{*_n}(\mathcal {A}^{\#})^2\), one obtains \(\mathscr {R}(\mathcal {A})=\mathscr {R}(\mathcal {A}^{\#})\). Now, let \(\mathcal {X}\in {\mathscr {N}}(\mathcal {A})\). This implies \(\mathcal {A}{*_n}\mathcal {X}=\mathcal {O}\). Pre-multiplying by \((\mathcal {A}^{\#})^2\), we conclude \(\mathcal {A}^{\#}{*_n}\mathcal {X}=\mathcal {O}\). Thus, \({\mathscr {N}}(\mathcal {A})\subseteq {\mathscr {N}}(\mathcal {A}^{\#})\). Similarly, we can show \({\mathscr {N}}(\mathcal {A}^{\#})\subseteq {\mathscr {N}}(\mathcal {A})\). Hence, \({\mathscr {N}}(\mathcal {A}^{\#})={\mathscr {N}}(\mathcal {A})\).
-
b(ii)
Since \(\mathcal {A}^k=\mathcal {A}^{D}{*_n}\mathcal {A}^{k+1}\) and \(\mathcal {A}^{D}=\mathcal {A}^k{*_n}(\mathcal {A}^{D})^{k+1}\), we have \(\mathscr {R}(\mathcal {A}^k)=\mathscr {R}(\mathcal {A}^{D})\). The null condition can be obtained using similar lines of b(i). \(\square \)
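The representation a(i) can be checked numerically at the matrix level: the Moore–Penrose inverse is a \(\{2\}\)-inverse whose range is \(\mathscr {R}(A^{*})\) and whose null space is \(\mathscr {N}(A^{*})\). A small sketch (real matrices, so \(A^{*}=A^{\mathrm T}\); sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 6))   # 5 x 6, rank 3
X = np.linalg.pinv(A)

# X is a {2}-inverse of A ...
assert np.allclose(X @ A @ X, X)

# ... with range R(A^T): the column spans of X and A^T coincide
r = np.linalg.matrix_rank(A)
assert np.linalg.matrix_rank(np.hstack([X, A.T])) == r

# ... and null space N(A^T): X @ y = 0 whenever A.T @ y = 0
y = np.linalg.svd(A)[0][:, -1]        # a left null vector of A
assert np.allclose(A.T @ y, 0, atol=1e-10)
assert np.allclose(X @ y, 0, atol=1e-10)
```

The rank test works because two subspaces of equal dimension are equal exactly when stacking their spanning sets does not increase the rank.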
Lemma 3.8
Let \(\mathcal {A}\in \mathbb {C}^\mathbf{M (m)\times \mathbf{N} (n)}\) have the full-rank decomposition \(\mathcal {A}=\mathcal {F}{*_l}\mathcal {G}\), where \(\mathcal {F}\in \mathbb {C}^\mathbf{M (m)\times \mathbf{L} (l)}\) and \(\mathcal {G}\in \mathbb {C}^\mathbf{L (l)\times \mathbf{N} (n)}\). Then, it follows that \(\mathscr {R}(\mathcal {A})=\mathscr {R}(\mathcal {F})\) and \(\mathscr {N}(\mathcal {A})=\mathscr {N}(\mathcal {G})\).
Proof
From the definitions of the range and null space, it is trivial that \(\mathscr {R}(\mathcal {A})\subseteq \mathscr {R}(\mathcal {F})\) and \(\mathscr {N}(\mathcal {G})\subseteq \mathscr {N}(\mathcal {A})\). Since \(\mathcal {A} = \mathcal {F}{*_l}\mathcal {G}\) is a full-rank decomposition of \(\mathcal {A}\), it follows that \(\mathcal {G}\) is right invertible and \(\mathcal {F}\) is left invertible. Let \(\mathcal {G}_R^{-1}\) be the right inverse of \(\mathcal {G}\) and \(\mathcal {F}_L^{-1}\) be the left inverse of \(\mathcal {F}\). Pre-multiplying \(\mathcal {A}\) by \(\mathcal {F}_L^{-1}\), we obtain \(\mathcal {G}=\mathcal {F}_L^{-1}{*_m}\mathcal {A}.\) Thus, \(\mathscr {N}(\mathcal {A})\subseteq \mathscr {N}(\mathcal {G}).\) Post-multiplying \(\mathcal {A}\) by \(\mathcal {G}_R^{-1}\), we get \(\mathcal {F} = \mathcal {A}{*_n}\mathcal {G}_R^{-1}\). This implies \(\mathscr {R}(\mathcal {F})\subseteq \mathscr {R}(\mathcal {A})\) and finalizes the proof. \(\square \)
Theorem 3.9
Let \(\mathcal {A}\in \mathbb {C}^\mathbf{M (m)\times \mathbf{N} (n)}\) with \(\text{ rshrank }(\mathcal {A})=\prod _{i=1}^r R_i\). Suppose there exists a tensor \(\mathcal {W}\in \mathbb {C}^\mathbf{N (n)\times \mathbf{M} (m)} \), such that \(\text{ rshdim }(\mathscr {R}(\mathcal {W}))=\prod _{i=1}^s S_i\le \prod _{i=1}^r R_i\) and \(\text{ rshdim }(\mathscr {N}(\mathcal {W}))=\prod _{i=1}^m M_i-\prod _{i=1}^s S_i\). Let \(\mathcal {W}=\mathcal {F}{*_s}\mathcal {G}\) be a full-rank factorization of \(\mathcal {W}\). If \(\mathcal {A}\) has a \(\{2\}\)-inverse \(\mathcal {A}_{\mathscr {R}(\mathcal {F}),\mathscr {N}(\mathcal {G})}^{(2)}\), then:
(a) \(\mathcal {G}{*_m}\mathcal {A}{*_n}\mathcal {F}\) is an invertible tensor;

(b) \(\mathcal {A}_{\mathscr {R}(\mathcal {W}),\mathscr {N}(\mathcal {W})}^{(2)}=\mathcal {F}{*_s}(\mathcal {G}{*_m}\mathcal {A}{*_n}\mathcal {F})^{-1}{*_s}\mathcal {G}=\mathcal {A}_{\mathscr {R}(\mathcal {F}),\mathscr {N}(\mathcal {G})}^{(2)}\).
Proof
Let \(\mathcal {W}=\mathcal {F}{*_s}\mathcal {G}\) be a rank factorization of \(\mathcal {W}\). An application of Lemma 3.8 gives:
Now, \(\mathscr {R}(\mathcal {A}{*_n}\mathcal {F})=\mathcal {A}{*_n}\mathscr {R}(\mathcal {F})=\mathcal {A}{*_n}\mathscr {R}(\mathcal {W})\). Therefore, by Lemma 3.4, we obtain:
Thus, \(\text{ rshrank }(\mathcal {A}{*_n}\mathcal {F})=\mathbf{S} (s)=\text{ rshrank }(\mathcal {W})=\text{ rshrank }(\mathcal {G})\). A similar argument leads to \(\text{ rshrank }(\mathcal {A}{*_n}\mathcal {W})=\prod _{i=1}^s S_i\). This implies that \(\mathcal {A}{*_n}\mathcal {W}=\mathcal {A}{*_n}\mathcal {F}{*_s}\mathcal {G}\) is a full-rank decomposition of \(\mathcal {A}{*_n}\mathcal {W}\). The non-singularity of \(\mathcal {G}{*_m}\mathcal {A}{*_n}\mathcal {F}\) follows from Lemma 3.6 and Theorem 2.15. Since
it follows that \(\mathcal {F}{*_s}(\mathcal {G}{*_m}\mathcal {A}{*_n}\mathcal {F})^{-1}{*_s}\mathcal {G}\) is a \(\{2\}\)-inverse of \(\mathcal {A}\). From the range and null space definition:
Next, we will show the reverse relation. As
we have:
Now, let \(\mathcal {X}\in \mathscr {N}\left( \mathcal {F}{*_s}(\mathcal {G}{*_m}\mathcal {A}{*_n}\mathcal {F})^{-1}{*_s}\mathcal {G}\right) \). This implies \(\mathcal {F}{*_s}(\mathcal {G}{*_m}\mathcal {A}{*_n}\mathcal {F})^{-1}{*_s}\mathcal {G}{*_m}\mathcal {X}=\mathcal {O}\). Pre-multiplying the last equality by the left inverse of \(\mathcal {F}\), we obtain \((\mathcal {G}{*_m}\mathcal {A}{*_n}\mathcal {F})^{-1}{*_s}\mathcal {G}{*_m}\mathcal {X}=\mathcal {O}\). Again, pre-multiplication with \(\mathcal {G}{*_m}\mathcal {A}{*_n}\mathcal {F}\) leads to \(\mathcal {G}{*_m}\mathcal {X}=\mathcal {O}\). Thus:
Using (3.1), (3.2), (3.3), and (3.4), we obtain:
To show the second part of (b), it is sufficient to show \({\mathscr {N}}(\mathcal {W})\subseteq {\mathscr {N}}(\mathcal {G})\). Let \(\mathcal {X}\in {\mathscr {N}}(\mathcal {W})\). This implies \(\mathcal {W}{*_m}\mathcal {X}=\mathcal {O}\). This in turn gives \({\mathcal {F}}{*_s}\mathcal {G}{*_m}\mathcal {X}=\mathcal {O}\). Pre-multiplying by the left inverse of \(\mathcal {F}\), we obtain \(\mathcal {G}{*_m}\mathcal {X}=\mathcal {O}\). Thus, \({\mathscr {N}}(\mathcal {W})\subseteq {\mathscr {N}}(\mathcal {G})\), and hence:
The proof is completed. \(\square \)
Lemma 3.10
For a given tensor \(\mathcal {A}\in \mathbb C^\mathbf{M (m)\times \mathbf{N} (n)}\) and \(\mathcal {X}\in \mathcal {A}\{1\}\), one has \(\mathcal {X}\in \mathcal {A}\{1,2\}\) if and only if \(\text{ rshrank }(\mathcal {X})=\text{ rshrank }(\mathcal {A})\).
Proof
One implication is trivial. Conversely, let \(\text{ rshrank }(\mathcal {X})=\text{ rshrank }(\mathcal {A})\). Clearly, \(\mathscr {R}(\mathcal {X}{*_m}\mathcal {A})\subseteq \mathscr {R}(\mathcal {X})\) and \(\text{ rshrank }(\mathcal {X}{*_m}\mathcal {A})\le \text{ rshrank }(\mathcal {X})\). Now:
Therefore, by Theorem 2.12, \(\mathscr {R}(\mathcal {X}{*_m}\mathcal {A})=\mathscr {R}(\mathcal {X})\). This implies \(\mathcal {X}={\mathcal {X}{*_m}\mathcal {A}{*_n}\mathcal {Y}}\) for some \(\mathcal {Y}\). Multiplying by \(\mathcal {A}\) from the left and using \(\mathcal {X}\in \mathcal {A}\{1\}\) gives \(\mathcal {A}{*_n}\mathcal {X}=\mathcal {A}{*_n}\mathcal {Y}\). Now:
\(\square \)
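In the matrix case (order-2 tensors, where the Einstein product is the ordinary matrix product), Lemma 3.10 can be checked numerically. The sketch below is illustrative, not part of the paper's algorithms: for a full-column-rank \(A\), every \(\{1\}\)-inverse has the form \(X = A^{\dagger } + Z(I - AA^{\dagger })\) with \(Z\) arbitrary, its rank automatically equals the rank of \(A\), and the lemma then predicts that \(X\) is also a \(\{2\}\)-inverse.

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((5, 3))      # full column rank, so rank(A) = 3
Z = rng.standard_normal((3, 5))      # arbitrary parameter in the {1}-inverse family
A_pinv = np.linalg.pinv(A)

# general {1}-inverse of a full-column-rank A
X = A_pinv + Z @ (np.eye(5) - A @ A_pinv)

one_inv = np.allclose(A @ X @ A, A)          # X is a {1}-inverse
two_inv = np.allclose(X @ A @ X, X)          # ... and, as the lemma predicts, a {2}-inverse
rank_match = np.linalg.matrix_rank(X) == np.linalg.matrix_rank(A)
```

Here `X @ A` equals the identity, which is the matrix shadow of the rank argument in the proof.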
Corollary 3.11
Let \(\mathcal {A}\in \mathbb C^\mathbf{M (m)\times \mathbf{N} (n)}\). Then, any two of the following three statements imply the third:
(a) \(\mathcal {X}\in \mathcal {A}\{1\}\);

(b) \(\mathcal {X}\in \mathcal {A}\{1,2\}\);

(c) \(\text{ rshrank }(\mathcal {X})=\text{ rshrank }(\mathcal {A})\).
Proposition 3.12
Let \(\mathcal {A}\in \mathbb C^\mathbf{M (m)\times \mathbf{N} (n)}\) satisfy \(\text{ rshrank }(\mathcal {A})=\prod _{i=1}^r R_i\), and observe \(\mathcal {F}\!\in \!\mathbb C^\mathbf{N (n)\times \mathbf{S} (s)}\), \(\mathcal {G}\!\in \!\mathbb C^\mathbf{S (s)\times \mathbf{M} (m)}\). For \(0<\prod _{i=1}^s S_i\le \prod _{i=1}^r R_i\), the following representations for some outer generalized inverses hold:
Proof
(a) Let \(\mathcal {X}=\mathcal {F}{*_s}(\mathcal {G}{*_m}\mathcal {A}{*_n}\mathcal {F})^{-1}{*_s}\mathcal {G}\). Now:
Thus, \(\mathcal {X}\) is an outer inverse of \(\mathcal {A}\). Clearly, \(\mathscr {R}(\mathcal {X})\subseteq \mathscr {R}(\mathcal {F})\) and \({\mathscr {N}}(\mathcal {G})\subseteq {\mathscr {N}}(\mathcal {X})\). Let \(\mathcal {Y}\in {\mathscr {N}}(\mathcal {X})\). This leads to \(\mathcal {F}{*_s}(\mathcal {G}{*_m}\mathcal {A}{*_n}\mathcal {F})^{-1}{*_s}\mathcal {G}{*_m}\mathcal {Y}=\mathcal {O}\). Multiplying on the left by \(\mathcal {G}{*_m}\mathcal {A}\), we obtain \(\mathcal {G}*_m \mathcal {Y}=\mathcal {O}\). Thus, \(\mathcal {Y}\in {\mathscr {N}}(\mathcal {G})\), and hence \({\mathscr {N}}(\mathcal {G})={\mathscr {N}}(\mathcal {X})\). Since \(\mathcal {F}=\mathcal {F}{*_s}(\mathcal {G}{*_m}\mathcal {A}{*_n}\mathcal {F})^{-1}{*_s}\mathcal {G}{*_m}\mathcal {A}{*_n}\mathcal {F}=\mathcal {X}{*_m}\mathcal {A}{*_n}\mathcal {F}\), we obtain \(\mathscr {R}(\mathcal {F})\subseteq \mathscr {R}(\mathcal {X})\), and therefore \(\mathscr {R}(\mathcal {F})=\mathscr {R}(\mathcal {X})\).
(b). Let \(\mathcal {X}=(\mathcal {G}{*_m}\mathcal {A})^\dagger {*_s}\mathcal {G}\). Observe:
and
Thus, the second part of (b) is verified.
Again, let \(\mathcal {X}=(\mathcal {G}{*_m}\mathcal {A})^*{*_s}\left( \mathcal {G}{*_m}\mathcal {A}{*_n}(\mathcal {G}{*_m}\mathcal {A})^*\right) ^{-1}{*_s}\mathcal {G}\). It can easily be verified that \( \mathcal {X}{*_m}\mathcal {A}{*_n}\mathcal {X}=\mathcal {X} \) and \( (\mathcal {X}{*_m}\mathcal {A})^*= \mathcal {X}{*_m}\mathcal {A}\). Also, it is trivial that \(\mathscr {R}(\mathcal {X})\subseteq \mathscr {R}((\mathcal {G}{*_m}\mathcal {A})^*)\) and \({\mathscr {N}}(\mathcal {G})\subseteq {\mathscr {N}}(\mathcal {X})\). Let \(\mathcal {Y}\in {\mathscr {N}}(\mathcal {X})\). This implies:
Pre-multiplying by \(\mathcal {G}{*_m}\mathcal {A}\), we get \(\mathcal {G}{*_m}\mathcal {Y}=\mathcal {O}\), which yields \({\mathscr {N}}(\mathcal {X})\subseteq {\mathscr {N}}(\mathcal {G})\). Thus, \({\mathscr {N}}(\mathcal {X})={\mathscr {N}}(\mathcal {G})\). Now, \((\mathcal {G}{*_m}\mathcal {A})^*=(\mathcal {G}{*_m}\mathcal {A})^*{*_s}\left( \mathcal {G}{*_m}\mathcal {A}{*_n}(\mathcal {G}{*_m}\mathcal {A})^*\right) ^{-1}{*_s}\mathcal {G}{*_m}\mathcal {A}{*_n}(\mathcal {G}{*_m}\mathcal {A})^*=\mathcal {X}{*_m}\mathcal {A}{*_n}(\mathcal {G}{*_m}\mathcal {A})^*\). Therefore, \(\mathscr {R}((\mathcal {G}{*_m}\mathcal {A})^*)\subseteq \mathscr {R}(\mathcal {X})\), and hence, \(\mathscr {R}((\mathcal {G}{*_m}\mathcal {A})^*)=\mathscr {R}(\mathcal {X})\).
(c) Similar to part (b).
(d) From Corollary 3.11, it is enough to show \(\text{ rshrank }(\mathcal {F}{*_s}(\mathcal {G}{*_m}\mathcal {A}{*_n}\mathcal {F})^{-1}{*_s}\mathcal {G})=\mathbf{R} (r)\). Clearly, \(\text{ rshrank }(\mathcal {F}{*_s}(\mathcal {G}{*_m}\mathcal {A}{*_n}\mathcal {F})^{-1}{*_s}\mathcal {G})\le \text{ rshrank }((\mathcal {G}{*_m}\mathcal {A}{*_n}\mathcal {F})^{-1})= \mathbf{R} (r)\). Now:
This completes the proof. \(\square \)
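For order-2 tensors the Einstein product reduces to ordinary matrix multiplication, so the representation in Proposition 3.12 (a) admits a short numerical sketch. The sizes and random data below are assumptions made for illustration, and \(G{*_m}A{*_n}F\) (here `G @ A @ F`) is assumed invertible:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, s = 5, 4, 2
A = rng.standard_normal((m, n))
F = rng.standard_normal((n, s))   # plays the role of the tensor F
G = rng.standard_normal((s, m))   # plays the role of the tensor G

# X = F (G A F)^{-1} G, the outer inverse with range R(F) and null space N(G)
X = F @ np.linalg.inv(G @ A @ F) @ G

outer = np.allclose(X @ A @ X, X)          # {2}-inverse property
rank_s = np.linalg.matrix_rank(X) == s     # rshrank(X) = S(s)
```

The same computation applies to even-order tensors after reshaping them to matrices.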
3.2 Generalized inversion arising from QR decomposition
Computation of generalized inverses based on full-rank factorizations has been studied by several authors in the recent literature (Behera et al. 2018, 2019; Liang and Zheng 2019). In this section, we extend the work of Stanimirović et al. (2012) to tensors via the Einstein product. In the next result, we employ the QR decomposition to obtain the outer inverse with specified range and null spaces.
Theorem 3.13
Let \(\mathcal {A}\in \mathbb C^\mathbf{M (m)\times \mathbf{N} (n)}, ~\mathcal {W}\in \mathbb C^\mathbf{N (n)\times \mathbf{M} (m)}\) with \({rshrank}(\mathcal {A})=\mathbf{R} (r)\) and \({rshrank}(\mathcal {W})=\mathbf{S} (s)\), where \(\mathbf{S} (s)\le \mathbf{R} (r)\). Suppose that the \(\mathcal {Q}{*_n}\mathcal {R}\) factorization of \(\mathcal {W}\) is given as:
where \(\mathcal {P}\in \mathbb C^\mathbf{M (m)\times \mathbf{M} (m)}\) is a permutation tensor, \(\mathcal {Q}\in \mathbb C^\mathbf{N (n)\times \mathbf{N} (n)}\), \(\mathcal {Q}^*{*_n}\mathcal {Q}=\mathcal {I}\), and \(\mathcal {R}\in \mathbb C^\mathbf{N (n)\times \mathbf{M} (m)}\) with \({rshrank}(\mathcal {R})=\mathbf{S} (s)\). Assume that \(\mathcal {Q}\) and \(\mathcal {R}\) are partitioned in the form:
where \(\mathcal {Q}_1\in \mathbb C^\mathbf{N (n)\times \mathbf{S} (s)}\) and \(\mathcal {R}_{11}\in \mathbb C^\mathbf{S (s)\times \mathbf{S} (s)}\) is nonsingular.
If \(\mathcal {A}\) has a \(\{2\}\)-inverse \(\mathcal {A}_{\mathscr {R}(\mathcal {W}),{\mathscr {N}}(\mathcal {W})}^{(2)}\), then:
(a) \(\mathcal {R}_1{*_m}\mathcal {P}^*{*_m}\mathcal {A}{*_n}\mathcal {Q}_1\) is invertible;

(b) \(\mathcal {A}_{\mathscr {R}(\mathcal {W}),{\mathscr {N}}(\mathcal {W})}^{(2)} = \mathcal {Q}_1{*_s}(\mathcal {R}_1{*_m}\mathcal {P}^*{*_m}\mathcal {A}{*_n}\mathcal {Q}_1)^{-1}{*_s}\mathcal {R}_1{*_m}\mathcal {P}^*\);

(c) \(\mathcal {A}_{\mathscr {R}(\mathcal {W}),{\mathscr {N}}(\mathcal {W})}^{(2)} = \mathcal {A}_{\mathscr {R}(\mathcal {Q}_1),{\mathscr {N}}(\mathcal {R}_1{*_m}\mathcal {P}^*)}^{(2)}\);

(d) \(\mathcal {A}_{\mathscr {R}(\mathcal {W}),{\mathscr {N}}(\mathcal {W})}^{(2)} = \mathcal {Q}_1{*_s}(\mathcal {Q}_1^*{*_n}\mathcal {W}{*_m}\mathcal {A}{*_n}\mathcal {Q}_1)^{-1}{*_s}\mathcal {Q}_1^*{*_n}\mathcal {W}\);

(e) \(\mathcal {A}_{\mathscr {R}(\mathcal {W}),{\mathscr {N}}(\mathcal {W})}^{(2)}\in \mathcal {A}\{2\}_\mathbf{S (s)}\).
Proof
(a) Using the partition of \(\mathcal {Q}\) and \(\mathcal {R}\), we have \(\mathcal {W}=\mathcal {Q}{*_n}\mathcal {R}{*_m}\mathcal {P}^*=\mathcal {Q}_1{*_s}(\mathcal {R}_1{*_m}\mathcal {P}^*)\). It is easy to verify that \(\text{ rshrank }(\mathcal {Q}_1)=\mathbf{S} (s)\) and \(\text{ rshrank }(\mathcal {R}_1)=\text{ rshrank }(\mathcal {R})=\mathbf{S} (s)\). Thus, \(\mathcal {W}=\mathcal {Q}_1{*_s}(\mathcal {R}_1{*_m}\mathcal {P}^*)\) is a full-rank factorization of \(\mathcal {W}\). By Theorem 3.9 (b), and the fact that \(\mathcal {A}_{\mathscr {R}(\mathcal {W}),{\mathscr {N}}(\mathcal {W})}^{(2)}\) is a \(\{2\}\)-inverse of \(\mathcal {A}\), we deduce the invertibility of \(\mathcal {R}_1{*_m}\mathcal {P}^*{*_m}\mathcal {A}{*_n}\mathcal {Q}_1\).

(b), (c) Using a technique similar to that of part (b) in Theorem 3.9, parts (b) and (c) can be verified.

(d) Since \(\mathcal {R}_1{*_m}\mathcal {P}^*=\mathcal {Q}_1^*{*_n}\mathcal {W}\), we have:
$$\begin{aligned} \mathcal {Q}_1{*_s}(\mathcal {R}_1{*_m}\mathcal {P}^*{*_m}\mathcal {A}{*_n}\mathcal {Q}_1)^{-1}{*_s}\mathcal {R}_1{*_m}\mathcal {P}^*= \mathcal {Q}_1{*_s}(\mathcal {Q}_1^*{*_n}\mathcal {W}{*_m}\mathcal {A}{*_n}\mathcal {Q}_1)^{-1}{*_s}\mathcal {Q}_1^*{*_n}\mathcal {W}. \end{aligned}$$
(3.7)
Substituting Eq. (3.7) into part (b), we get \(\mathcal {A}_{\mathscr {R}(\mathcal {W}),{\mathscr {N}}(\mathcal {W})}^{(2)} = \mathcal {Q}_1{*_s}(\mathcal {Q}_1^*{*_n}\mathcal {W}{*_m}\mathcal {A}{*_n}\mathcal {Q}_1)^{-1}{*_s}\mathcal {Q}_1^*{*_n}\mathcal {W}\).

(e) This part follows from part (a) of Proposition 3.12.
\(\square \)
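The construction of Theorem 3.13 (b) can be sketched numerically by reshaping even-order tensors to matrices, under which the Einstein product becomes ordinary matrix multiplication. The sketch below is illustrative: the sizes are assumptions, and the leading columns of the reshaped \(\mathcal {W}\) are assumed to span its range, so no column pivoting is needed and \(\mathcal {P}\) is the identity, as in the examples of Sect. 4.

```python
import numpy as np

rng = np.random.default_rng(1)
Mdims, Ndims, s = (3, 2), (2, 2), 2                 # M(m) = 3x2, N(n) = 2x2, rshrank(W) = s
M, N = int(np.prod(Mdims)), int(np.prod(Ndims))

A = rng.standard_normal(Mdims + Ndims)              # A in R^{3x2x2x2}
# rank-s W in R^{2x2x3x2}, built from a full-rank factorization
W = (rng.standard_normal((N, s)) @ rng.standard_normal((s, M))).reshape(Ndims + Mdims)

Am, Wm = A.reshape(M, N), W.reshape(N, M)           # reshape: tensor -> matrix
Q, R = np.linalg.qr(Wm)                             # Wm = Q @ R (P = identity here)
Q1, R1 = Q[:, :s], R[:s, :]                         # leading full-rank blocks

Xm = Q1 @ np.linalg.solve(R1 @ Am @ Q1, R1)         # Theorem 3.13 (b), via a solve
X = Xm.reshape(Ndims + Mdims)                       # back to a tensor in R^{2x2x3x2}

outer = np.allclose(Xm @ Am @ Xm, Xm)               # {2}-inverse property
qr_ok = np.allclose(Q1 @ R1, Wm)                    # Q1 * R1 recovers W
```

With column pivoting (e.g., a rank-revealing QR), the same recipe extends to a general \(\mathcal {W}\) with a nontrivial permutation \(\mathcal {P}\).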
Suppose that the \(\mathcal {Q}{*_n}\mathcal {R}\) factorization of \(\mathcal {W}\) is of the form:
where \(\mathcal {P}\), \(\mathcal {Q}\), and \(\mathcal {R}\) are the same as in Theorem 3.13. Then, using Proposition 3.12 and Theorem 3.7, we obtain the following characterizations of various outer inverses.
Corollary 3.14
Observe \(\mathcal {A}\in \mathbb C^\mathbf{M (m)\times \mathbf{N} (n)}\) and choose \(\mathcal {W}\in \mathbb C^\mathbf{N (n)\times \mathbf{M} (m)}\) satisfying \(\text{ rshrank }(\mathcal {A})=\mathbf{R} (r)\) and \(\text{ rshrank }(\mathcal {W})=\mathbf{S} (s)\), where \(\mathbf{S} (s)\le \mathbf{R} (r)\). Suppose that the full-rank representation of \(\mathcal {W}\):
arises from its \(\mathcal {Q}{*_n}\mathcal {R}\) decomposition (3.8). Then, the following statements are valid:
Using the uniqueness of the outer inverse (with specified null and range space) along with Theorems 3.13 (b) and 3.9 (b), we conclude the following corollary.
Corollary 3.15
Let all assumptions of Theorem 3.13 be retained and \(\mathcal {W}=\mathcal {F}{*_s}\mathcal {G}\) be a full-rank decomposition of \(\mathcal {W}\). Then:
Lemma 3.16
Consider \(\mathcal {A}\in \mathbb C^\mathbf{M (m)\times \mathbf{N} (n)}\) satisfying \(\text{ rshrank }(\mathcal {A})=\mathbf{S} (s)\). If \(\mathcal {A} = \mathcal {F}_1{*_s}\mathcal {G}_1 = \mathcal {F}_2{*_s}\mathcal {G}_2\) are two full-rank factorizations of \(\mathcal {A}\), then there exists an invertible tensor \(\mathcal {T}\in \mathbb C^\mathbf{S (s)\times \mathbf{S} (s)}\), such that \(\mathcal {F}_2 = \mathcal {F}_1{*_s}\mathcal {T}\) and \(\mathcal {G}_2 = \mathcal {T}^{-1}{*_s}\mathcal {G}_1\).
Proof
Let \(\mathcal {A} = \mathcal {F}_1{*_s}\mathcal {G}_1 = \mathcal {F}_2{*_s}\mathcal {G}_2\) be two full-rank factorizations of \(\mathcal {A}\). Then, \(\mathcal {F}_i^\dagger \) is a left inverse of \(\mathcal {F}_i\) and \(\mathcal {G}_i^\dagger \) is a right inverse of \(\mathcal {G}_i\), \(i=1,2.\) Since \(\mathcal {F}_2{*_s}\mathcal {G}_2=\mathcal {F}_1{*_s}\mathcal {G}_1\), pre-multiplying by \(\mathcal {F}_2^\dagger \), we obtain \(\mathcal {G}_2=\mathcal {F}_2^\dagger {*_m}\mathcal {F}_1{*_s}\mathcal {G}_1\). Now:
Therefore, \(\mathcal {F}_2^\dagger {*_m}\mathcal {F}_1\) is invertible. Similarly, we can show that \(\mathcal {G}_1{*_n}\mathcal {G}_2^\dagger \) is invertible. Let \(\mathcal {S} =\mathcal {F}_2^\dagger {*_m}\mathcal {F}_1\) and \(\mathcal {T}=\mathcal {G}_1{*_n}\mathcal {G}_2^{\dagger }\). Then:
Thus, \(\mathcal {S}=\mathcal {T}^{-1}\), with \(\mathcal {G}_2 = \mathcal {S}{*_s}\mathcal {G}_1 =\mathcal {T}^{-1}{*_s}\mathcal {G}_1\) and \(\mathcal {F}_2 = \mathcal {F}_2{*_s}\mathcal {G}_2{*_n}\mathcal {G}_2^\dagger = \mathcal {F}_1{*_s}\mathcal {G}_1{*_n}\mathcal {G}_2^\dagger = \mathcal {F}_1{*_s}\mathcal {T}.\) \(\square \)
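Lemma 3.16 admits a direct numerical check in the matrix case. The data below are illustrative assumptions, and \(\mathcal {T}\) is recovered as \(\mathcal {G}_1{*_n}\mathcal {G}_2^{\dagger }\), exactly as in the proof:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, s = 5, 4, 2
F1 = rng.standard_normal((m, s))            # full column rank factor
G1 = rng.standard_normal((s, n))            # full row rank factor
T = rng.standard_normal((s, s))             # assumed invertible

# second full-rank factorization of the same A, linked by T
F2, G2 = F1 @ T, np.linalg.inv(T) @ G1

same_A = np.allclose(F1 @ G1, F2 @ G2)      # both factor the same A
T_rec = G1 @ np.linalg.pinv(G2)             # recover T = G1 * G2^dagger
t_ok = np.allclose(T_rec, T)
```

Since \(G_1\) has full row rank, \(G_1 G_1^{\dagger }\) is the identity, which is why the recovery formula returns \(T\) exactly.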
Corollary 3.17
Let \(\mathcal {W}=\mathcal {F}{*_s}\mathcal {G}\) be an arbitrary rank factorization of \(\mathcal {W}\) in accordance with the conditions of Theorem 3.9. If \(\mathcal {W}=\mathcal {Q}_1{*_s}(\mathcal {R}_1{*_m}\mathcal {P}^*)\) is another rank factorization of \(\mathcal {W}\) satisfying the assumptions of Theorem 3.13, then:
where the tensor \(\mathcal {T}\) is equal to \(\mathcal {T}=\mathcal {Q}_1^*{*_n}\mathcal {F}\).
In view of Corollary 3.15, if \(\mathcal {W}\) has two different full-rank representations, \(\mathcal {W}=\mathcal {F}{*_s}\mathcal {G}\) and \(\mathcal {W}=\mathcal {Q}_1{*_s}(\mathcal {R}_1{*_m}\mathcal {P}^*)\), then we have:
To avoid the high complexity involved in tensor multiplication while computing \(\mathcal {A}_{\mathscr {R}(\mathcal {W}),{\mathscr {N}}(\mathcal {W})}^{(2)}\), we can follow the following alternative approaches. For \(\mathfrak {M}<\mathfrak {N}\), we compute the tensor \(\mathcal {X}=(\mathcal {R}_1{*_m}\mathcal {P}^*{*_m}\mathcal {A}{*_n}\mathcal {Q}_1)^{-1}{*_s}\mathcal {R}_1{*_m}\mathcal {P}^*\) by solving the following multilinear system:
Similarly, for \(\mathfrak {N}<\mathfrak {M}\), we solve the following multilinear system:
Obtaining \(\mathcal {X}\) from Eq. (3.13) or (3.14), we compute the outer inverse as follows:
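In reshaped (matrix) form, this alternative amounts to replacing an explicit inverse by a linear solve. The sketch below is a toy illustration with assumed random data, where `B` plays the role of \(\mathcal {R}_1{*_m}\mathcal {P}^*{*_m}\mathcal {A}{*_n}\mathcal {Q}_1\) and `C` the role of \(\mathcal {R}_1{*_m}\mathcal {P}^*\):

```python
import numpy as np

rng = np.random.default_rng(2)
s, m = 3, 5
B = rng.standard_normal((s, s))     # assumed invertible coefficient block
C = rng.standard_normal((s, m))     # right-hand side block

X_inv = np.linalg.inv(B) @ C        # explicit inversion, as in Theorem 3.13 (b)
X_solve = np.linalg.solve(B, C)     # solve B X = C instead, as in (3.13)/(3.14)

agree = np.allclose(X_inv, X_solve)
```

Solving the system is both cheaper and numerically more stable than forming the inverse explicitly.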
Corollary 3.18
Let \(\mathcal {A}\!\in \!\mathbb {C}^\mathbf{M (m)\times \mathbf{N} (n)}\) satisfy \(\text{ rshrank }(\mathcal {A})=\prod _{i=1}^r R_i\), \(\prod _{i=1}^s S_i\le \prod _{i=1}^r R_i\) and the tensors \(\mathcal {F},~\mathcal {G}\) are chosen as in Proposition 3.12. Suppose \(\mathcal {W}\in \mathbb {C}^\mathbf{N (n)\times \mathbf{M} (m)}\) is an arbitrary tensor with \(\text{ rshrank }(\mathcal {W})=\prod _{i=1}^s S_i\).
(a) If (3.9) is a full-rank factorization of \(\mathcal {W}=(\mathcal {G}{*_m}\mathcal {A})^*{*_s}\mathcal {G}\), then:
$$\begin{aligned} \mathcal {A}_{\mathscr {R}(\mathcal {Q}_1),{\mathscr {N}}(\mathcal {R}_1{*_m}\mathcal {P}^*)}^{(2,4)} = \mathcal {Q}_1{*_s}(\mathcal {R}_1{*_m}\mathcal {P}^*{*_m}\mathcal {A}{*_n}\mathcal {Q}_1)^{-1}{*_s}\mathcal {R}_1{*_m}\mathcal {P}^*=(\mathcal {G}{*_m}\mathcal {A})^\dagger {*_s}\mathcal {G}. \end{aligned}$$
(3.16)
(b) If (3.9) is a full-rank factorization of \(\mathcal {W}=\mathcal {F}{*_s}(\mathcal {A}{*_n}\mathcal {F})^* \), then:
$$\begin{aligned} \mathcal {A}_{\mathscr {R}(\mathcal {Q}_1),{\mathscr {N}}(\mathcal {R}_1{*_m}\mathcal {P}^*)}^{(2,3)} = \mathcal {Q}_1{*_s}(\mathcal {R}_1{*_m}\mathcal {P}^*{*_m}\mathcal {A}{*_n}\mathcal {Q}_1)^{-1}{*_s}\mathcal {R}_1{*_m}\mathcal {P}^*=\mathcal {F}{*_s}(\mathcal {A}{*_n}\mathcal {F})^\dagger . \end{aligned}$$
(3.17)
Proof
(a) In view of Proposition 3.12 and Corollary 3.15, the result follows from:
$$\begin{aligned} \mathcal {Q}_1{*_s}(\mathcal {R}_1{*_m}\mathcal {P}^*{*_m}\mathcal {A}{*_n}\mathcal {Q}_1)^{-1}{*_s}\mathcal {R}_1{*_m}\mathcal {P}^*= & {} (\mathcal {G}{*_m}\mathcal {A})^*{*_s}(\mathcal {G}{*_m}\mathcal {A}{*_n}(\mathcal {G}{*_m}\mathcal {A})^*)^{-1}{*_s}\mathcal {G}\\= & {} (\mathcal {G}{*_m}\mathcal {A})^\dagger {*_s}\mathcal {G}. \end{aligned}$$
(b) This part can be verified similarly.
\(\square \)
Utilizing the QR decomposition, the computation of the outer inverse \(\mathcal {A}_{\mathcal {T},\mathcal {S}}^{(2)}\) of a given tensor \(\mathcal {A}\) is elaborated in Algorithm 1.
4 Numerical examples and applications
This section is aimed at the numerical verification of Algorithm 1. In addition, an application of tensor generalized inversion to image deblurring is presented.
4.1 Examples
Example 4.1
Let \(\mathcal {A} = (a_{ijkl})_{{1 \le i\le 3, 1\le j,k,l \le 2}} \in \mathbb {R}^{3\times 2\times 2\times 2}\) with:
Choose the tensors \(\mathcal {F} = (f_{ijk}) \in \mathbb {R}^{2\times 2\times 2}\) and \(\mathcal {G} = (g_{ijk}) \in \mathbb {R}^{ 2\times 3\times 2}\), such that:
Consider the tensor \(\mathcal {W}\) satisfying \(\mathcal {W}=\mathcal {F}{*_1}\mathcal {G}\). The tensors \(\mathcal {Q}_1\), \(\mathcal {R}_1\), and \(\mathcal {P}\), as defined in Eq. (3.6), are, respectively, given by:
and \(\mathcal {P}\) is the identity tensor of order \(3\times 2\times 3\times 2\). From Theorem 3.13 (b) and (c), \(\mathcal {A}^{(2)}_{\mathscr {R}(\mathcal {W}),{\mathscr {N}}(\mathcal {W})} = (x_{ijkl})\in \mathbb {R}^{2\times 2\times 3 \times 2}\) is equal to:
Example 4.2
Consider the tensor \(\mathcal {A}\) as in Example 4.1 and \(\mathcal {W} = \mathcal {A}^{\mathrm T}\in \mathbb {R}^{2\times 2\times 3 \times 2}\). The tensors \(\mathcal {Q}_1 \in \mathbb {R}^{2\times 2\times 3}, \mathcal {R}_1 \in \mathbb {R}^{3\times 3\times 2}\) and \(\mathcal {P} \in \mathbb {R}^{3\times 2\times 3 \times 2}\) satisfying (3.6) are given by:
and \(\mathcal {P}\) is the identity tensor of order \(3\times 2\times 3\times 2\). Using Theorem 3.13 (b) and (c), we obtain the outer inverse \(\mathcal {A}^{(2)}_{\mathscr {R}(\mathcal {W}),{\mathscr {N}}(\mathcal {W})} = (x_{ijkl}) \in \mathbb {R}^{2\times 2\times 3 \times 2}\), where:
By Corollary 3.14, we obtain the Moore–Penrose inverse \(\mathcal {A}^\dagger =\mathcal {A}^{(2)}_{\mathscr {R}(\mathcal {W}),{\mathscr {N}}(\mathcal {W})}\).
Example 4.3
Observe \(\mathcal {A} = (a_{ijkl})_{{1 \le i,k\le 3, 1\le j,l \le 2}} \in \mathbb {R}^{3\times 2\times 3\times 2}\) defined as:
and \(\mathcal {W} = \mathcal {A}\). The tensors \(\mathcal {Q}_1 \in \mathbb {R}^{3 \times 2\times 2\times 2}, \mathcal {R}_1 \in \mathbb {R}^{2 \times 2\times 3\times 2}\) and \(\mathcal {P} \in \mathbb {R}^{3\times 2\times 3 \times 2}\) as defined in (3.6) are equal to:
and \(\mathcal {P}\) is the identity tensor of order \(3\times 2\times 3\times 2\). From Theorem 3.13 (b) and (c), we obtain \(\mathcal {A}^{(2)}_{\mathscr {R}(\mathcal {W}),{\mathscr {N}}(\mathcal {W})}=(x_{ijkl})\in \mathbb {R}^{3\times 2\times 3 \times 2}\), where:
Applying Corollary 3.14, one can observe that \(\mathcal {A}^{\#}=\mathcal {A}^{(2)}_{\mathscr {R}(\mathcal {W}),{\mathscr {N}}(\mathcal {W})}\).
4.2 Image deblurring
The discrete model for two-dimensional (2D) image blurring is represented as:
where A represents a blurring matrix of some specific structure; for instance, A may be a banded, Toeplitz, or block-Toeplitz matrix (Calvetti et al. 1999; Kilmer and Martin 2011). Here, x is the true image and b is the blurred image. In practice, b is corrupted by noise. The blurring matrix A is typically ill-conditioned, so even if A is theoretically invertible, the exact solution x will be contaminated by the noise. The formulation in (4.1) is used in many image restoration problems. For more information on image deblurring and restoration, see Andrews and Hunt (1977), Hansen et al. (2006), and Lagendijk and Biemond (2012).
The three-dimensional (3D) colour image blurring problem, which often occurs in medical or geographic imaging, can be written as a tensor equation:
where \(\mathcal {A}\) is the known blurring tensor, and \(\mathcal {X}\) and \(\mathcal {B}\) are tensors representing the true image and the blurred image, the latter often corrupted by noise. The main objective in image restoration is to recover a blur-free image, which requires an approximate solution of the multilinear system (4.2). To find an approximate solution of the ill-posed system (4.2), several iterative methods, such as tensor forms of the global GMRES, MINIRES, and SYMMLQ, are presented in Huang et al. (2019). Furthermore, iterative methods of LSQR and LSMR type have been discussed by the authors of Huang and Ma (2020).
In our test examples, we consider two \(512\times 512\times 3\) colour images and form the blurred image \(\mathcal {B}\) by computing \(\mathcal {A}*_1\mathcal {X}\), where \(\mathcal {X}\) is the true image. Then, Gaussian noise with a noise level of 0.1 percent was added to \(\mathcal {B}\). The original images and the blurred noisy images are, respectively, shown in Fig. 1a, d and b, e. We reconstructed the true image directly by the least-squares solution \(\mathcal {A}^\dagger *_{1}\mathcal {B}\), where the Moore–Penrose inverse is computed using Algorithm 1 with \(\mathcal {W}=\mathcal {A}^T\). The resulting reconstructed images are displayed in Fig. 1c, f.
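A small-scale analogue of this experiment can be sketched as follows. The \(32\times 32\times 3\) random "image", the Gaussian kernel width, and the use of `np.linalg.pinv` in place of Algorithm 1 are all assumptions made for a self-contained, noise-free illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
n, channels = 32, 3

# banded Gaussian Toeplitz blurring matrix, rows normalized to sum to 1
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
A = np.exp(-0.5 * ((i - j) / 1.0) ** 2)
A /= A.sum(axis=1, keepdims=True)

X_true = rng.random((n, n, channels))            # toy colour image in [0, 1]
B = np.einsum("ij,jkc->ikc", A, X_true)          # blur each channel (mode-1 product)

# least-squares reconstruction via the Moore-Penrose inverse
X_rec = np.einsum("ij,jkc->ikc", np.linalg.pinv(A), B)
rel_err = np.linalg.norm(X_rec - X_true) / np.linalg.norm(X_true)
```

Without noise the reconstruction is essentially exact; with noise added to `B`, the ill-conditioning of the blur operator amplifies the perturbation, which is the regime the paper's experiment addresses.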
5 Conclusion
The tensor form of the QR decomposition was developed via the Einstein product. Several representations and characterizations of the outer inverse with prescribed range and null space were developed and analysed. Appropriate algorithms were developed, and the computation of the outer inverse was explained with the help of the tensor QR decomposition. Subsequently, we discussed the relation of this outer inverse with other generalized inverses, such as the Moore–Penrose inverse, the group inverse, the Drazin inverse, and the weighted Moore–Penrose inverse. The generality of the developed algorithm is reflected in the fact that many other kinds of generalized inverses can be obtained as appropriate special cases. This gives us the flexibility to choose generalized inverses depending on the application. In addition, we discussed an application to the reconstruction of blurred noisy images through the least-squares solution.
References
Andrews HC, Hunt BR (1977) Digital image restoration. Prentice-Hall, New Jersey
Behera R, Maji S, Mohapatra R (2018) Weighted Moore-Penrose inverses of arbitrary-order tensors. arXiv preprint arXiv:1812.03052
Behera R, Mishra D (2017) Further results on generalized inverses of tensors via the Einstein product. Linear Multilinear Algebra 65(8):1662–1682
Behera R, Sahoo JK (2020) Generalized inverses of Boolean tensors via the Einstein product. Linear Multilinear Algebra, :1–26, https://doi.org/10.1080/03081087.2020.1737630
Behera R, Nandi AK, Sahoo, JK (2019) Further results on the Drazin inverse of even order tensors. arXiv preprint arXiv:1904.10783, accepted for publication in Numerical Linear Algebra with Applications
Brazell M, Li N, Navasca C, Tamon C (2013) Solving multilinear systems via tensor inversion. SIAM J Matrix Anal Appl 34(2):542–570
Calvetti D, Reichel L, Zhang Q (1999) Iterative solution methods for large linear discrete ill-posed problems. In: Applied and computational control, signals, and circuits, Vol. 1, volume 1 of Appl. Comput. Control Signals Circuits Birkhäuser Boston, Boston, pp 313–367
Chen Y-L, Chen X (1985) The [2]-inverse with applications to statistics. Linear Algebra Appl 70(1–3):241–248
Chen Y-L, Chen X (2000) Representation and approximation of the outer inverse \(A^{(2)}_{T, S}\) of a matrix \(A\). Linear Algebra Appl. 308(1–3):85–107
Ding W, Wei Y (2016) Solving multi-linear systems with M-tensors. J Sci Comput 68:689–715
Drazin MP (2012) A class of outer generalized inverses. Linear Algebra Appl. 436(7):1909–1923
Du H, Wang B, Ma H (2019) Perturbation theory for core and core-EP Inverses of tensor via Einstein product. Filomat 33(16):5207–5217
Einstein A et al (1916) The foundation of the general theory of relativity. Annalen der Physik 49(7):769–822
Hansen PC, Nagy JG, O’leary DP (2006) Deblurring images: matrices, spectra, and filtering. SIAM, Philadelphia
Huang B, Ma C (2020) Global least squares methods based on tensor form to solve a class of generalized Sylvester tensor equations. Appl. Math. Comput. 369:124892 (16)
Huang B, Xie Y, Ma C (2019) Krylov subspace methods to solve a class of tensor equations via the Einstein product. Numer Linear Algebra Appl 26(4):e2254, 22
Ji J, Wei Y (2017) Weighted Moore-Penrose inverses and fundamental theorem of even-order tensors with Einstein product. Front Math China 12(6):1319–1337
Ji J, Wei Y (2018) The Drazin inverse of an even-order tensor and its application to singular tensor equations. Comput Math Appl 75(9):3402–3413
Jin H, Bai M, Benítez J, Liu X (2017) The generalized inverses of tensors and an application to linear models. Comput Math Appl 74:385–397
Kilmer ME, Martin CD (2011) Factorization strategies for third-order tensors. Linear Algebra Appl 435(3):641–658
Lagendijk RL, Biemond J (2012) Iterative identification and restoration of images. Springer Science & Business Media, Berlin
Lai WM, Rubin DH, Krempl E, Rubin D (2009) Introduction to continuum mechanics. Butterworth-Heinemann, Oxford
Liang M, Zheng B (2019) Further results on Moore-Penrose inverses of tensors with application to tensor nearness problems. Comput. Math. Appl. 77(5):1282–1293
Liang M-L, Zheng B, Zhao RJ (2019) Tensor inversion and its application to the tensor equations with Einstein product. Linear Multilinear Algebra 67(4):843–870
Ma H (2018) Optimal perturbation bounds for the core inverse. Appl Math Comput 336:176–181
Ma H, Li N, Stanimirović PS, Katsikis V (2019) Perturbation theory for Moore-Penrose inverse of tensor via Einstein product. Comp Appl Math 38:Article number 111
Miao Y, Qi L, Wei Y (2020) Generalized tensor function via the tensor singular value decomposition based on the T-product. Linear Algebra Appl. 590:258–303
Nashed MZ (1976) Generalized inverses and applications. Academic Press, New York
Nashed MZ, Chen X (1993) Convergence of Newton-like methods for singular operator equations using outer inverses. Numer Math 66(2):235–257
Sahoo JK, Behera R, Stanimirović PS, Katsikis VN, Ma H (2020) Core and core-EP inverses of tensors. Comp Appl Math 39:9. https://doi.org/10.1007/s40314-019-0983-5
Sheng X, Chen G (2007) Full-rank representation of generalized inverse \(A^{(2)}_{T, S}\) and its application. Comput Math Appl 54(11–12):1422–1430
Stanimirović PS, Ćirić M, Katsikis VN, Li C, Ma H (2018) Outer and (b, c) inverses of tensors. Linear Multilinear Algebra. https://doi.org/10.1080/03081087.2018.1521783
Stanimirović PS, Pappas D, Katsikis VN, Stanimirović IP (2012) Full-rank representations of outer inverses based on the QR decomposition. Appl Math Comput 218(20):10321–10333
Sun L, Zheng B, Bu C, Wei Y (2016) Moore-Penrose inverse of tensors via Einstein product. Linear Multilinear Algebra 64(4):686–698
Sun L, Zheng B, Wei Y, Bu C (2018) Generalized inverses of tensors via a general product of tensors. Front Math China 13(4):893–911
Wang B, Du H, Ma H (2020) Perturbation bounds for DMP and CMP inverses of tensors via Einstein product, Comput Appl Math 39:article 28
Wang G, Wei Y, Qiao S (2018) Generalized inverses: theory and computations, 2nd edn. Springer, Singapore; Science Press, Beijing
Wang W, Wei Y (2017) Mixed and componentwise condition numbers for matrix decompositions. Theoret Comput Sci 681:199–216
Wang X, Che M, Wei Y (2020) Tensor neural network models for tensor singular value decompositions. Comput Opt Appl 75:753–777
Wei Y (1998) A characterization and representation of the generalized inverse \(A^{(2)}_{T, S}\) and its applications. Linear Algebra Appl. 280(2–3):87–96
Wei Y, Zhang N (2004) A note on the representation and approximation of the outer inverse \(A^{(2)}_{T, S}\) of a matrix \(A\). Appl Math Comput 147(3):837–841
Wei Y, Stanimirović PS, Petković M (2018) Numerical and symbolic computations of generalized inverses. World Scientific Publishing Co. Pte. Ltd., Hackensack
Xie P, Xiang H, Wei Y (2019) Randomized algorithms for total least squares problems. Numer Linear Algebra Appl 26:e2219
Zheng B, Bapat RB (2004) Generalized inverse \(A^{(2)}_{T, S}\) and a rank equation. Appl Math Comput 155(2):407–415
Acknowledgements
Ratikanta Behera is grateful to the Mohapatra Family Foundation and the College of Graduate Studies, University of Central Florida, Orlando, for their financial support for this research.
Predrag Stanimirović gratefully acknowledges support from the Ministry of Education, Science and Technological Development, Republic of Serbia, Grant No. 174013.
Communicated by Jinyun Yuan.
Sahoo, J.K., Behera, R., Stanimirović, P.S. et al. Computation of outer inverses of tensors using the QR decomposition. Comp. Appl. Math. 39, 199 (2020). https://doi.org/10.1007/s40314-020-01225-4