1 Introduction

1.1 Background and motivation

Tensors, or hypermatrices, are multidimensional generalizations of vectors and matrices, and have attracted tremendous interest in recent years (see Kolda and Bader 2009; Martin and Loan 2008; Ragnarsson and Loan 2012; Qi 2005; Shao 2013). Indeed, multilinear systems are closely related to tensors, and such systems are encountered in a number of fields of practical interest, e.g., signal processing (see Lathauwer et al. 2000; Sidiropoulos et al. 2017; Coppi and Bolasco 1989), scientific computing (see Beylkin and Mohlenkamp 2005; Shi et al. 2013; Brazell et al. 2013), data mining (Chew et al. 2007), and data compression and retrieval of large structured data (see de Silva and Lim 2008; Che et al. 2018). Further, the Moore–Penrose inverse of tensors plays an important role in solving such multilinear systems (see Behera and Mishra 2017; Jin et al. 2017; Ma et al. 2019), and the reverse-order law for the Moore–Penrose inverses of tensors yields a class of interesting problems that are fundamental in the theory of generalized inverses of tensors (see Panigrahy et al. 2020; Sahoo and Behera 2020). In view of these, multilinear algebra is drawing more and more attention from researchers (see Jin et al. 2017; Bader and Kolda 2006; Martin and Loan 2008; Kruskal 1977; Lathauwer et al. 2000); specifically, the recent findings in Behera and Mishra (2017), Brazell et al. (2013), Ji and Wei (2017), Panigrahy et al. (2020), Stanimirović et al. (2020), Sun et al. (2016) and Behera et al. (2020) motivate us to study this subject in the framework of arbitrary-order tensors.

Let \({\mathbb {C}}^{I_1\times \cdots \times I_N}\) (\({\mathbb {R}}^{I_1\times \cdots \times I_N}\)) be the set of order-N tensors of dimension \(I_1 \times \cdots \times I_N\) over the complex (real) field \({\mathbb {C}}\) (\({\mathbb {R}}\)). Let \(\mathcal {A} \in {\mathbb {C}}^{I_1\times \cdots \times I_N}\) be an Nth-order multiway array, where \(I_1, I_2, \ldots , I_N\) are the dimensions of the first, second, \(\ldots \), Nth way, respectively. Indeed, a matrix is a second-order tensor, and a vector is a first-order tensor. We denote by \({{{\mathbb {R}}}}^{m \times n}\) the set of all \({m \times n}\) matrices with real entries. Note that throughout the paper, tensors are represented by calligraphic letters like \(\mathcal {A}\), and the notation \((\mathcal {A})_{i_1\ldots i_N}= a_{i_1\ldots i_N}\) represents its entries, i.e., each entry of \(\mathcal {A}\) is denoted by \(a_{i_1\ldots i_N}\). The Einstein product (see Einstein 2007) \( \mathcal {A}{*_N}\mathcal {B} \in {\mathbb {C}}^{I_1\times \cdots \times I_M \times K_1 \times \cdots \times K_L }\) of tensors \(\mathcal {A} \in {\mathbb {C}}^{I_1\times \cdots \times I_M \times J_1 \times \cdots \times J_N }\) and \(\mathcal {B} \in {\mathbb {C}}^{J_1\times \cdots \times J_N \times K_1 \times \cdots \times K_L }\) is defined by the operation \({*_N}\) via

$$\begin{aligned} (\mathcal {A}{*_N}\mathcal {B})_{i_1\ldots i_M k_1\ldots k_L} =\displaystyle \sum _{j_1\ldots j_N}a_{{i_1\ldots i_M}{j_1\ldots j_N}}b_{{j_1\ldots j_N}{k_1\ldots k_L}}. \end{aligned}$$
(1)

The Einstein product is associative but not commutative, and it distributes over tensor addition. Further, the cancellation law does not hold in general, but there is a multiplicative identity tensor \(\mathcal {I}\). This type of tensor product is used in the study of the theory of relativity (Einstein 2007) and also in the area of continuum mechanics (Lai et al. 2009).
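For readers who wish to experiment numerically, the Einstein product (1) amounts to contracting the trailing N modes of \(\mathcal {A}\) with the leading N modes of \(\mathcal {B}\). The following Python/NumPy sketch is our own illustration (the helper name and the chosen sizes are arbitrary); it is not taken from the cited references.

```python
import numpy as np

def einstein_product(A, B, N):
    """Einstein product A *_N B of equation (1): sum over the last N
    modes of A and the first N modes of B (a sketch, not library code)."""
    return np.tensordot(A, B, axes=N)

# illustrative sizes (our choice): A in R^{2x3x4x5}, B in R^{4x5x6}
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3, 4, 5))
B = rng.standard_normal((4, 5, 6))

C = einstein_product(A, B, 2)            # A *_2 B has shape (2, 3, 6)
C2 = np.einsum('abjk,jkc->abc', A, B)    # the same contraction via einsum
assert np.allclose(C, C2)
```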

On the other hand, one of the most successful developments in the world of linear algebra is the concept of the Singular Value Decomposition (SVD) of matrices (Ben-Israel and Greville 1974). This concept gives us important information about a matrix such as its rank, an orthonormal basis for the column or row space, and a reduction to diagonal form (Tian and Cheng 2004). Recently, this concept has also been used in low-rank matrix approximation (Grasedyck 2004; Ishteva et al. 2011; Ye 2005). Since tensors are natural multidimensional generalizations of matrices, there are many applications involving arbitrary-order tensors. Further, the problem of decomposing tensors is approached in a variety of ways by extending the SVD, and extensive studies have exposed many aspects of such decompositions and their applications (see, for example, Chen et al. 2017; Kolda and Bader 2009; Kruskal 1977; Lathauwer et al. 2000; Sidiropoulos et al. 2017; Liang and Zheng 2019). However, the existing framework of the SVD of tensors appears to be insufficient and/or inadequate in several situations.

The aim of this paper is to present a proper generalization of the SVD of arbitrary-order tensors under the Einstein tensor product. In fact, the existing form of the SVD (Brazell et al. 2013) is well suited only for square tensors; it relies on the transformation defined as follows:

Definition 1

(Definition 2.8, Brazell et al. 2013): The transformation defined as

\(f : {\mathbb {T}}_{I,J,I,J}({\mathbb {R}}) \longrightarrow {\mathbb {M}}_{IJ,IJ}({\mathbb {R}})\) with \(f(\mathcal {A}) = A\), defined component-wise as

$$\begin{aligned} (f(\mathcal {A}))_{[i+(j-1)I],[k+(l-1)I]} = a_{ijkl}, \end{aligned}$$

where \({\mathbb {T}}_{I,J,I,J}({\mathbb {R}}) = \{ \mathcal {A} \in {\mathbb {R}}^{I\times J \times I \times J}~:~\det (f(\mathcal {A})) \ne 0 \}\). In general, for any even-order tensor, the transformation is defined as \(f : {\mathbb {T}}_{I_1,\ldots ,I_N,J_1,\ldots ,J_N}({\mathbb {R}}) \longrightarrow {\mathbb {M}}_{I_1\cdots I_N,J_1\cdots J_N}({\mathbb {R}})\) with

$$\begin{aligned} (f(\mathcal {A}))_{\left[ i_1+\sum _{k=2}^{N} (i_k-1)\prod _{l=1}^{k-1} I_l\right] ,\left[ j_1+\sum _{k=2}^{N} (j_k-1)\prod _{l=1}^{k-1} J_l\right] } = a_{i_1\ldots i_N j_1\ldots j_N}. \end{aligned}$$

Using the above definition and Theorem 3.17 in Brazell et al. (2013), we obtain the SVD of a tensor \(\mathcal {A}\in {\mathbb {R}}^{I\times J \times I \times J}\); this construction extends only to square tensors, i.e., to \(\mathcal {A}\in {\mathbb {R}}^{I_1\times I_2 \times \cdots \times I_N \times I_1\times I_2 \times \cdots \times I_N}\). Extending the SVD to an arbitrary-order tensor by this method (Brazell et al. 2013) is impossible, since f is not a homomorphism for general even-order and/or arbitrary-order tensors. In fact, the Einstein product of the two even-order tensors \(\mathcal {A}\in {\mathbb {R}}^{I_1\times I_2 \times J_1\times J_2} \) and \(\mathcal {B}\in {\mathbb {R}}^{I_1\times I_2 \times J_1 \times J_2} \) is in general not defined, i.e., \(\mathcal {A}*_2\mathcal {B}\) is not defined unless \(J_1 = I_1\) and \(J_2 = I_2\). Therefore, our aim in this paper is to find the SVD of arbitrary-order tensors using the reshape operation, which is discussed in the next section.

In addition, recently there has been increasing interest in analyzing inverses and generalized inverses of tensors based on different tensor products (see Sahoo et al. 2020; Ji and Wei 2018; Jin et al. 2017; Brazell et al. 2013; Sun et al. 2016). Representations and properties of the ordinary tensor inverse were introduced in Brazell et al. (2013). This interpretation was extended to the Moore–Penrose inverse of tensors in Sun et al. (2016), and a few characterizations of different generalized inverses of tensors via the Einstein product were investigated in Behera and Mishra (2017). Accordingly, Behera and Mishra (2017) posed the open question: “Does there exist a full rank decomposition of tensors? If so, can this be used to compute the Moore–Penrose inverse of a tensor?” It is worth mentioning that Liang and Zheng (2019) investigated this question and discussed the computation of the Moore–Penrose inverse of tensors using the full rank decomposition.

In this paper, we study the singular value decomposition and full-rank decomposition of arbitrary-order tensors through the reshape operation. The derived representations are usable in generating corresponding representations of the Moore–Penrose inverse and the weighted Moore–Penrose inverse of arbitrary-order tensors. However, until now, these decompositions and representations have been limited to special kinds of tensors, since the multiplication of two tensors of arbitrary order is impossible with existing tensor multiplication techniques. The multiplication \(\mathcal {A} *\mathcal {B}\) of two tensors \(\mathcal {A}, \mathcal {B} \in {\mathcal {R}}^{N_1\times N_2\times \cdots \times N_p}\) using the t-product (see Braman 2010; Liang and Zheng 2019; Martin et al. 2013) requires \(N_1 =N_2\); further, even if \(N_1 =N_2\) but the remaining dimensions differ, the product is not possible. For example, if \(\mathcal {A}\in {\mathcal {R}}^{2\times 3\times 4\times 5}\) and \(\mathcal {B}\in {\mathcal {R}}^{2\times 3\times 7\times 8}\), then \(\mathcal {A}*\mathcal {B}\) is not defined. The drawback of multiplying two arbitrary-order tensors using the Einstein product was mentioned in a previous paragraph. Hence, the SVD and full-rank decomposition (see Brazell et al. 2013; Sun et al. 2016; Liang and Zheng 2019) of arbitrary-order tensors are not available in several applications. The main advantage of the reshape operation on tensors is that it establishes a general framework for multiplying arbitrary-order tensors. The appeal of the reshape operation is that the elements are rearranged from the tensor case into the matrix case and vice versa; thus, it gives us the freedom and flexibility to choose the order of the tensors. For example, consider a tensor \(\mathcal {A} \in {\mathcal {R}}^{3\times 4\times 5\times 6\times 7}\). Then the tensor can be represented as tensors and matrices of different forms, as listed below (a short numerical sketch follows the list).

  • The tensor \(\mathcal {B}_1=reshape(\mathcal {A}) \in {\mathcal {R}}^{12\times 5\times 6\times 7} \), i.e., the fifth-order tensor is transformed into a fourth-order tensor.

  • The tensor \(\mathcal {B}_2=reshape(\mathcal {A}) \in {\mathcal {R}}^{5\times 7\times 6\times 3\times 4} \), i.e., the fifth-order tensor is transformed into a fifth-order tensor of different size.

  • The matrix \(B_3=reshape(\mathcal {A}) \in {\mathcal {R}}^{60\times 42}\), i.e., the fifth-order tensor is transformed into a matrix.

  • The matrix \(B_4=reshape(\mathcal {A}) \in {\mathcal {R}}^{21\times 120}\), i.e., the fifth-order tensor is transformed into a matrix of a different size.
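The reshapes listed above can be reproduced, for instance, with NumPy; column-major (Fortran) ordering is used below so as to mimic the MATLAB-style reshape recalled later in Definition 4. Apart from the sizes of the example, everything in this sketch is our own illustration.

```python
import numpy as np

# a fifth-order tensor with 3*4*5*6*7 = 2520 entries (illustrative data)
A = np.arange(3 * 4 * 5 * 6 * 7, dtype=float).reshape(3, 4, 5, 6, 7, order='F')

B1 = A.reshape(12, 5, 6, 7, order='F')     # fifth-order -> fourth-order tensor
B2 = A.reshape(5, 7, 6, 3, 4, order='F')   # fifth-order -> fifth-order, different size
B3 = A.reshape(60, 42, order='F')          # fifth-order -> 60 x 42 matrix
B4 = A.reshape(21, 120, order='F')         # fifth-order -> 21 x 120 matrix

# the number of elements is preserved by every reshape
for B in (B1, B2, B3, B4):
    assert B.size == A.size
```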

A summary of the main facets of this discussion may be listed in the following way:

  1. We have studied the singular value decomposition and full-rank decomposition of arbitrary-order tensors through the reshape operation. The weighted Moore–Penrose inverse of an arbitrary-order tensor is then introduced.

  2. We have further studied the range and null space of tensors. We have also added a few characterizations of the Moore–Penrose inverse and the weighted Moore–Penrose inverse of arbitrary-order tensors via the Einstein product to the existing theory.

  3. We have discussed some necessary and sufficient conditions for the reverse-order law to hold for weighted Moore–Penrose inverses of arbitrary-order tensors.

  4. An application of the singular value decomposition and the Moore–Penrose inverse to a few 3D color images is presented.

Recently, Panigrahy and Mishra (2020) investigated the Moore–Penrose inverse of a product of two tensors via the Einstein product. Using the theory of the Einstein product, Stanimirović et al. (2020) also introduced some basic properties of the range and null space of multidimensional arrays, and an effective definition of tensor rank, termed the reshaping rank. Recently, Sahoo et al. (2020) added a few results on the reshape operation of a tensor to the existing theory. In this respect, Panigrahy et al. (2020) obtained a few necessary and sufficient conditions for the reverse-order law for the Moore–Penrose inverses of tensors, which can be used to simplify various tensor expressions involving inverses of tensor products (Ding and Wei 2016). Since then, many authors have investigated the reverse-order law for various classes of generalized inverses of tensors (Che and Wei 2020; Panigrahy and Mishra 2020; Sahoo and Behera 2020). At the same time, representations of the weighted Moore–Penrose inverse (Ji and Wei 2017) of an even-order tensor were introduced via the Einstein product. In this context, we focus our attention on exploring some characterizations and representations of the weighted Moore–Penrose inverse of arbitrary-order tensors.

In this paper, we study the weighted Moore–Penrose inverse of an arbitrary-order tensor. This study can lead to enhanced computation of the SVD and full-rank decomposition of arbitrary-order tensors using the reshape operation. With that in mind, we discuss some identities involving the weighted Moore–Penrose inverses of tensors and then obtain a few necessary and sufficient conditions for the reverse-order law for the weighted Moore–Penrose inverses of arbitrary-order tensors via the Einstein product.

1.2 Outline

We organize the paper as follows. In the next subsection, we introduce some notation and definitions which are helpful in proving the main results of this paper. In Sect. 2, we provide the main results of the paper. To do so, we introduce the SVD and full-rank decomposition of an arbitrary-order tensor using the reshape operation. Within this framework, the Moore–Penrose inverse and the generalized weighted Moore–Penrose inverse of an arbitrary-order tensor are defined. Furthermore, we obtain several identities involving the weighted Moore–Penrose inverses of tensors via the Einstein product. Section 3 contains a few necessary and sufficient conditions for the reverse-order law for the weighted Moore–Penrose inverses of tensors.

1.3 Notations and definitions

For convenience, we first briefly explain a few essential facts about the Einstein product of tensors, which can be found in Behera and Mishra (2017), Brazell et al. (2013) and Sun et al. (2016). For a tensor \( \mathcal {A}=(a_{{i_1}\ldots {i_M}{j_1}\ldots {j_N}}) \in {\mathbb {C}}^{I_1\times \cdots \times I_M \times J_1 \times \cdots \times J_N}\), the tensor \(\mathcal {B} =(b_{{j_1}\ldots {j_N}{i_1}\ldots {i_M}}) \in {\mathbb {C}}^{J_1\times \cdots \times J_N \times I_1 \times \cdots \times I_M}\) is said to be the conjugate transpose of \(\mathcal {A}\) if \( b_{{j_1}\ldots {j_N}{i_1}\ldots {i_M}} ={\overline{a}}_{{i_1}\ldots {i_M}{j_1}\ldots {j_N}}\), and \(\mathcal {B}\) is denoted by \(\mathcal {A}^*\). When \(b_{{j_1}\ldots {j_N}{i_1}\ldots {i_M}} = {a}_{{i_1}\ldots {i_M}{j_1}\ldots {j_N}}\), \(\mathcal {B}\) is the transpose of \(\mathcal {A}\), denoted by \(\mathcal {A}^T\). The Frobenius norm \(||\cdot ||_F\) is defined (Sun et al. 2016) as follows:

$$\begin{aligned} ||\mathcal {A}||_F = \left( \displaystyle \displaystyle \sum _{i_1\ldots i_Nj_1\ldots j_N} |a_{{i_1\ldots i_N}{j_1\ldots j_N}}|^2\right) ^{\frac{1}{2}}~~~ \text {for}~~~\mathcal {A} \in {\mathbb {C}}^{I_1\times \cdots \times I_N\times J_1\times \cdots \times J_N}. \end{aligned}$$

The definition of the diagonal tensor is borrowed from Sun et al. (2016), and is obtained by generalizing Definition 3.12, Brazell et al. (2013).

Definition 2

A tensor \( \mathcal {D} \in {\mathbb {C}}^{I_1\times \cdots \times I_M\times J_1 \times \cdots \times J_N} \) with entries \( d_{{i_1}\ldots {i_M}{j_1}\ldots {j_N}}\) is called a diagonal tensor if \(d_{{i_1}\ldots {i_M}{j_1}\ldots {j_N}} = 0 \) whenever

$$\begin{aligned} \left[ i_1+\sum _{k=2}^{M} (i_k - 1) \prod _{l=1}^{k-1} I_l\right] \ne \left[ j_1+\sum _{k=2}^{N} (j_k-1) \prod _{l=1}^{k-1} J_l\right] . \end{aligned}$$

Now we recall the definition of an identity tensor below.

Definition 3

(Definition 3.13, Brazell et al. 2013) A tensor \( \mathcal {I}_N \in {\mathbb {C}}^{J_1\times \cdots \times J_N\times J_1 \times \cdots \times J_N} \) with entries \( (\mathcal {I}_N)_{i_1i_2 \cdots i_Nj_1j_2\cdots j_N} = \prod _{k=1}^{N} \delta _{i_k j_k}\), where

$$\begin{aligned} \delta _{i_kj_k}= {\left\{ \begin{array}{ll} 1, &{} i_k = j_k,\\ 0, &{} i_k \ne j_k , \end{array}\right. } \end{aligned}$$

is called a unit tensor or identity tensor.

Note that throughout the paper, we denote by \(\mathcal {I}_M \), \( \mathcal {I}_L \) and \( \mathcal {I}_R \) the identity tensors in the spaces \( {\mathbb {C}}^{I_1\times \cdots \times I_M\times I_1 \times \cdots \times I_M} \), \( {\mathbb {C}}^{K_1\times \cdots \times K_L\times K_1 \times \cdots \times K_L} \) and \({\mathbb {C}}^{H_1\times \cdots \times H_R\times H_1 \times \cdots \times H_R} \), respectively. Further, \(\mathcal {O}\) denotes the zero tensor, i.e., the tensor all of whose entries are zero. A tensor \(\mathcal {A}\in {\mathbb {C}}^{I_1\times \cdots \times I_N \times I_1 \times \cdots \times I_N}\) is Hermitian if \(\mathcal {A}=\mathcal {A}^*\) and skew-Hermitian if \(\mathcal {A}= - \mathcal {A}^*\). Further, a tensor \(\mathcal {A}\in {\mathbb {C}}^{I_1\times \cdots \times I_N \times I_1 \times \cdots \times I_N}\) is unitary if \(\mathcal {A}{*_N}\mathcal {A}^*=\mathcal {A}^*{*_N}\mathcal {A}=\mathcal {I}_N \), and idempotent if \(\mathcal {A} {*_N}\mathcal {A}= \mathcal {A}.\) In the case of tensors with real entries, Hermitian, skew-Hermitian and unitary tensors are called symmetric (see Definition 3.16, Brazell et al. 2013), skew-symmetric and orthogonal (see Definition 3.15, Brazell et al. 2013) tensors, respectively. Next, we present the definition of the reshape operation, which was introduced earlier in Stanimirović et al. (2020). This is a more general way of rearranging the entries of a tensor (it is also a standard MATLAB function), as follows:

Definition 4

(Definition 3.1, Stanimirović et al. 2020): The 1–1 and onto reshape map, rsh, is defined as

\(rsh : {\mathbb {C}}^{I_1 \times \cdots \times I_M \times J_1 \times \cdots \times J_N} \longrightarrow {\mathbb {C}}^{I_1\cdots I_M \times J_1\cdots J_N}\) with

$$\begin{aligned} rsh(\mathcal {A}) = A = reshape(\mathcal {A},I_1\cdots I_M,J_1\cdots J_N), \end{aligned}$$
(4)

where \( \mathcal {A} \in {\mathbb {C}}^{I_1 \times \cdots \times I_M \times J_1 \times \cdots \times J_N} \) and the matrix \( A \in {\mathbb {C}}^{I_1\cdots I_M \times J_1\cdots J_N}\). Further, the inverse reshaping is the mapping defined as \(rsh^{-1} : {\mathbb {C}}^{I_1\cdots I_M \times J_1\cdots J_N} \longrightarrow {\mathbb {C}}^{I_1 \times \cdots \times I_M \times J_1 \times \cdots \times J_N}\) with

$$\begin{aligned} rsh^{-1}(A) = \mathcal {A} = reshape(A,I_1,\cdots ,I_M,J_1,\cdots ,J_N), \end{aligned}$$
(5)

where the matrix \( A \in {\mathbb {C}}^{I_1\cdots I_M \times J_1\cdots J_N}\) and the tensor \(\mathcal {A} \in {\mathbb {C}}^{I_1 \times \cdots \times I_M \times J_1 \times \cdots \times J_N}\).

Further, Lemma 3.2 in Stanimirović et al. (2020) defined the rank of a tensor \(\mathcal {A} \), denoted by \( rshrank(\mathcal {A}) \), as

$$\begin{aligned} rshrank(\mathcal {A}) = rank(rsh(\mathcal {A})). \end{aligned}$$
(6)

Continuing this research, Stanimirović et al. (2020) discussed the homomorphism properties of the rsh function, as follows:

Lemma 1

(Lemma 3.1 Stanimirović et al. 2020) Let \( \mathcal {A}\in {\mathbb {C}}^{I_1\times \cdots \times I_M\times J_1 \times \cdots \times J_N } \) and \( \mathcal {B} \in {\mathbb {C}}^{J_1 \times \cdots \times J_N \times K_1 \times \cdots \times K_L} \) be given tensors. Then

$$\begin{aligned} rsh(\mathcal {A}{*_N}\mathcal {B}) = rsh(\mathcal {A})rsh(\mathcal {B}) = AB \in {\mathbb {C}}^{I_1\cdots I_M \times K_1\cdots K_L }, \end{aligned}$$
(7)

where \( A = rsh(\mathcal {A}) \in {\mathbb {C}}^{I_1\cdots I_M \times J_1 \cdots J_N}, B = rsh(\mathcal {B}) \in {\mathbb {C}}^{J_1\cdots J_N \times K_1\cdots K_L} \).

An immediate consequence of the above Lemma is the following:

$$\begin{aligned} \mathcal {A}{*_N}\mathcal {B} = rsh^{-1}(AB), ~~i.e.,~~ rsh^{-1}(AB) = rsh^{-1}(A){*_N}rsh^{-1}(B). \end{aligned}$$
(8)
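Properties (7) and (8) are easy to verify numerically. The sketch below is our own illustration (the helpers rsh and rsh_inv and the chosen sizes are ours); it uses column-major reshaping to mimic the MATLAB-style rsh map of Definition 4.

```python
import numpy as np

def rsh(T, rows, cols):
    """rsh of Definition 4: flatten a tensor of shape rows + cols into a
    matrix, grouping modes in column-major (MATLAB-style) order."""
    return T.reshape(int(np.prod(rows)), int(np.prod(cols)), order='F')

def rsh_inv(M, rows, cols):
    """Inverse reshaping (5): matrix back to a tensor of shape rows + cols."""
    return M.reshape(*rows, *cols, order='F')

rng = np.random.default_rng(1)
I, J, K = (2, 3), (4, 5), (2, 2)                 # here M = N = L = 2
A = rng.standard_normal(I + J)
B = rng.standard_normal(J + K)

lhs = rsh(np.tensordot(A, B, axes=2), I, K)      # rsh(A *_2 B)
rhs = rsh(A, I, J) @ rsh(B, J, K)                # rsh(A) rsh(B)
assert np.allclose(lhs, rhs)                     # property (7)

AB = rsh_inv(rsh(A, I, J) @ rsh(B, J, K), I, K)  # property (8)
assert np.allclose(AB, np.tensordot(A, B, axes=2))
```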

Existence of SVD of any square tensor is discussed in Brazell et al. (2013). Using this framework, Ji and Wei (2017) defined Hermitian positive definite tensors, as follows:

Definition 5

(Definition 1, Ji and Wei 2017) For \(\mathcal {P} \in {\mathbb {C}}^{I_1\times \cdots \times I_N \times I_1 \times \cdots \times I_N} \), if there exists a unitary tensor \( \mathcal {U} \in {\mathbb {C}}^{I_1 \times \cdots \times I_N \times I_1 \times \cdots \times I_N} \) such that

$$\begin{aligned} \mathcal {P} =\mathcal {U} *_N \mathcal {D} *_N \mathcal {U}^*, \end{aligned}$$
(9)

where \( \mathcal {D} \in {\mathbb {C}}^{I_1 \times \cdots \times I_N \times I_1 \times \cdots \times I_N} \) is a diagonal tensor with positive diagonal entries, then \(\mathcal {P}\) is said to be a Hermitian positive definite tensor.

Further, Ji and Wei (2017) defined the square root of a Hermitian positive definite tensor, \(\mathcal {P}\) as follows:

$$\begin{aligned} \mathcal {P}^{1/2} = \mathcal {U}*_N \mathcal {D}^{1/2} *_N \mathcal {U}^*, \end{aligned}$$

where \(\mathcal {D}^{1/2}\) is the diagonal tensor obtained from \(\mathcal {D}\) by taking the square root of all its diagonal entries. Notice that \(\mathcal {P}^{1/2}\) is always non-singular and its inverse is denoted by \(\mathcal {P}^{-1/2}\).
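Under the reshape correspondence, \(\mathcal {P}^{1/2}\) can be computed from an eigendecomposition of the matricization of \(\mathcal {P}\). The sketch below is our own construction (assuming the column-major rsh of Definition 4 and randomly generated illustrative data); it is not code from Ji and Wei (2017).

```python
import numpy as np

def tensor_sqrt_hpd(P, dims):
    """Square root of a Hermitian positive definite tensor P in
    C^{dims x dims}, computed on its matricization (a sketch)."""
    n = int(np.prod(dims))
    Pm = P.reshape(n, n, order='F')            # rsh(P)
    w, Q = np.linalg.eigh(Pm)                  # Pm = Q diag(w) Q^*
    Sm = (Q * np.sqrt(w)) @ Q.conj().T         # matrix square root of Pm
    return Sm.reshape(*dims, *dims, order='F')

# illustrative Hermitian positive definite tensor in C^{2x3x2x3}
rng = np.random.default_rng(2)
G = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
P = (G @ G.conj().T + 6 * np.eye(6)).reshape(2, 3, 2, 3, order='F')

S = tensor_sqrt_hpd(P, (2, 3))
assert np.allclose(np.tensordot(S, S, axes=2), P)   # S *_2 S = P
```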

We now recall the definition of the range and the null space of arbitrary-order tensors.

Definition 6

(Definition 2.1, Stanimirović et al. 2020): The null space and the range space of a tensor \( \mathcal {A} \in {\mathbb {C}}^{I_1\times \cdots \times I_M \times J_1 \times \cdots \times J_N} \) are defined as follows:

$$\begin{aligned} \mathcal {N}(\mathcal {A}) = \{\mathcal {X} \in {\mathbb {C}}^{J_1 \times \cdots \times J_N} : \mathcal {A}*_N\mathcal {X} = \mathcal {O} \in {\mathbb {C}}^{I_1\times \cdots \times I_M}\},~ and ~ {\mathfrak {R}}(\mathcal {A}) = \{ \mathcal {A}*_N\mathcal {X} : \mathcal {X} \in {\mathbb {C}}^{J_1 \times \cdots \times J_N } \}. \end{aligned}$$

It is easily seen that \(\mathcal {N}(\mathcal {A})\) is a subspace of \({\mathbb {C}}^{J_1\times \cdots \times J_N}\) and \({\mathfrak {R}}(\mathcal {A})\) is a subspace of \({\mathbb {C}}^{I_1 \times \cdots \times I_M}\). In particular, \(\mathcal {N}(\mathcal {A}) = \{ \mathcal {O} \}\) if and only if \(\mathcal {A}\) is left invertible via the \(*_M\) operation, and \({\mathfrak {R}}(\mathcal {A}) = {\mathbb {C}}^{I_1 \times \cdots \times I_M} \) if and only if \(\mathcal {A}\) is right invertible via the \(*_N\) operation.

2 Main results

Mathematical modelling of problems in science and engineering typically involves solving multilinear systems; this becomes particularly challenging for problems involving an arbitrary-order tensor. However, the existing framework on Moore–Penrose inverses of arbitrary-order tensors appears to be insufficient and/or inappropriate. It is thus of interest to study the theory of the Moore–Penrose inverse of an arbitrary-order tensor via the Einstein product.

2.1 Moore–Penrose inverses

The SVD is one of the most widely used tools for computing the Moore–Penrose inverse. Here we present a generalization of the SVD via the Einstein product.

Lemma 2

Let \( \mathcal {A}\in {\mathbb {C}}^{I_1\times \cdots \times I_M\times J_1 \times \cdots \times J_N } \) with \( rshrank(\mathcal {A}) = r \). Then the SVD for tensor \( \mathcal {A}\) has the form

$$\begin{aligned} \mathcal {A} = \mathcal {U}{*_M}\mathcal {D}{*_N}\mathcal {V}^*, \end{aligned}$$
(10)

where  \( \mathcal {U} \in {\mathbb {C}}^{I_1 \times \cdots \times I_M \times I_1 \times \cdots \times I_M} \) and \( \mathcal {V} \in {\mathbb {C}}^{J_1 \times \cdots \times J_N \times J_1 \times \cdots \times J_N} \) are unitary tensors, and \( \mathcal {D} \in {\mathbb {C}}^{I_1 \times \cdots \times I_M \times J_1 \times \cdots \times J_N } \) is a diagonal tensor, defined by

$$\begin{aligned} (\mathcal {D})_{i_1 \cdots i_M j_1 \cdots j_N} = \left\{ \, \begin{array}{ll} \sigma _{I} >0, &{} {\text {if}} {I} = {J} \in {\{1,2,\ldots ,r \}}, \\ 0, &{} {\text {otherwise}}, \end{array} \right. \end{aligned}$$

where \( I = [i_1+\sum _{k=2}^{M} ({i_k - 1}) \prod _{l=1}^{k-1} I_l] \) and \( {J} = [j_1+\sum _{k=2}^{N} (j_k-1) \prod _{l=1}^{k-1} J_l] \).

Proof

Let \( A = rsh(\mathcal {A}) \in {\mathbb {C}}^{I_1\cdots I_M \times J_1 \cdots J_N} \). In the context of the SVD of the matrix A, one can write \( A = U D V^* \), where \( U \in {\mathbb {C}}^{I_1\cdots I_M \times I_1 \cdots I_M}\) and \( V \in {\mathbb {C}}^{J_1 \cdots J_N \times J_1 \cdots J_N} \) are unitary matrices and \( D \in {\mathbb {C}}^{I_1\cdots I_M \times J_1 \cdots J_N} \) is a diagonal matrix with

$$\begin{aligned} (D)_{I,J} = \left\{ \, \begin{array}{ll} \sigma _{I} >0, &{} {\text {if}} {I} = {J} \in {\{1,2,\ldots ,r \}}, \\ 0, &{} {\text {otherwise}} \end{array} \right. \end{aligned}$$

From relations (7) and (8), we can write

$$\begin{aligned} \mathcal {A}= & {} rsh^{-1}(A) = rsh^{-1}(U D V^*) \nonumber \\= & {} rsh^{-1}(U){*_M}rsh^{-1}(D){*_N}rsh^{-1}(V^*) = \mathcal {U}{*_M}\mathcal {D}{*_N}\mathcal {V}^*, \end{aligned}$$
(11)

where \(\mathcal {U} = rsh^{-1}(U), \mathcal {V} = rsh^{-1}(V)\) and \(\mathcal {D} = rsh^{-1}(D) \). Further, \(\mathcal {U}{*_M}\mathcal {U}^* = rsh^{-1}(U U^*) = rsh^{-1}(I) = \mathcal {I}_M \) and \( \mathcal {V}{*_N}\mathcal {V}^* = rsh^{-1}(V V^*) = rsh^{-1}(I) = \mathcal {I}_N \), which gives \(\mathcal {A} = \mathcal {U}{*_M}\mathcal {D}{*_N}\mathcal {V}^* \), where \( \mathcal {U}\) and \(\mathcal {V} \) are unitary tensors and \( \mathcal {D} \) is a diagonal tensor. \(\square \)
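The proof above is constructive and translates directly into a numerical procedure: matricize, take the matrix SVD, and reshape the factors back. The following sketch is our own illustration of (10)–(11); the function name, sizes and random data are arbitrary choices.

```python
import numpy as np

def tensor_svd(A, row_dims, col_dims):
    """SVD of A in C^{row_dims x col_dims} via reshape (Lemma 2):
    returns U, D, V with A = U *_M D *_N V^* (a sketch, column-major rsh)."""
    m, n = int(np.prod(row_dims)), int(np.prod(col_dims))
    Am = A.reshape(m, n, order='F')                      # rsh(A)
    Um, s, Vmh = np.linalg.svd(Am)                       # Am = Um diag(s) Vmh
    Dm = np.zeros((m, n), dtype=A.dtype)
    Dm[:len(s), :len(s)] = np.diag(s)
    U = Um.reshape(*row_dims, *row_dims, order='F')      # rsh^{-1}(Um)
    D = Dm.reshape(*row_dims, *col_dims, order='F')
    V = Vmh.conj().T.reshape(*col_dims, *col_dims, order='F')
    return U, D, V

rng = np.random.default_rng(3)
row_dims, col_dims = (2, 3), (2, 2, 2)                   # M = 2, N = 3
A = rng.standard_normal(row_dims + col_dims)

U, D, V = tensor_svd(A, row_dims, col_dims)
Vh = np.conjugate(np.transpose(V, (3, 4, 5, 0, 1, 2)))   # conjugate transpose of V
A_rec = np.tensordot(np.tensordot(U, D, axes=2), Vh, axes=3)
assert np.allclose(A_rec, A)                             # A = U *_2 D *_3 V^*
```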

Remark 1

The authors of Liang and Zheng (2019) proved Theorem 3.2 therein for a square tensor. Here we have proved the result for an arbitrary-order tensor.

Continuing this study, we recall the definition of the Moore–Penrose inverse of tensors in \({\mathbb {C}}^{I_1\times \cdots \times I_M \times J_1 \times \cdots \times J_N}\) via the Einstein product, which was introduced in Liang and Zheng (2019) for arbitrary-order tensors.

Definition 7

Let \(\mathcal {A} \in {\mathbb {C}}^{I_1\times \cdots \times I_M \times J_1 \times \cdots \times J_N}\). The tensor \(\mathcal {X} \in {\mathbb {C}}^{J_1\times \cdots \times J_N \times I_1 \times \cdots \times I_M} \) satisfying the following four tensor equations:

$$\begin{aligned}&(1)~\mathcal {A}{*_N}\mathcal {X}{*_M}\mathcal {A} = \mathcal {A};\\&(2)~\mathcal {X}{*_M}\mathcal {A}{*_N}\mathcal {X} = \mathcal {X};\\&(3)~(\mathcal {A}{*_N}\mathcal {X})^* = \mathcal {A}{*_N}\mathcal {X};\\&(4)~(\mathcal {X}{*_M}\mathcal {A})^* = \mathcal {X}{*_M}\mathcal {A} \end{aligned}$$

is called the Moore–Penrose inverse of \(\mathcal {A}\), and is denoted by \(\mathcal {A}^{{\dagger }}\).
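By Lemma 1 and the uniqueness asserted below, the Moore–Penrose inverse of an arbitrary-order tensor can be computed as \(rsh^{-1}\) of the matrix Moore–Penrose inverse of \(rsh(\mathcal {A})\). The sketch below is our own illustration (assuming the column-major rsh of Definition 4); it checks the four equations of Definition 7 numerically.

```python
import numpy as np

def tensor_pinv(A, row_dims, col_dims):
    """Moore-Penrose inverse of A in C^{row_dims x col_dims} via reshape:
    A^dagger = rsh^{-1}(pinv(rsh(A))) (a sketch consistent with Definition 7)."""
    m, n = int(np.prod(row_dims)), int(np.prod(col_dims))
    Xm = np.linalg.pinv(A.reshape(m, n, order='F'))
    return Xm.reshape(*col_dims, *row_dims, order='F')

rng = np.random.default_rng(4)
row_dims, col_dims = (2, 3), (2, 2)              # M = 2, N = 2
A = rng.standard_normal(row_dims + col_dims)
X = tensor_pinv(A, row_dims, col_dims)

AX = np.tensordot(A, X, axes=2)                  # A *_2 X
XA = np.tensordot(X, A, axes=2)                  # X *_2 A
ct = lambda T: np.conjugate(np.transpose(T, (2, 3, 0, 1)))   # conjugate transpose

assert np.allclose(np.tensordot(AX, A, axes=2), A)   # (1)
assert np.allclose(np.tensordot(XA, X, axes=2), X)   # (2)
assert np.allclose(ct(AX), AX)                       # (3)
assert np.allclose(ct(XA), XA)                       # (4)
```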

Similar to the proof of Theorem 3.2 in Sun et al. (2016), we have the existence and uniqueness of the Moore–Penrose inverse of an arbitrary-order tensor in \({\mathbb {C}}^{I_1\times \cdots \times I_M \times J_1 \times \cdots \times J_N}\) as follows.

Theorem 1

The Moore–Penrose inverse of an arbitrary-order tensor, \( \mathcal {A} \in {\mathbb {C}}^{I_1\times \cdots \times I_M \times J_1 \times \cdots \times J_N} \) exists and is unique.

By straightforward derivation, the following results can be obtained; they were also established for even-order tensors in Behera and Mishra (2017) (Lemmas 2.3 and 2.6).

Lemma 3

Let \(\mathcal {A}\in {\mathbb {C}}^{I_1\times \cdots \times I_M \times J_1 \times \cdots \times J_N }\). Then

  1. (a)

    \(\mathcal {A}^* = \mathcal {A}^{{\dagger }} {*_M}\mathcal {A} {*_N}\mathcal {A}^*=\mathcal {A}^* {*_M}\mathcal {A} {*_N}\mathcal {A}^{{\dagger }};\)

  2. (b)

    \(\mathcal {A} = \mathcal {A} {*_N}\mathcal {A}^* {*_M}(\mathcal {A}^*)^{{\dagger }} = (\mathcal {A}^*)^{{\dagger }} {*_N}\mathcal {A}^* {*_M}\mathcal {A};\)

  3. (c)

    \(\mathcal {A}^{{\dagger }} = ({\mathcal {A}^*}{*_M}{\mathcal {A})^{{\dagger }}}{*_N}\mathcal {A}^* = {\mathcal {A}^*}{*_M}(\mathcal {A}{*_N}\mathcal {A}^*)^{{\dagger }}.\)

From Stanimirović et al. (2020), we recall a relation between range spaces of multidimensional arrays, which will be used to prove the next lemma.

Lemma 4

(Lemma 2.2, Stanimirović et al. 2020) Let \(\mathcal {A}\in {\mathbb {C}}^{I_1\times \cdots \times I_M \times J_1 \times \cdots \times J_N }\), \(\mathcal {B}\in {\mathbb {C}}^{I_1\times \cdots \times I_M \times K_1 \times \cdots \times K_L }\). Then \( {\mathfrak {R}}(\mathcal {B}) \subseteq {\mathfrak {R}}(\mathcal {A}) \) if and only if there exists \( \mathcal {U}\in {\mathbb {C}}^{J_1 \times \cdots \times J_N \times K_1 \times \cdots \times K_L } \) such that \( \mathcal {B} = \mathcal {A}{*_N}\mathcal {U} \).

We now discuss important relations between the range space and the Moore–Penrose inverse of an arbitrary-order tensor, which are used frequently in various sections of this paper.

Lemma 5

Let \(\mathcal {A}\in {\mathbb {C}}^{I_1\times \cdots \times I_M \times J_1 \times \cdots \times J_N }\) and \(\mathcal {B}\in {\mathbb {C}}^{I_1\times \cdots \times I_M \times K_1 \times \cdots \times K_L }\). Then

  1. (a)

    \( {\mathfrak {R}}(\mathcal {B}) \subseteq {\mathfrak {R}}(\mathcal {A}) \Leftrightarrow \mathcal {A}*_N\mathcal {A}^\dag *_M\mathcal {B} =\mathcal {B} \),

  2. (b)

    \( {\mathfrak {R}}(\mathcal {A}) = {\mathfrak {R}}(\mathcal {B}) \Leftrightarrow \mathcal {A}*_N\mathcal {A}^\dag =\mathcal {B}*_L\mathcal {B}^\dag \),

  3. (c)

    \( {\mathfrak {R}}(\mathcal {A}) = {\mathfrak {R}}[(\mathcal {A}^\dag )^*] \) and \( {\mathfrak {R}}(\mathcal {A}^*) = {\mathfrak {R}}(\mathcal {A}^\dag ) \).

Proof

  1. (a)

    Using the fact that \({\mathfrak {R}}(\mathcal {A}*_N \mathcal {U}) \subseteq {\mathfrak {R}}(\mathcal {A})\) for two tensors \(\mathcal {A}\) and \(\mathcal {U}\) in appropriate order, one can conclude \({\mathfrak {R}}(\mathcal {B}) \subseteq {\mathfrak {R}}(\mathcal {A}) \) from \( \mathcal {A}*_N\mathcal {A}^\dag *_M\mathcal {B} = \mathcal {B}\). Applying Lemma 4, we conclude \(\mathcal {B} = \mathcal {A}*_N\mathcal {P}\) from \({\mathfrak {R}}(\mathcal {B}) \subseteq {\mathfrak {R}}(\mathcal {A})\), where \(\mathcal {P} \in {\mathbb {C}}^{J_1\times \cdots \times J_N \times K_1 \times \cdots \times K_L }\). Hence, \(\mathcal {A}*_N\mathcal {A}^\dag *_M \mathcal {B} = \mathcal {A}*_N\mathcal {A}^\dag *_M\mathcal {A}*_N\mathcal {P} =\mathcal {B}.\)

  2. (b)

    From (a), \({\mathfrak {R}}(\mathcal {A}) = {\mathfrak {R}}(\mathcal {B})\) holds if and only if \(\mathcal {A}*_N\mathcal {A}^\dag *_M \mathcal {B} = \mathcal {B}\) and \(\mathcal {B}*_L\mathcal {B}^\dag *_M\mathcal {A} = \mathcal {A} \). Taking conjugate transposes in the first identity gives \(\mathcal {B}^* *_M\mathcal {A}*_N\mathcal {A}^\dag = \mathcal {B}^*\), so that \(\mathcal {B}*_L\mathcal {B}^\dag *_M\mathcal {A}*_N\mathcal {A}^\dag = (\mathcal {B}^\dag )^* *_L\mathcal {B}^* *_M\mathcal {A}*_N\mathcal {A}^\dag = (\mathcal {B}^\dag )^* *_L\mathcal {B}^* = \mathcal {B}*_L\mathcal {B}^\dag \). Combining this with the second identity, \(\mathcal {A}*_N\mathcal {A}^\dag = \mathcal {B}*_L\mathcal {B}^\dag *_M\mathcal {A}*_N\mathcal {A}^\dag = \mathcal {B}*_L\mathcal {B}^\dag \). Conversely, if \(\mathcal {A}*_N\mathcal {A}^\dag =\mathcal {B}*_L\mathcal {B}^\dag \), then \(\mathcal {A}*_N\mathcal {A}^\dag *_M\mathcal {B} = \mathcal {B}*_L\mathcal {B}^\dag *_M\mathcal {B} = \mathcal {B}\) and \(\mathcal {B}*_L\mathcal {B}^\dag *_M\mathcal {A} = \mathcal {A}*_N\mathcal {A}^\dag *_M\mathcal {A} = \mathcal {A}\), so that \({\mathfrak {R}}(\mathcal {A}) = {\mathfrak {R}}(\mathcal {B})\) by (a).

  3. (c)

    Using Lemma 3 [(b), (c)], one can conclude that \({\mathfrak {R}}(\mathcal {A}) \subseteq {\mathfrak {R}}[(\mathcal {A}^\dag )^*] \) and \({\mathfrak {R}}[(\mathcal {A}^\dag )^*] \subseteq {\mathfrak {R}}(\mathcal {A})\), respectively. It follows that \({\mathfrak {R}}(\mathcal {A}) = {\mathfrak {R}}[(\mathcal {A}^\dag )^*]\). Further, replacing \( \mathcal {A}\) by \(\mathcal {A}^*\) and using the fact that \(( \mathcal {A}^*)^\dag = (\mathcal {A}^\dag )^*\), we obtain \( {\mathfrak {R}}(\mathcal {A}^*) = {\mathfrak {R}}(\mathcal {A}^\dag )\).

\(\square \)

Using the fact that \({\mathfrak {R}}(\mathcal {A}*_N \mathcal {B}) \subseteq {\mathfrak {R}}(\mathcal {A})\) for two tensors \(\mathcal {A}\) and \(\mathcal {B}\), together with Definition 7, we get

$$\begin{aligned} {\mathfrak {R}}(\mathcal {A}*_N\mathcal {B}*_L\mathcal {B}^\dag ) = {\mathfrak {R}}(\mathcal {A}*_N\mathcal {B}), \end{aligned}$$
(12)

where \(\mathcal {A}\in {\mathbb {C}}^{I_1\times \cdots \times I_M \times J_1 \times \cdots \times J_N }\) and \(\mathcal {B}\in {\mathbb {C}}^{J_1\times \cdots \times J_N \times K_1 \times \cdots \times K_L }\). Now, using the same method as in the proof of Lemma 5, one can prove the next lemma.

Lemma 6

Let \(\mathcal {A}\in {\mathbb {C}}^{I_1\times \cdots \times I_M \times J_1 \times \cdots \times J_N }\) and \(\mathcal {B}\in {\mathbb {C}}^{K_1\times \cdots \times K_L \times J_1 \times \cdots \times J_N }\). Then

  1. (a)

    \({\mathfrak {R}}(\mathcal {B}^*) \subseteq {\mathfrak {R}}(\mathcal {A}^*) \Leftrightarrow \mathcal {B}*_N\mathcal {A}^\dag *_M\mathcal {A} =\mathcal {B}\),

  2. (b)

    \({\mathfrak {R}}(\mathcal {A}^*) ={\mathfrak {R}}(\mathcal {B}^*) \Leftrightarrow \mathcal {A}^\dag *_M\mathcal {A} = \mathcal {B}^\dag *_L\mathcal {B},\)

  3. (c)

    \({\mathfrak {R}}(\mathcal {A}*_N\mathcal {B}^\dag ) ={\mathfrak {R}}(\mathcal {A}*_N\mathcal {B}^*)\).

Consider invertible tensors \( \mathcal {A},~ \mathcal {B},~\mathcal {X} \in {\mathbb {C}}^{I_1 \times \cdots \times I_N \times I_1 \times \cdots \times I_N}\). The equation

$$\begin{aligned} \mathcal {B}{*_N}(\mathcal {A}{*_N}\mathcal {X}{*_N}\mathcal {B})^{-1}{*_N}\mathcal {A} = \mathcal {X}^{-1} \end{aligned}$$
(13)

is called the cancellation property of the product of tensors \((\mathcal {A},\mathcal {B},\mathcal {X})\). When the ordinary inverse is replaced by a generalized inverse of suitable order, this cancellation property does not hold in general.

Example 1

Consider tensors \(\mathcal {A} = (a_{ijkl}) \in {\mathbb {R}}^{ 3\times 2\times 2\times 2}\), \( \mathcal {B} = (b_{ijkl}) \in {\mathbb {R}}^{ 3\times 2\times 3\times 2}\) and \( \mathcal {X} = (x_{ijkl}) \in {\mathbb {R}}^{2\times 2\times 3\times 2} \) such that

$$\begin{aligned} a_{ij11}= & {} \begin{pmatrix} 0 &{} 0\\ 0 &{} -1\\ 1 &{} -1 \end{pmatrix}, a_{ij21} = \begin{pmatrix} 0 &{} 0\\ -1 &{} 0\\ 1 &{} -1 \end{pmatrix}, a_{ij12} = \begin{pmatrix} 1 &{} 1\\ 1 &{} -1\\ 0 &{} 1 \end{pmatrix}, a_{ij22} = \begin{pmatrix} -1 &{} 0\\ 1 &{} -1\\ 1 &{} 0 \end{pmatrix}, ~ \\ b_{ij11}= & {} \begin{pmatrix} 1 &{} 1\\ 0 &{} 1\\ 0 &{} 1 \end{pmatrix},~~ b_{ij21} = \begin{pmatrix} 0 &{} 0\\ 0 &{} 0\\ 0 &{} 0 \end{pmatrix},~~ b_{ij31} = \begin{pmatrix} 0 &{} 1\\ 0 &{} 0\\ 1 &{} 0 \end{pmatrix},~~ \\ b_{ij12}= & {} \begin{pmatrix} 0 &{} 0\\ 0 &{} 0\\ 1 &{} 1 \end{pmatrix},~~ b_{ij22} = \begin{pmatrix} 0 &{} 0\\ 0 &{} 1\\ 1 &{} 1 \end{pmatrix}=b_{ij32}, \end{aligned}$$

and

$$\begin{aligned}&x_{ij11} = \begin{pmatrix} -1 &{} 1\\ -1 &{} 0 \end{pmatrix},~~ x_{ij21} = \begin{pmatrix} 0 &{} -1\\ -1 &{} 0 \end{pmatrix},~~ x_{ij31} = \begin{pmatrix} 0 &{} 0\\ 0 &{} 0 \end{pmatrix},\\&x_{ij12} = \begin{pmatrix} 0 &{} 0\\ -1 &{} -1 \end{pmatrix}, ~ x_{ij22} = \begin{pmatrix} 1 &{} -1\\ -1 &{} 1 \end{pmatrix}, ~ x_{ij32} = \begin{pmatrix} -1 &{} 0\\ 0 &{} -1 \end{pmatrix}. ~ \end{aligned}$$

Then

$$\begin{aligned} (\mathcal {X}^{\dagger })_{ij11}= & {} \begin{pmatrix} -1/3 &{} 2/3\\ -4/9 &{} 1/9\\ 0 &{} -5/9 \end{pmatrix}, (\mathcal {X}^{\dagger })_{ij21} = \begin{pmatrix} -1/3 &{} -1/3\\ -1/9 &{} -2/9\\ 0 &{} 1/9 \end{pmatrix}, \\ (\mathcal {X}^{\dagger })_{ij12}= & {} \begin{pmatrix} 1/3 &{} 1/3\\ -5/9 &{} -1/9\\ 0 &{} -4/9 \end{pmatrix}, (\mathcal {X}^{\dagger })_{ij22} = \begin{pmatrix} 1/3 &{} -2/3\\ 1/9 &{} 2/9\\ 0 &{} -1/9 \end{pmatrix} \end{aligned}$$

and

$$\begin{aligned} (\mathcal {B} *_2 (\mathcal {A}*_2\mathcal {X}*_2\mathcal {B})^{\dagger }*_2 \mathcal {A})_{ij11}= & {} \begin{pmatrix} -1/3 &{} 2/3\\ 0 &{} -1/3\\ 1/3 &{} -1 \end{pmatrix}, \\ (\mathcal {B} *_2 (\mathcal {A}*_2\mathcal {X}*_2\mathcal {B})^{\dagger }*_2 \mathcal {A})_{ij21}= & {} \begin{pmatrix} -1/3 &{} -1/3\\ 0 &{} -1/3\\ 1/3 &{} 0 \end{pmatrix},\\ (\mathcal {B} *_2 (\mathcal {A}*_2\mathcal {X}*_2\mathcal {B})^{\dagger }*_2 \mathcal {A})_{ij12}= & {} \begin{pmatrix} 1/3 &{} 1/3\\ 0 &{} -2/3\\ -4/3 &{} -1 \end{pmatrix}, \\ (\mathcal {B} *_2 (\mathcal {A}*_2\mathcal {X}*_2\mathcal {B})^{\dagger }*_2 \mathcal {A})_{ij22}= & {} \begin{pmatrix} 1/3 &{} -2/3\\ 0 &{} 1/3\\ -4/3 &{} 0 \end{pmatrix}. \end{aligned}$$

Hence,

$$\begin{aligned} \mathcal {X}^{\dagger }\ne \mathcal {B} *_2 (\mathcal {A}*_2\mathcal {X}*_2\mathcal {B})^{\dagger }*_2 \mathcal {A}. \end{aligned}$$

In this context, we concentrate on characterizing all triples \( (\mathcal {A},\mathcal {B},\mathcal {X}) \) which satisfy

$$\begin{aligned} \mathcal {X}^{\dagger }= \mathcal {B} {*_R}(\mathcal {A}{*_M}\mathcal {X}{*_N}\mathcal {B})^{\dagger }{*_L}\mathcal {A}, \end{aligned}$$
(14)

where \( \mathcal {X} \in {\mathbb {C}}^{I_1 \times \cdots \times I_M \times J_1 \times \cdots \times J_N} \), \( \mathcal {A} \in {\mathbb {C}}^{K_1 \times \cdots \times K_L \times I_1 \times \cdots \times I_M} \) and \( \mathcal {B} \in {\mathbb {C}}^{J_1 \times \cdots \times J_N \times H_1 \times \cdots \times H_R} \). The first result obtained below deals with a necessary condition for this property.

Lemma 7

Let \( \mathcal {X} \in {\mathbb {C}}^{I_1 \times \cdots \times I_M \times J_1 \times \cdots \times J_N} \), \( \mathcal {A} \in {\mathbb {C}}^{K_1 \times \cdots \times K_L \times I_1 \times \cdots \times I_M} \) and \( \mathcal {B} \in {\mathbb {C}}^{J_1 \times \cdots \times J_N \times H_1 \times \cdots \times H_R } \).

If \( \mathcal {X}^\dag = \mathcal {B}*_R(\mathcal {A}*_M\mathcal {X}*_N\mathcal {B})^\dag *_L\mathcal {A} \), then \({ \mathcal {X} = \mathcal {A}^\dag *_L \mathcal {A}*_M\mathcal {X} } ~~and~~ {\mathcal {X} =\mathcal {X}*_N\mathcal {B}*_R\mathcal {B}^\dag }\).

Proof

Let \( \mathcal {X}^\dag = \mathcal {B}*_R(\mathcal {A}*_M\mathcal {X}*_N\mathcal {B})^\dag *_L\mathcal {A} \). Then \({\mathfrak {R}}({\mathcal {X}}^\dag )\subseteq {\mathfrak {R}}({\mathcal {B}})\) and \({\mathfrak {R}}(({\mathcal {X}}^\dag )^*)\subseteq {\mathfrak {R}}({\mathcal {A}}^*).\) Hence, from (a) and (c) in Lemma 5 and (a) in Lemma 6, we have \({\mathfrak {R}}({\mathcal {X}})\subseteq {\mathfrak {R}}({\mathcal {A}}^*)\) and \({\mathfrak {R}}({\mathcal {X}}^*)\subseteq {\mathfrak {R}}(({\mathcal {B}}^\dag )^*),\) which implies

$$\begin{aligned} {\mathcal {X}}={\mathcal {A}}^\dag *_L{\mathcal {A}}*_M{\mathcal {X}}, {\mathcal {X}}={\mathcal {X}}*_N{\mathcal {B}}*_R{\mathcal {B}}^\dag . \end{aligned}$$

\(\square \)

The following example shows that the converse of the above lemma is not true in general.

Example 2

Consider tensors \(\mathcal {A} = (a_{ijkl})_{1 \le i,j,k,l \le 2} \in {\mathbb {R}}^{ 2\times 2\times 2\times 2}\), \( \mathcal {B} = \mathcal {A}^* \) and \( \mathcal {X} = (x_{ijkl})_{1 \le i,j,k,l \le 2} \in {\mathbb {R}}^{2\times 2\times 2\times 2} \) such that

$$\begin{aligned} a_{ij11} = \begin{pmatrix} 1 &{} -1\\ 0 &{} 0\\ \end{pmatrix}, a_{ij21} = \begin{pmatrix} -1 &{} 0\\ 0 &{} 0\\ \end{pmatrix}, a_{ij12} = \begin{pmatrix} 0 &{} -1\\ 1 &{} 0\\ \end{pmatrix}, a_{ij22} = \begin{pmatrix} 1 &{} 0\\ 0 &{} -1\\ \end{pmatrix}, ~ \end{aligned}$$

and

$$\begin{aligned} x_{ij11} = \begin{pmatrix} 1 &{} -1\\ 0 &{} 0\\ \end{pmatrix}, x_{ij12} = \begin{pmatrix} 0 &{} 1\\ 0 &{} 0\\ \end{pmatrix}, x_{ij21} = \begin{pmatrix} 0 &{} 0\\ -1 &{} 0\\ \end{pmatrix}, x_{ij22} = \begin{pmatrix} 0 &{} 0\\ 1 &{} 0\\ \end{pmatrix}. ~ \end{aligned}$$

Then

$$\begin{aligned} (\mathcal {A}^{\dagger })_{ij11} = \begin{pmatrix} 0 &{} -1\\ 0 &{} 0\\ \end{pmatrix}, (\mathcal {A}^{\dagger })_{ij21} = \begin{pmatrix} -1 &{} -1\\ 1 &{} 0\\ \end{pmatrix}, (\mathcal {A}^{\dagger })_{ij12} = \begin{pmatrix} -1 &{} -1\\ 0 &{} 0\\ \end{pmatrix}, (\mathcal {A}^{\dagger })_{ij22} = \begin{pmatrix} 0 &{} -1\\ 0 &{} -1\\ \end{pmatrix}. \end{aligned}$$

Thus, we have

$$\begin{aligned} \mathcal {A}^{\dagger }*_2 \mathcal {A} *_2 \mathcal {X} = \mathcal {X} ~~~\text {and}~~~ \mathcal {X} *_2\mathcal {B} *_2 \mathcal {B}^{\dagger }= \mathcal {X}. \end{aligned}$$

But

$$\begin{aligned} \mathcal {B}*_2(\mathcal {A} *_2 \mathcal {X} *_2\mathcal {B})^{\dagger }*_2\mathcal {A} \ne \mathcal {X}^{\dagger }, \end{aligned}$$

where

$$\begin{aligned} (\mathcal {B}*_2(\mathcal {A} *_2 \mathcal {X} *_2\mathcal {B})^{\dagger }*_2\mathcal {A})_{ij11}= & {} \begin{pmatrix} 1 &{} 1\\ \frac{1}{2} &{} \frac{1}{2}\\ \end{pmatrix}, (\mathcal {B}*_2(\mathcal {A} *_2 \mathcal {X} *_2\mathcal {B})^{\dagger }*_2\mathcal {A})_{ij21} = \begin{pmatrix} 0 &{} 0\\ -\frac{1}{2} &{} \frac{1}{2}\\ \end{pmatrix},\\ (\mathcal {B}*_2(\mathcal {A} *_2 \mathcal {X} *_2\mathcal {B})^{\dagger }*_2\mathcal {A})_{ij12}= & {} \begin{pmatrix} 0 &{} 1\\ 0 &{} 0\\ \end{pmatrix}, (\mathcal {B}*_2(\mathcal {A} *_2 \mathcal {X} *_2\mathcal {B})^{\dagger }*_2\mathcal {A})_{ij22} = \begin{pmatrix} 0 &{} -1\\ 0 &{} 0\\ \end{pmatrix}, ~ \\ (\mathcal {X}^{\dagger })_{ij11}= & {} \begin{pmatrix} 1 &{} 1\\ 0 &{} 0\\ \end{pmatrix}, (\mathcal {X}^{\dagger })_{ij21} = \begin{pmatrix} 0 &{} 0\\ -\frac{1}{2} &{} \frac{1}{2}\\ \end{pmatrix}, \\ (\mathcal {X}^{\dagger })_{ij12}= & {} \begin{pmatrix} 0 &{} 1\\ 0 &{} 0\\ \end{pmatrix}, (\mathcal {X}^{\dagger })_{ij22} = \begin{pmatrix} 0 &{} 0\\ 0 &{} 0\\ \end{pmatrix}. \end{aligned}$$

However, the converse of Lemma 7 holds under an additional assumption, which is stated below.

Lemma 8

Let \( \mathcal {X} \in {\mathbb {C}}^{I_1 \times \cdots \times I_M \times J_1 \times \cdots \times J_N }, \mathcal {A} \in {\mathbb {C}}^{K_1 \times \cdots \times K_L \times I_1 \times \cdots \times I_M} \) and \( \mathcal {B} \in {\mathbb {C}}^{J_1 \times \cdots \times J_N \times H_1 \times \cdots \times H_R } \). If \({ \mathcal {X} = \mathcal {A}^\dag *_L \mathcal {A}*_M\mathcal {X} } { =\mathcal {X}*_N\mathcal {B}*_R\mathcal {B}^\dag }\) and, in addition, both \(\mathcal {K} = \mathcal {A}^\dag {*_L}(\mathcal {A}{*_M}\mathcal {X}){*_N}(\mathcal {A}{*_M}\mathcal {X})^\dag {*_L}\mathcal {A} \) and \(\mathcal {L} = \mathcal {B}{*_R}(\mathcal {X}{*_N}\mathcal {B})^\dag {*_M}(\mathcal {X}{*_N}\mathcal {B}){*_R}\mathcal {B}^\dag \) are Hermitian, then \( \mathcal {X}^\dag = \mathcal {B}*_R(\mathcal {A}*_M\mathcal {X}*_N\mathcal {B})^\dag *_L\mathcal {A} \).

Proof

Let \( \mathcal {W} = \mathcal {B}*_R(\mathcal {A}*_M\mathcal {X}*_N\mathcal {B})^\dag *_L\mathcal {A} \).

Now,

$$\begin{aligned} \mathcal {X}*_N\mathcal {W}*_M\mathcal {X}= & {} (\mathcal {A}^\dag *_L\mathcal {A}*_M\mathcal {X}*_N\mathcal {B}*_R\mathcal {B}^\dag )*_N\mathcal {B}*_R(\mathcal {A}*_M\mathcal {X}*_N\mathcal {B})^\dag *_L\mathcal {A}*_M(\mathcal {A}^\dag *_L\mathcal {A}*_M\mathcal {X}*_N\mathcal {B}*_R\mathcal {B}^\dag )\\= & {} \mathcal {A}^\dag *_L[(\mathcal {A}*_M\mathcal {X}*_N\mathcal {B})*_R(\mathcal {A}*_M\mathcal {X}*_N\mathcal {B})^\dag *_L(\mathcal {A}*_M\mathcal {X}*_N\mathcal {B})]*_R\mathcal {B}^\dag \\= & {} \mathcal {A}^\dag *_L(\mathcal {A}*_M\mathcal {X}*_N\mathcal {B}) *_R\mathcal {B}^\dag = \mathcal {X}. \end{aligned}$$

Further, \(\mathcal {W}*_M\mathcal {X}*_N\mathcal {W} = \mathcal {B}*_R(\mathcal {A}*_M\mathcal {X}*_N\mathcal {B})^\dag *_L\mathcal {A}*_M\mathcal {X}*_N\mathcal {B} *_R(\mathcal {A}*_M\mathcal {X}*_N\mathcal {B})^\dag *_L \mathcal {A} = \mathcal {W} \).

Again \( \mathcal {K} = \mathcal {X}*_N\mathcal {W} \) and \( \mathcal {L}=\mathcal {W}*_M\mathcal {X} \) are Hermitian. Hence, \( \mathcal {W} = \mathcal {X}^\dag \). \(\square \)

From Lemma 7 it is clear that if \( \mathcal {X}^{\dagger }= \mathcal {B}{*_R}(\mathcal {A}{*_M}\mathcal {X}{*_N}\mathcal {B})^{\dagger }{*_L}\mathcal {A} \), then \( {\mathfrak {R}}(\mathcal {A} {*_M}\mathcal {X}) = {\mathfrak {R}}(\mathcal {A}{*_M}\mathcal {X}{*_N}\mathcal {B}) \), which implies that \( (\mathcal {A}{*_M}\mathcal {X}{*_N}\mathcal {B}){*_R}(\mathcal {A}{*_M}\mathcal {X}{*_N}\mathcal {B})^{\dagger }= \mathcal {A}{*_M}\mathcal {X}{*_N}(\mathcal {A}{*_M}\mathcal {X})^{\dagger }\). It is easy to verify that \( \mathcal {X}{*_N}\mathcal {X}^{\dagger }= \mathcal {K} \) and \( \mathcal {X}^{\dagger }{*_M}\mathcal {X}= \mathcal {L}\), and that both \( \mathcal {K}\) and \(\mathcal {L}\) are Hermitian. Therefore, a necessary and sufficient condition for the cancellation law can be stated as follows.

Theorem 2

Let \( \mathcal {X} \in {\mathbb {C}}^{I_1 \times \cdots \times I_M \times J_1 \times \cdots \times J_N }, \mathcal {A} \in {\mathbb {C}}^{K_1 \times \cdots \times K_L \times I_1 \times \cdots \times I_M} \) and \( \mathcal {B} \in {\mathbb {C}}^{J_1 \times \cdots \times J_N \times H_1 \times \cdots \times H_R } \).

\( \mathcal {X}^\dag = \mathcal {B}*_R(\mathcal {A}*_M\mathcal {X}*_N\mathcal {B})^\dag *_L\mathcal {A} \) if and only if \({ \mathcal {X} = \mathcal {A}^\dag *_L \mathcal {A}*_M\mathcal {X} } { =\mathcal {X}*_N\mathcal {B}*_R\mathcal {B}^\dag }\) and both \(\mathcal {K} = \mathcal {A}^\dag {*_L}(\mathcal {A}{*_M}\mathcal {X}){*_N}(\mathcal {A}{*_M}\mathcal {X})^\dag {*_L}\mathcal {A} \) and \(\mathcal {L} = \mathcal {B}{*_R}(\mathcal {X}{*_N}\mathcal {B})^\dag {*_M}(\mathcal {X}{*_N}\mathcal {B}){*_R}\mathcal {B}^\dag \) are Hermitian.

We now proceed to discuss a few necessary and sufficient conditions for the cancellation law.

Corollary 1

Let \( \mathcal {X} \in {\mathbb {C}}^{I_1 \times \cdots \times I_M \times J_1 \times \cdots \times J_N }\), \( \mathcal {A} \in {\mathbb {C}}^{K_1 \times \cdots \times K_L \times I_1 \times \cdots \times I_M} \) and \( \mathcal {B} \in {\mathbb {C}}^{J_1 \times \cdots \times J_N \times H_1 \times \cdots \times H_R }\). Then \( \mathcal {X}^\dag = \mathcal {B}*_R(\mathcal {A}*_M\mathcal {X}*_N\mathcal {B})^\dag *_L\mathcal {A} \) if and only if both of the equations

$$\begin{aligned} \mathcal {X}^\dag = (\mathcal {A}{*_M}\mathcal {X})^\dag {*_L}\mathcal {A} ~~\text {and}~~ \mathcal {X}^\dag = \mathcal {B}{*_R}(\mathcal {X}{*_N}\mathcal {B})^\dag \end{aligned}$$
(15)

are satisfied.

Proof

By taking \( \mathcal {B} = \mathcal {I} \) in Theorem 2, we have \( \mathcal {X}^\dag = (\mathcal {A}*_M\mathcal {X})^\dag *_L\mathcal {A} \) if and only if \( \mathcal {A}^\dag *_L(\mathcal {A}{*_M}\mathcal {X}){*_N}(\mathcal {A}{*_M}\mathcal {X})^{\dagger }*_L\mathcal {A} \) is Hermitian and \( \mathcal {X}= \mathcal {A}^\dag *_L\mathcal {A}*_M\mathcal {X} \). Similarly, with the special case \(\mathcal {A} = \mathcal {I} \) in Theorem 2, we get \( \mathcal {X}^\dag =\mathcal {B}*_R(\mathcal {X}*_N\mathcal {B})^\dag \) if and only if \( \mathcal {B}{*_R}(\mathcal {X}{*_N}\mathcal {B})^\dag {*_M}(\mathcal {X}{*_N}\mathcal {B}){*_R}\mathcal {B}^\dag \) is Hermitian and \( \mathcal {X} = \mathcal {X}*_N\mathcal {B}*_R\mathcal {B}^\dag \). Using Theorem 2, one can then prove the required result. \(\square \)

Using Lemmas 5 and 6 in Corollary 1, one obtains the following result.

Theorem 3

Let \( \mathcal {X} \in {\mathbb {C}}^{I_1 \times \cdots \times I_M \times J_1 \times \cdots \times J_N }\), \( \mathcal {A} \in {\mathbb {C}}^{K_1 \times \cdots \times K_L \times I_1 \times \cdots \times I_M} \) and \( \mathcal {B} \in {\mathbb {C}}^{J_1 \times \cdots \times J_N \times H_1 \times \cdots \times H_R}\). Then

$$\begin{aligned} \mathcal {X}^\dag = \mathcal {B}*_R(\mathcal {A}*_M\mathcal {X}*_N\mathcal {B})^\dag *_L\mathcal {A} \end{aligned}$$

if and only if

$$\begin{aligned} (\mathcal {A}*_M\mathcal {X})^\dag = \mathcal {X}^\dag *_M\mathcal {A}^\dag ,~~ (\mathcal {X}*_N\mathcal {B})^\dag = \mathcal {B}^\dag *_N\mathcal {X}^\dag , ~~\mathcal {X} = \mathcal {A}^\dag *_L\mathcal {A}*_M\mathcal {X} ~~\text {and} ~~\mathcal {X} = \mathcal {X}*_N\mathcal {B}*_R\mathcal {B}^\dag . \end{aligned}$$

Proof

Suppose that \( \mathcal {X}^\dag = \mathcal {B}*_R(\mathcal {A}*_M\mathcal {X}*_N\mathcal {B})^\dag *_L\mathcal {A} \). Then from Corollary 1,

\(\mathcal {X}^\dag = (\mathcal {A}{*_M}\mathcal {X})^\dag {*_L}\mathcal {A} \) and \( \mathcal {X}^\dag = \mathcal {B}{*_R}(\mathcal {X}{*_N}\mathcal {B})^\dag \).

Now, \( {\mathfrak {R}}(\mathcal {X}) = {\mathfrak {R}}[(\mathcal {X}^\dag )^*] = {\mathfrak {R}}[\mathcal {A}^* *_L\{(\mathcal {A}*_M\mathcal {X})^\dag \}^*] = {\mathfrak {R}}(\mathcal {A}^* *_L\mathcal {A}*_M\mathcal {X})\) and \( {\mathfrak {R}}(\mathcal {X}^*) = {\mathfrak {R}}(\mathcal {X}^\dag ) = {\mathfrak {R}}[\mathcal {B}*_R(\mathcal {X}*_N\mathcal {B})^\dag ] = {\mathfrak {R}}[\mathcal {B}*_R(\mathcal {X}*_N\mathcal {B})^*] = {\mathfrak {R}}(\mathcal {B}*_R\mathcal {B}^* *_N\mathcal {X}^*) \).

Therefore, \( {\mathfrak {R}}(\mathcal {X}*_N\mathcal {X}^* *_M\mathcal {A}^*) \subseteq {\mathfrak {R}}(\mathcal {X}) = {\mathfrak {R}}(\mathcal {A}^* *_L\mathcal {A}*_M\mathcal {X}) \subseteq {\mathfrak {R}}(\mathcal {A}^*) \), i.e., \( {\mathfrak {R}}(\mathcal {X}*_N\mathcal {X}^* *_M\mathcal {A}^*) \subseteq {\mathfrak {R}}(\mathcal {A}^*) \) and \( {\mathfrak {R}}(\mathcal {A}^*{*_L}\mathcal {A}{*_M}\mathcal {X}) \subseteq {\mathfrak {R}}(\mathcal {X}) \) implies that \( (\mathcal {A}*_M\mathcal {X})^\dag = \mathcal {X}^\dag *_M\mathcal {A}^\dag \).

\( {\mathfrak {R}}(\mathcal {X}) \subseteq {\mathfrak {R}}(\mathcal {A}^\dag ) \) implies \( \mathcal {A}^\dag *_L\mathcal {A}*_M\mathcal {X} = \mathcal {X} \). Similarly, from \( {\mathfrak {R}}(\mathcal {X}^*) = {\mathfrak {R}}(\mathcal {B}{*_R}\mathcal {B}^*{*_N}\mathcal {X}^*) \) it follows that \( (\mathcal {X}*_N\mathcal {B})^\dag = \mathcal {B}^\dag *_N\mathcal {X}^\dag \), \( \mathcal {X} = \mathcal {X}*_N\mathcal {B}*_R\mathcal {B}^\dag \).

Conversely, using Lemma 5(c) and Lemma 6(a) together with the facts

\( {\mathfrak {R}}[(\mathcal {X}^{\dagger })^*] = {\mathfrak {R}}(\mathcal {X}) \subseteq {\mathfrak {R}}(\mathcal {A}^{\dagger }) = {\mathfrak {R}}(\mathcal {A}^*) \) and \( {\mathfrak {R}}(\mathcal {X}^{\dagger }) = {\mathfrak {R}}(\mathcal {X}^*) \subseteq {\mathfrak {R}}[(\mathcal {B}^{\dagger })^*] = {\mathfrak {R}}(\mathcal {B}),\)

one has \( \mathcal {X}^{\dagger }= \mathcal {X}^{\dagger }{*_M}\mathcal {A}^{\dagger }{*_L}\mathcal {A} \) and \( \mathcal {X}^{\dagger }= \mathcal {B}{*_R}\mathcal {B}^{\dagger }{*_N}\mathcal {X}^{\dagger }\).

Now, \( (\mathcal {A}*_M\mathcal {X})^\dag *_L\mathcal {A} = \mathcal {X}^\dag *_M\mathcal {A}^\dag *_L\mathcal {A} = \mathcal {X}^\dag \) and \( \mathcal {B}*_R(\mathcal {X}*_N\mathcal {B})^\dag = \mathcal {B}*_R\mathcal {B}^\dag *_N\mathcal {X}^\dag = \mathcal {X}^{\dagger }\); then, by Corollary 1, the proof is done. \(\square \)

2.2 Weighted Moore–Penrose inverse

The weighted Moore–Penrose inverse of an even-order tensor \(\mathcal {A} \in {\mathbb {C}}^{I_1\times \cdots \times I_K \times J_1 \times \cdots \times J_K}\) was introduced very recently in Ji and Wei (2017). Here we discuss the weighted Moore–Penrose inverse of an arbitrary-order tensor via the Einstein product, which is a special case of the generalized weighted Moore–Penrose inverse.

Definition 8

Let \(\mathcal {A} \in {\mathbb {C}}^{I_1\times \cdots \times I_M \times J_1 \times \cdots \times J_N}\), and let \(\mathcal {M} \in {\mathbb {C}}^{I_1\times \cdots \times I_M \times I_1 \times \cdots \times I_M}\) and \(\mathcal {N} \in {\mathbb {C}}^{J_1\times \cdots \times J_N \times J_1 \times \cdots \times J_N}\) be a pair of invertible Hermitian tensors. A tensor \(\mathcal {Y} \in {\mathbb {C}}^{J_1\times \cdots \times J_N \times I_1 \times \cdots \times I_M}\) is said to be the generalized weighted Moore–Penrose inverse of \(\mathcal {A}\) with respect to \(\mathcal {M}\) and \(\mathcal {N}\) if \(\mathcal {Y}\) satisfies the following four tensor equations:

$$\begin{aligned}&(1)~\mathcal {A}*_N\mathcal {Y}*_M\mathcal {A}= \mathcal {A};\\&(2)~\mathcal {Y}*_M\mathcal {A}*_N\mathcal {Y}= \mathcal {Y};\\&(3)~(\mathcal {M}*_M \mathcal {A}*_N\mathcal {Y})^* = \mathcal {M}*_M\mathcal {A}*_N\mathcal {Y};\\&(4)~(\mathcal {N}*_N \mathcal {Y}*_M\mathcal {A})^* =\mathcal {N}*_N \mathcal {Y}*_M\mathcal {A}. \end{aligned}$$

In particular, when both \(\mathcal {M}\) and \(\mathcal {N}\) are Hermitian positive definite tensors, the tensor \(\mathcal {Y}\) is called the weighted Moore–Penrose inverse of \(\mathcal {A}\) and is denoted by \(\mathcal {A}_{\mathcal {M},\mathcal {N}}^{\dagger }\).

However, the generalized weighted Moore–Penrose inverse \(\mathcal {Y}\) does not always exist for an arbitrary tensor \(\mathcal {A}\), as shown below with an example.

Example 3

Consider tensors \(~\mathcal {A}=(a_{ijk}) \in {\mathbb {R}}^{\overline{2\times 3}\times {\overline{2}}}\) and \(\mathcal {M}=(m_{ijkl}) \in {\mathbb {R}}^{\overline{2\times 3}\times \overline{2\times 3}}\) with \(\mathcal {N}=(n_{ij}) \in {\mathbb {R}}^{{\overline{2}}\times {\overline{2}}}\) such that

$$\begin{aligned} a_{ij1} = \begin{pmatrix} 1 &{} 0 &{} 1 \\ -1&{} 2 &{} 1 \end{pmatrix}, a_{ij2} = \begin{pmatrix} 2 &{} 0 &{} 3\\ 2 &{} 0 &{} 1 \end{pmatrix} ~~\text {and}~~~ N = \begin{pmatrix} 2 &{} 0\\ 0 &{} -1 \end{pmatrix} \end{aligned}$$

with

$$\begin{aligned} m_{ij11}= & {} \begin{pmatrix} 2 &{} 0 &{} 0\\ 0 &{} 0 &{} 0 \end{pmatrix}, m_{ij12} = \begin{pmatrix} 0 &{} 2 &{} 0\\ 0 &{} 0 &{} 0 \end{pmatrix}, m_{ij13} = \begin{pmatrix} 0 &{} 0 &{} 1\\ 0 &{} 0 &{} 0 \end{pmatrix}, \\ m_{ij21}= & {} \begin{pmatrix} 0 &{} 0 &{} 0\\ -1 &{} 0 &{} 0 \end{pmatrix}, m_{ij22} = \begin{pmatrix} 0 &{} 0 &{} 0\\ 0 &{} 1 &{} 0 \end{pmatrix}, m_{ij23} = \begin{pmatrix} 0 &{} 0 &{} 0\\ 0 &{} 0 &{} 3 \end{pmatrix}. \end{aligned}$$

Then we have

$$\begin{aligned} \mathcal {A}^T *_2 \mathcal {M}*_2\mathcal {A}=\begin{pmatrix} 9 &{} 12\\ 12 &{} 16 \end{pmatrix}. \end{aligned}$$

This shows that \(\mathcal {A}^T*_2\mathcal {M}*_2\mathcal {A}\) is not invertible. Suppose that the generalized weighted Moore–Penrose inverse \(\mathcal {Y} \in {\mathbb {R}}^{{\overline{2}}\times \overline{2\times 3}}\) of the given tensor \(\mathcal {A}\) exists; then, using relations (1) and (3) of Definition 8, we have

$$\begin{aligned} \mathcal {A}*_1 \mathcal {Y}*_2 \mathcal {M}^{-1}*_2 \mathcal {Y}^T *_1 \mathcal {A}^T *_2 \mathcal {M}*_2 \mathcal {A} = \mathcal {A}. \end{aligned}$$
(16)

Since \((\mathcal {A}^T*_2\mathcal {A})^{-1}*_1\mathcal {A}^T*_2\mathcal {A} = \mathcal {I}\), the tensor \(\mathcal {A}\) is left cancellable, and (16) becomes

$$\begin{aligned} \mathcal {Y}*_2 \mathcal {M}^{-1}*_2 \mathcal {Y}^T *_1 \mathcal {A}^T *_2 \mathcal {M}*_2 \mathcal {A} = \mathcal {I}, \end{aligned}$$
(17)

which shows that \(\mathcal {A}^T *_2 \mathcal {M}*_2 \mathcal {A}\) is invertible, a contradiction.

At this point, one may ask: when does the generalized weighted Moore–Penrose inverse exist? The answer to this question is given in the following theorem.

Theorem 4

Let \(\mathcal {A} \in {\mathbb {C}}^{I_1\times \cdots \times I_M \times J_1 \times \cdots \times J_N}\), and let both \(\mathcal {M} \in {\mathbb {C}}^{I_1\times \cdots \times I_M \times I_1 \times \cdots \times I_M}\) and \(\mathcal {N} \in {\mathbb {C}}^{J_1\times \cdots \times J_N \times J_1 \times \cdots \times J_N}\) be Hermitian positive definite tensors. Then the generalized weighted Moore–Penrose inverse of the arbitrary-order tensor \(\mathcal {A}\) exists and is unique, i.e., there exists a unique tensor \(\mathcal {X} \in {\mathbb {C}}^{J_1\times \cdots \times J_N \times I_1 \times \cdots \times I_M}\) such that

$$\begin{aligned} \mathcal {X}=\mathcal {A}_{\mathcal {M},\mathcal {N}}^{\dagger }= \mathcal {N}^{-1/2} *_N ({\mathcal {M}^{1/2} *_M \mathcal {A} *_N\mathcal {N}^{-1/2}})^{\dagger }*_M \mathcal {M}^{1/2}, \end{aligned}$$
(18)

where \(\mathcal {M}^{1/2}\) and \(\mathcal {N}^{1/2}\) are the square roots of \(\mathcal {M}\) and \(\mathcal {N}\), respectively, and \(\mathcal {X}\) satisfies all four relations of Definition 8.
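Formula (18) can be evaluated by passing all operands through the reshape map. The sketch below is our own illustration (assuming the column-major rsh of Definition 4 and randomly generated Hermitian positive definite weights); it computes \(\mathcal {A}_{\mathcal {M},\mathcal {N}}^{\dagger }\) and checks relation (1) of Definition 8.

```python
import numpy as np

def rsh(T, rows, cols):
    return T.reshape(int(np.prod(rows)), int(np.prod(cols)), order='F')

def rsh_inv(M, rows, cols):
    return M.reshape(*rows, *cols, order='F')

def hpd_sqrt(Pm):
    """Square root of a Hermitian positive definite matrix."""
    w, Q = np.linalg.eigh(Pm)
    return (Q * np.sqrt(w)) @ Q.conj().T

def weighted_pinv(A, Mw, Nw, row_dims, col_dims):
    """Weighted Moore-Penrose inverse via (18):
    A^dagger_{M,N} = N^{-1/2} *_N (M^{1/2} *_M A *_N N^{-1/2})^dagger *_M M^{1/2}
    (a sketch; all products are carried out on the matricized operands)."""
    Am = rsh(A, row_dims, col_dims)
    Mh = hpd_sqrt(rsh(Mw, row_dims, row_dims))                      # M^{1/2}
    Nh_inv = np.linalg.inv(hpd_sqrt(rsh(Nw, col_dims, col_dims)))   # N^{-1/2}
    Xm = Nh_inv @ np.linalg.pinv(Mh @ Am @ Nh_inv) @ Mh
    return rsh_inv(Xm, col_dims, row_dims)

rng = np.random.default_rng(5)
row_dims, col_dims = (2, 3), (2, 2)
A = rng.standard_normal(row_dims + col_dims)
G1 = rng.standard_normal((6, 6))
G2 = rng.standard_normal((4, 4))
Mw = rsh_inv(G1 @ G1.T + 6 * np.eye(6), row_dims, row_dims)   # Hermitian PD weight M
Nw = rsh_inv(G2 @ G2.T + 4 * np.eye(4), col_dims, col_dims)   # Hermitian PD weight N

X = weighted_pinv(A, Mw, Nw, row_dims, col_dims)
# relation (1) of Definition 8: A *_2 X *_2 A = A
assert np.allclose(np.tensordot(np.tensordot(A, X, axes=2), A, axes=2), A)
```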

One can prove the above theorem using Theorem 1 in Ji and Wei (2017) together with Theorem 1. Further, it is known that identity tensors are always Hermitian and positive definite; therefore, for any \( \mathcal {A} \in {\mathbb {C}}^{I_1 \times \cdots \times I_M \times J_1 \times \cdots \times J_N} \), \( \mathcal {A}^{\dagger }_{\mathcal {I}_M,\mathcal {I}_N} \) exists and \( \mathcal {A}^{\dagger }_{\mathcal {I}_M,\mathcal {I}_N} = \mathcal {A}^{\dagger }\), which is the Moore–Penrose inverse of \(\mathcal {A}\). Specifically, if we take \( \mathcal {M} = \mathcal {I}_M \) or \(\mathcal {N} = \mathcal {I}_N \) in Eq. (18), then the following identities hold.

Corollary 2

Let \( \mathcal {A} \in {\mathbb {C}}^{I_1 \times \cdots \times I_M \times J_1 \times \cdots \times J_N} \). Then

$$\begin{aligned} (a)~\mathcal {A}_{\mathcal {M},\mathcal {I}_N}^{\dagger }= & {} (\mathcal {M}^{1/2} *_M \mathcal {A})^{\dagger }*_M \mathcal {M}^{1/2}, \\ (b)~\mathcal {A}_{\mathcal {I}_M,\mathcal {N}}^{\dagger }= & {} \mathcal {N}^{-1/2} *_N (\mathcal {A} *_N \mathcal {N}^{-1/2})^{\dagger }. \end{aligned}$$

Using Definition 8 and following Lemma 2 in Ji and Wei (2017), one can write \( (\mathcal {A}_{\mathcal {M},\mathcal {N}}^{\dagger })_{\mathcal {N},\mathcal {M}}^{\dagger }= \mathcal {A} \) and \( (\mathcal {A}_{\mathcal {M},\mathcal {N}}^{\dagger })^* = (\mathcal {A}^*)^{\dagger }_{\mathcal {N}^{-1}, \mathcal {M}^{-1}} \), where \(\mathcal {A}\) is any arbitrary-order tensor.

Now we define the weighted conjugate transpose of an arbitrary-order tensor, as follows.

Definition 9

Let \( \mathcal {M} \in {\mathbb {C}}^{I_1\times \cdots \times I_M \times I_1 \times \cdots \times I_M}\) and \(\mathcal {N} \in {\mathbb {C}}^{J_1\times \cdots \times J_N \times J_1 \times \cdots \times J_N} \) be Hermitian positive definite tensors. The weighted conjugate transpose of \( \mathcal {A} \in {\mathbb {C}}^{I_1\times \cdots \times I_M \times J_1 \times \cdots \times J_N} \) is denoted by \( \mathcal {A}^{\#}_{\mathcal {N},\mathcal {M}}\) and defined as \( \mathcal {A}^{\#}_{\mathcal {N},\mathcal {M}} = \mathcal {N}^{-1} *_N\mathcal {A}^* *_M\mathcal {M} \).

Next we present the properties of the weighted conjugate transpose of any arbitrary-order tensor, \(\mathcal {A} \in {\mathbb {C}}^{I_1\times \cdots \times I_M \times J_1 \times \cdots \times J_N}\), as follows.

Lemma 9

Let \(\mathcal {A} \in {\mathbb {C}}^{I_1\times \cdots \times I_M \times J_1 \times \cdots \times J_N}\) and \(\mathcal {B} \in {\mathbb {C}}^{J_1\times \cdots \times J_N \times K_1 \times \cdots \times K_L}\), and let \(\mathcal {M} \in {\mathbb {C}}^{I_1\times \cdots \times I_M \times I_1 \times \cdots \times I_M}\), \(\mathcal {P} \in {\mathbb {C}}^{K_1\times \cdots \times K_L \times K_1 \times \cdots \times K_L}\) and \(\mathcal {N} \in {\mathbb {C}}^{J_1\times \cdots \times J_N \times J_1 \times \cdots \times J_N}\) be Hermitian positive definite tensors. Then

  1. (a)

    \( (\mathcal {A}^\#_{\mathcal {N},\mathcal {M}})^\#_{\mathcal {M},\mathcal {N}} = \mathcal {A} \),

  2. (b)

    \( (\mathcal {A}{*_N}\mathcal {B})^\#_{\mathcal {P},\mathcal {M}} = \mathcal {B}^\#_{\mathcal {P},\mathcal {N}}{*_N}\mathcal {A}^\#_{\mathcal {N},\mathcal {M}} \).

Adopting the result of Lemma 9(b) and the definition of the weighted Moore–Penrose inverse, we can write the following identities.

Lemma 10

Let \(\mathcal {A}\in {\mathbb {C}}^{I_1\times \cdots \times I_M \times J_1 \times \cdots \times J_N }\), and let \( \mathcal {M} \in {\mathbb {C}}^{I_1\times \cdots \times I_M \times I_1 \times \cdots \times I_M}\) and \(\mathcal {N} \in {\mathbb {C}}^{J_1\times \cdots \times J_N \times J_1 \times \cdots \times J_N} \) be Hermitian positive definite tensors. Then

  1. (a)

    \( (\mathcal {A}^\#_{\mathcal {N},\mathcal {M}})^{\dagger }_{\mathcal {N},\mathcal {M}} = (\mathcal {A}^{\dagger }_{\mathcal {M},\mathcal {N}})^\#_{\mathcal {M},\mathcal {N}};\)

  2. (b)

    \(\mathcal {A} = \mathcal {A} {*_N}\mathcal {A}^\#_{\mathcal {N},\mathcal {M}} {*_M}(\mathcal {A}^\#_{\mathcal {N},\mathcal {M}})^{{\dagger }}_{\mathcal {N},\mathcal {M}} = (\mathcal {A}^\#_{\mathcal {N},\mathcal {M}})^{{\dagger }}_{\mathcal {N},\mathcal {M}} {*_N}\mathcal {A}^\#_{\mathcal {N},\mathcal {M}} {*_M}\mathcal {A};\)

  3. (c)

    \(\mathcal {A}^\#_{\mathcal {N},\mathcal {M}} = \mathcal {A}^{{\dagger }}_{\mathcal {M},\mathcal {N}} {*_M}\mathcal {A} {*_N}\mathcal {A}^\#_{\mathcal {N},\mathcal {M}} = \mathcal {A}^\#_{\mathcal {N},\mathcal {M}} {*_M}\mathcal {A} {*_N}\mathcal {A}^{{\dagger }}_{\mathcal {M},\mathcal {N}}\).

Using Lemma 3.17 in Panigrahy et al. (2020) on two invertible tensors \(\mathcal {B} \in {\mathbb {C}}^{I_1\times \cdots \times I_M \times I_1 \times \cdots \times I_M} \) and \(\mathcal {C} \in {\mathbb {C}}^{J_1\times \cdots \times J_N \times J_1 \times \cdots \times J_N}\), one can write the following identities:

$$\begin{aligned} (\mathcal {B}{*_M}\mathcal {A})^{\dagger }{*_M}\mathcal {B}{*_M}\mathcal {A} = \mathcal {A}^{\dagger }{*_M}\mathcal {A} ~~and~~ \mathcal {A}*_N \mathcal {C}{*_N}(\mathcal {A}{*_N}\mathcal {C})^{\dagger }=\mathcal {A} {*_N}\mathcal {A}^{\dagger }, \end{aligned}$$
(19)

where \(\mathcal {A}\) is an arbitrary-order tensor, i.e., \(\mathcal {A} \in {\mathbb {C}}^{I_1\times \cdots \times I_M \times J_1 \times \cdots \times J_N}\). By Eq. (19) and Corollary 2, we get the following results.

Lemma 11

Let \(\mathcal {A} \in {\mathbb {C}}^{I_1\times \cdots \times I_M \times J_1 \times \cdots \times J_N} \), and \(\mathcal {M} \in {\mathbb {C}}^{I_1\times \cdots \times I_M \times I_1 \times \cdots \times I_M} \), \(\mathcal {N} \in {\mathbb {C}}^{J_1\times \cdots \times J_N \times J_1 \times \cdots \times J_N}\) be a pair of Hermitian positive definite tensors. Then

  1. (a)

     \(\mathcal {A}_{\mathcal {M},\mathcal {I}_N}^{\dagger }*_M \mathcal {A} =(\mathcal {M}^{1/2} *_M\mathcal {A})^{\dagger }*_M \mathcal {M}^{1/2} *_M \mathcal {A} = \mathcal {A}^{\dagger }*_M \mathcal {A} \),

  2. (b)

     \( \mathcal {A} *_N \mathcal {A}_{\mathcal {I}_M,\mathcal {N}}^{\dagger }=\mathcal {A} *_N \mathcal {N}^{-1/2} *_N(\mathcal {A} *_N \mathcal {N}^{-1/2})^{\dagger }= \mathcal {A} *_N \mathcal {A}^{\dagger }\),

  3. (c)

     \( (\mathcal {A}_{\mathcal {M},\mathcal {I}_N}^\dag )^* = \mathcal {M}^{1/2} *_M [\mathcal {A}^* *_M \mathcal {M}^{1/2}]^{\dagger }\),

  4. (d)

     \( (\mathcal {A}_{\mathcal {I}_M,\mathcal {N}}^\dag )^* =(\mathcal {N}^{-1/2}*_N\mathcal {A}^*)^{\dagger }*_N \mathcal {N}^{-1/2} \).
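Lemma 11(a) and (b) assert that the weighted projectors \(\mathcal {A}_{\mathcal {M},\mathcal {I}_N}^{\dagger }*_M \mathcal {A}\) and \(\mathcal {A} *_N \mathcal {A}_{\mathcal {I}_M,\mathcal {N}}^{\dagger }\) coincide with their unweighted counterparts. Below is a minimal numerical sketch in reshaped (matrix) form, building the one-sided weighted inverses from Corollary 2; the sizes and helper functions are illustrative assumptions.

```python
import numpy as np

def herm_sqrt(P):
    w, V = np.linalg.eigh(P)
    return (V * np.sqrt(w)) @ V.conj().T

def random_hpd(k, rng):
    B = rng.standard_normal((k, k)) + 1j * rng.standard_normal((k, k))
    return B @ B.conj().T + k * np.eye(k)

rng = np.random.default_rng(2)
m, n = 6, 4                                    # reshaped sizes of the I- and J-index groups
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
M, N = random_hpd(m, rng), random_hpd(n, rng)
Mh, Nh = herm_sqrt(M), herm_sqrt(N)

A_MI = np.linalg.pinv(Mh @ A) @ Mh             # Corollary 2(a): A^dagger_{M, I_N}
A_IN = np.linalg.inv(Nh) @ np.linalg.pinv(A @ np.linalg.inv(Nh))   # Corollary 2(b): A^dagger_{I_M, N}
Ad = np.linalg.pinv(A)

print(np.allclose(A_MI @ A, Ad @ A))           # Lemma 11(a): weighted and unweighted projectors agree
print(np.allclose(A @ A_IN, A @ Ad))           # Lemma 11(b)
```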

Combining these facts with the properties of the range space of an arbitrary-order tensor, the following theorem is obtained.

Theorem 5

Let \(\mathcal {U}\in {\mathbb {C}}^{I_1\times \cdots \times I_M \times J_1 \times \cdots \times J_N }\), \(\mathcal {V}\in {\mathbb {C}}^{J_1\times \cdots \times J_N \times K_1\times \cdots \times K_L }\). Let \(\mathcal {M} \in {\mathbb {C}}^{I_1\times \cdots \times I_M \times I_1 \times \cdots \times I_M}\) and \(\mathcal {N} \in {\mathbb {C}}^{K_1\times \cdots \times K_L \times K_1 \times \cdots \times K_L}\) be a pair of Hermitian positive definite tensors. Then

$$\begin{aligned} (\mathcal {U} *_N \mathcal {V})_{\mathcal {M},\mathcal {N}}^{\dagger }=[(\mathcal {U}_{\mathcal {M},\mathcal {I}_N}^{\dagger })^* *_N \mathcal {V}]_{\mathcal {M}^{-1},\mathcal {N}}^{\dagger }*_M (\mathcal {V}_{\mathcal {I}_N,\mathcal {N}}^{\dagger }*_N \mathcal {U}_{\mathcal {M},\mathcal {I}_N}^{\dagger })^* *_L [\mathcal {U} *_N(\mathcal {V}_{\mathcal {I}_N,\mathcal {N}}^{\dagger }) ^ *]_{\mathcal {M},\mathcal {N}^{-1}}^{\dagger }. \end{aligned}$$

Proof

Let \(\mathcal {X} =(\mathcal {U}^\dag )^* *_N\mathcal {V}\) and \(\mathcal {Y}=\mathcal {U}*_N(\mathcal {V}^\dag )^*\). From Lemma 6(c), we get

$$\begin{aligned} {\mathfrak {R}}(\mathcal {X}^*) = {\mathfrak {R}}[(\mathcal {U}*_N\mathcal {V})^*]~~\text {and}~~ {\mathfrak {R}}(\mathcal {Y}) = {\mathfrak {R}}(\mathcal {U}*_N\mathcal {V}). \end{aligned}$$

Now, using Lemmas 6(b) and 5(b) along with the fact \((\mathcal {V}^\dag )^*= \mathcal {V}*_L\mathcal {V}^\dag *_N(\mathcal {V}^\dag )^*\), we obtain

$$\begin{aligned} \mathcal {X}^\dag *_M(\mathcal {V}^\dag *_N\mathcal {U}^\dag )^* *_L\mathcal {Y}^\dag= & {} \mathcal {X}^\dag *_M(\mathcal {U}^\dag )^* {*_N}\mathcal {V}*_L\mathcal {V}^\dag *_N(\mathcal {V}^\dag )^* *_L\mathcal {Y}^{\dagger }\\= & {} \mathcal {X}^\dag *_M\mathcal {X}*_L\mathcal {V}^\dag *_N(\mathcal {V}^\dag )^* *_L\mathcal {Y}^\dag \\= & {} (\mathcal {U}*_N\mathcal {V})^\dag *_M\mathcal {Y}*_L\mathcal {Y}^\dag =(\mathcal {U}*_N\mathcal {V})^\dag . \end{aligned}$$

Replacing \( \mathcal {U}\) and \(\mathcal {V} \) by \( \mathcal {M}^{1/2}{*_M}\mathcal {U} \) and \(\mathcal {V}*_L \mathcal {N}^{-1/2} \), respectively, in the above result, we get

$$\begin{aligned}&[(\mathcal {M}^{1/2} *_M \mathcal {U})*_N(\mathcal {V}*_L\mathcal {N}^{-1/2})]^{{\dagger }}\\&\quad =\{[(\mathcal {M}^{1/2}*_M\mathcal {U})^{{\dagger }}]^{*}*_N\mathcal {V}*_L\mathcal {N}^{-1/2}\}^{{\dagger }}*_M[(\mathcal {M}^{1/2}*_M\mathcal {U})^{{\dagger }}]^{*}*_N[(\mathcal {V}*_L\mathcal {N}^{-1/2})^{{\dagger }}]^{*}*_L\\&\qquad \{\mathcal {M}^{1/2}*_M\mathcal {U}{*_N}[(\mathcal {V}*_L\mathcal {N}^{-1/2})^{{\dagger }}]^{*}\}^{{\dagger }}\\&\quad =[\mathcal {M}^{-1/2}*_M(\mathcal {U}_{\mathcal {M},\mathcal {I}_N}^{{\dagger }})^{*}*_N\mathcal {V}*_L\mathcal {N}^{-1/2}]^{{\dagger }}*_M\mathcal {M}^{-1/2}*_M(\mathcal {U}_{\mathcal {M},\mathcal {I}_N}^{{\dagger }})^{*}*_N (\mathcal {V}_{\mathcal {I}_N,\mathcal {N}}^{{\dagger }})^{*}*_L \\&\qquad \mathcal {N}^{1/2}*_L[\mathcal {M}^{1/2}*_M\mathcal {U}*_N(\mathcal {V}_{\mathcal {I}_N,\mathcal {N}}^{{\dagger }})^{*}*_L\mathcal {N}^{1/2}]^{{\dagger }}. \end{aligned}$$

Substituting the above result in Eq. (18), we get the desired result. \(\square \)

Further, in connection with the range space of arbitrary-order tensors, the following theorem collects some useful identities for weighted Moore–Penrose inverses.

Theorem 6

Let \( \mathcal {U}\in {\mathbb {C}}^{I_1\times \cdots \times I_M \times J_1 \times \cdots \times J_N },~~\mathcal {V}\in {\mathbb {C}}^{J_1\times \cdots \times J_N \times K_1 \times \cdots \times K_L }\) and  \(\mathcal {W}\in {\mathbb {C}}^{K_1\times \cdots \times K_L \times H_1 \times \cdots \times H_R }\). Let \( \mathcal {A} = \mathcal {U}{*_N}\mathcal {V}{*_L}\mathcal {W}\), and let \( \mathcal {M} \in {\mathbb {C}}^{I_1\times \cdots \times I_M \times I_1 \times \cdots \times I_M} \) and \( \mathcal {N} \in {\mathbb {C}}^{H_1\times \cdots \times H_R \times H_1 \times \cdots \times H_R} \) be Hermitian positive definite tensors. Then

  1. (a)

    \(\mathcal {A}^\dag _{\mathcal {M},\mathcal {N}} = \mathcal {X}^{\dagger }_{\mathcal {I}_N,\mathcal {N}}{*_N}\mathcal {V}{*_L}\mathcal {Y}^{\dagger }_{\mathcal {M},\mathcal {I}_L} \), where \(\mathcal {X} = (\mathcal {U}{*_N}\mathcal {V}{*_L}\mathcal {V}^{\dagger })^{\dagger }{*_M}\mathcal {A} \) and \( \mathcal {Y} = \mathcal {A}{*_R}(\mathcal {V}^{\dagger }{*_N}\mathcal {V}{*_L}\mathcal {W})^{\dagger }\);

  2. (b)

    \(\mathcal {A}^\dag _{\mathcal {M},\mathcal {N}} = \mathcal {X}^{\dagger }_{\mathcal {I}_L,\mathcal {N}}{*_L}\mathcal {V}^*{*_N}\mathcal {V} {*_L}\mathcal {V}^*{*_N}\mathcal {Y}^{\dagger }_{\mathcal {M},\mathcal {I}_N} \), where \( \mathcal {X} = [\mathcal {U}{*_N}(\mathcal {V}^{\dagger })^*]^{\dagger }{*_M}\mathcal {A} \) and \( \mathcal {Y} = \mathcal {A}{*_R}[(\mathcal {V}^{\dagger })^*{*_L}\mathcal {W}]^{\dagger }\).

Proof

(a) From Eq. (18), we have

$$\begin{aligned} \mathcal {A}^{\dagger }_{\mathcal {M},\mathcal {N}} = \mathcal {N}^{-1/2}{*_R}[\mathcal {U}_1{*_N}\mathcal {V}{*_L}\mathcal {W}_1]^{\dagger }{*_M}\mathcal {M}^{1/2}, \end{aligned}$$

where \(\mathcal {U}_1 = \mathcal {M}^{1/2}{*_M}\mathcal {U} \) and \( \mathcal {W}_1 = \mathcal {W}{*_R}\mathcal {N}^{-1/2} \). On the other hand, by Eq. (12), we have

$$\begin{aligned} {\mathfrak {R}}(\mathcal {X}^*)= & {} {\mathfrak {R}}[(\mathcal {V}{*_L}\mathcal {W})^*{*_N}(\mathcal {U}{*_N}\mathcal {V}{*_L}\mathcal {V}^{\dagger })^*{*_M}\{(\mathcal {U}{*_N}\mathcal {V}{*_L}\mathcal {V}^{\dagger })^*\}^{\dagger }]\\= & {} {\mathfrak {R}}(\mathcal {A}^*) ~~\text {and}~~ {\mathfrak {R}}(\mathcal {Y}) = {\mathfrak {R}}(\mathcal {A}). \end{aligned}$$

Also by Lemma 5(c), we get

$$\begin{aligned} {\mathfrak {R}}[(\mathcal {X}^{\dagger })^*] \subseteq {\mathfrak {R}}[(\mathcal {U}{*_N}\mathcal {V}{*_L}\mathcal {V}^{\dagger })^{\dagger }] = {\mathfrak {R}}[(\mathcal {U}{*_N}\mathcal {V}{*_L}\mathcal {V}^{\dagger })^*]~~\text {and~~} {\mathfrak {R}}(\mathcal {Y}^{\dagger }) \subseteq {\mathfrak {R}}(\mathcal {V}^{\dagger }{*_N}\mathcal {V}{*_L}\mathcal {W}). \end{aligned}$$

Thus, using Lemmas 5(a, b) and 6(a, b), we have

$$\begin{aligned} \mathcal {X}^{\dagger }{*_N}\mathcal {V}{*_L}\mathcal {Y}^{\dagger }= & {} \mathcal {X}^{\dagger }{*_N}(\mathcal {U}{*_N}\mathcal {V}{*_L}\mathcal {V}^{\dagger })^{\dagger }{*_M}(\mathcal {U}{*_N}\mathcal {V}{*_L}\mathcal {V}^{\dagger }){*_N}\mathcal {V}{*_L}(\mathcal {V}^{\dagger }{*_N}\mathcal {V}{*_L}\mathcal {W}){*_R}\nonumber \\&\quad (\mathcal {V}^{\dagger }{*_N}\mathcal {V}{*_L}\mathcal {W})^{\dagger }{*_L}\mathcal {Y}^{\dagger }\nonumber \\= & {} \mathcal {A}^{\dagger }{*_M}\mathcal {Y}{*_L}\mathcal {Y}^{\dagger }= \mathcal {A}^{\dagger }. \end{aligned}$$
(20)

Replacing \(\mathcal {U}\) by \(\mathcal {U}_1\) and \(\mathcal {W}\) by \( \mathcal {W}_1\) in Eq. (20) and then using Lemma 11(a, b), we get

$$\begin{aligned} \mathcal {A}^{\dagger }_{\mathcal {M},\mathcal {N}}= & {} \mathcal {N}^{-1/2}{*_R}[(\mathcal {U}_1{*_N}\mathcal {V}{*_L}\mathcal {V}^{\dagger })^{\dagger }{*_M}\mathcal {M}^{1/2}{*_M}\mathcal {A}{*_R}\mathcal {N}^{-1/2}]^{\dagger }{*_N}\mathcal {V}{*_L}\\&\quad [\mathcal {M}^{1/2}{*_M}\mathcal {A}{*_R}\mathcal {N}^{-1/2}{*_R}(\mathcal {V}^{\dagger }{*_N}\mathcal {V}{*_L}\mathcal {W}_1)^{\dagger }]^{\dagger }{*_M}\mathcal {M}^{1/2}\\= & {} \mathcal {N}^{-1/2}{*_R}[(\mathcal {U}{*_N}\mathcal {V}{*_L}\mathcal {V}^{\dagger })^{\dagger }{*_M}\mathcal {A}{*_R}\mathcal {N}^{-1/2}]^{\dagger }{*_N}\mathcal {V} \\&\quad {*_L}[\mathcal {M}^{1/2}{*_M}\mathcal {A}{*_R}(\mathcal {V}^{\dagger }{*_N}\mathcal {V}{*_L}\mathcal {W})^{\dagger }]^{\dagger }{*_M}\mathcal {M}^{1/2} \\= & {} \mathcal {X}^{\dagger }_{\mathcal {I}_N,\mathcal {N}}{*_N}\mathcal {V}{*_L}\mathcal {Y}^{\dagger }_{\mathcal {M},\mathcal {I}_L}. \end{aligned}$$

(b) Following Lemma 3(b) and Eq. (12), we get

\({\mathfrak {R}}(\mathcal {A})= {\mathfrak {R}}(\mathcal {Y})\) and \({\mathfrak {R}}(\mathcal {X}^*) = {\mathfrak {R}}[(\mathcal {V}^* {*_N}\mathcal {V}*_L\mathcal {W})^* *_L(\mathcal {U}*_N(\mathcal {V}^\dag )^*)^* *_M\{( \mathcal {U}*_N(\mathcal {V}^\dag )^*)^*\}^\dag ] = {\mathfrak {R}}(\mathcal {A}^*)\). Also using Lemma 5(c)

$$\begin{aligned} {\mathfrak {R}}[(\mathcal {X}^\dag )^*] \subseteq {\mathfrak {R}}[\{ \mathcal {U}*_N(\mathcal {V}^\dag )^*\}^*] ~~~\text {and}~~~ {\mathfrak {R}}(\mathcal {Y}^\dag ) \subseteq {\mathfrak {R}}[(\mathcal {V}^\dag )^* *_L\mathcal {W}]. \end{aligned}$$

Using Lemmas 5(a, b) and 6(a, b), we obtain

$$\begin{aligned} \mathcal {X}^{\dagger }{*_L}\mathcal {V}^*{*_N}\mathcal {V} {*_L}\mathcal {V}^*{*_N}\mathcal {Y}^{\dagger }= & {} \mathcal {X}^\dag *_L[\mathcal {U}*_N(\mathcal {V}^\dag )^*]^\dag *_M[\mathcal {U}*_N(\mathcal {V}^\dag )^*]{*_L}\mathcal {V}^*{*_N}\mathcal {V} {*_L}\mathcal {V}^*{*_N}\nonumber \\&\quad [(\mathcal {V}^\dag )^* *_L\mathcal {W}]*_R[(\mathcal {V}^\dag )^* *_L\mathcal {W}]^\dag *_N\mathcal {Y}^\dag \nonumber \\= & {} \mathcal {X}^\dag *_L[\mathcal {U}*_N(\mathcal {V}^\dag )^*]^\dag *_M \mathcal {A}{*_R}[(\mathcal {V}^\dag )^* *_L\mathcal {W}]^\dag *_N \mathcal {Y}^\dag \nonumber \\= & {} \mathcal {A}^{\dagger }{*_M}\mathcal {A}{*_R}[(\mathcal {V}^\dag )^* *_L\mathcal {W}]^\dag *_N\mathcal {Y}^{\dagger }=\mathcal {A}^{\dagger }. \end{aligned}$$
(21)

Let \( \mathcal {U}_1 = \mathcal {M}^{1/2}{*_M}\mathcal {U},~~ \mathcal {W}_1 = \mathcal {W}{*_R}\mathcal {N}^{-1/2} \) and \( \mathcal {A}_1 = \mathcal {U}_1{*_N}\mathcal {V}{*_L}\mathcal {W}_1 \). Then using Eq. (21) and Lemma 11(a, b), we can write

$$\begin{aligned} \mathcal {A}_1 ^{\dagger }= & {} [\{\mathcal {U}_1{*_N}(\mathcal {V}^{\dagger })^*\}^{\dagger }{*_M}\mathcal {A}_1]^{\dagger }{*_L}\mathcal {V}^*{*_N}\mathcal {V} {*_L}\mathcal {V}^*{*_N}[\mathcal {A}_1{*_R}\{(\mathcal {V}^{\dagger })^*{*_L}\mathcal {W}_1\}^{\dagger }]^{\dagger }\\= & {} [\{\mathcal {U}{*_N}(\mathcal {V}^{\dagger })^*\}^{\dagger }{*_M}\mathcal {A}{*_R}\mathcal {N}^{-1/2} ]^{\dagger }{*_L}\mathcal {V}^*{*_N}\mathcal {V} {*_L}\mathcal {V}^*{*_N}[\mathcal {M}^{1/2}{*_M}\mathcal {A}{*_R}\{(\mathcal {V}^{\dagger })^*{*_L}\mathcal {W}\}^{\dagger }]^{\dagger }. \end{aligned}$$

Therefore, \( \mathcal {A}^{\dagger }_{\mathcal {M},\mathcal {N}} = \mathcal {N}^{-1/2} {*_R}\mathcal {A}_1 ^{\dagger }{*_M}\mathcal {M}^{1/2} = \mathcal {X}^{\dagger }_{\mathcal {I}_L,\mathcal {N}}{*_L}\mathcal {V}^*{*_N}\mathcal {V} {*_L}\mathcal {V}^*{*_N}\mathcal {Y}^{\dagger }_{\mathcal {M},\mathcal {I}_N} \). \(\square \)

Theorem 7

Let \(\mathcal {U}\in {\mathbb {C}}^{I_1\times \cdots \times I_M \times J_1 \times \cdots \times J_N }\), \( \mathcal {V}\in {\mathbb {C}}^{J_1\times \cdots \times J_N \times K_1 \times \cdots \times K_L }\) and \( \mathcal {W}\in {\mathbb {C}}^{K_1\times \cdots \times K_L \times H_1 \times \cdots \times H_R} \). Also let \( \mathcal {M} \in {\mathbb {C}}^{I_1\times \cdots \times I_M \times I_1 \times \cdots \times I_M} \) and \( \mathcal {N} \in {\mathbb {C}}^{H_1\times \cdots \times H_R \times H_1 \times \cdots \times H_R} \) be a pair of Hermitian positive definite tensors. Then

  1. (a)

    \( (\mathcal {U}*_N\mathcal {V}*_L\mathcal {W})_{\mathcal {M},\mathcal {N}}^{{\dagger }} = [(\mathcal {U}_{\mathcal {M},\mathcal {I}_N}^{{\dagger }})^{*}*_N\mathcal {V}*_L\mathcal {W}]_{\mathcal {M}^{-1},\mathcal {N}}^{{\dagger }}*_M(\mathcal {U}_{\mathcal {M},\mathcal {I}_N}^{{\dagger }})^{*}*_N\mathcal {V}*_L(\mathcal {W}_{\mathcal {I}_L,\mathcal {N}}^{{\dagger }})^{*}{*_R}[\mathcal {U}*_N\mathcal {V}*_L(\mathcal {W}_{\mathcal {I}_L,\mathcal {N}}^{{\dagger }})^{*}]_{\mathcal {M},\mathcal {N}^{-1}}^{{\dagger }} \);

  2. (b)

    \( (\mathcal {U}*_N\mathcal {V}*_L\mathcal {W})_{\mathcal {M},\mathcal {N}}^{{\dagger }} = [\{(\mathcal {U}*_N\mathcal {V})_{\mathcal {M},\mathcal {I}_L}^{{\dagger }}\}^{*}*_L\mathcal {W}]_{\mathcal {M}^{-1},\mathcal {N}}^{{\dagger }}*_M[(\mathcal {U}*_N\mathcal {V})_{\mathcal {M},\mathcal {I}_L}^{{\dagger }}]^{*}*_L\mathcal {V}^{{\dagger }}*_N[(\mathcal {V}*_L\mathcal {W})_{\mathcal {I}_N,\mathcal {N}}^{{\dagger }}]^{*}*_R[\mathcal {U}*_N\{(\mathcal {V}*_L\mathcal {W})_{\mathcal {I}_N,\mathcal {N}}^{{\dagger }}\}^{*}]_{\mathcal {M},\mathcal {N}^{-1}}^{{\dagger }} \).

Proof

(a) Let \( \mathcal {A} = \mathcal {U}{*_N}\mathcal {V}{*_L}\mathcal {W}\), \( \mathcal {X} = (\mathcal {U}^\dag )^*{*_N}\mathcal {V}{*_L}\mathcal {W} \) and \( \mathcal {Y}= \mathcal {U}{*_N}\mathcal {V}{*_L}(\mathcal {W}^\dag )^* \).

Using Lemma 6(c), we get

$$\begin{aligned} {\mathfrak {R}}(\mathcal {X}^*) = {\mathfrak {R}}(\mathcal {A}^*) ~~~\text {and}~~~ {\mathfrak {R}}(\mathcal {Y}) = {\mathfrak {R}}(\mathcal {A}). \end{aligned}$$

Further, using the fact that \((\mathcal {W}^\dag )^*= \mathcal {W}*_R\mathcal {W}^\dag *_L(\mathcal {W}^\dag )^*\), together with Lemma 5(b) and Lemma 6(b), we can write

$$\begin{aligned} \mathcal {X}^\dag *_M(\mathcal {U}^\dag )^* *_N\mathcal {V}*_L(\mathcal {W}^\dag )^* *_R\mathcal {Y}^\dag= & {} \mathcal {X}^\dag *_M(\mathcal {U}^\dag )^* *_N\mathcal {V}*_L\mathcal {W}*_R\mathcal {W}^\dag *_L(\mathcal {W}^\dag )^* *_R\mathcal {Y}^\dag \\= & {} \mathcal {A}^\dag {*_M}\mathcal {Y} {*_R}\mathcal {Y}^\dag =\mathcal {A}^\dag . \end{aligned}$$

Applying the above result to \([({\mathcal {M}^{1/2} *_M\mathcal {U}){*_N}\mathcal {V}{*_L}(\mathcal {W}{*_R}\mathcal {N}^{-1/2})}]^{\dagger }\) and using Lemma 11(c), (d), we get

$$\begin{aligned}&({\mathcal {U}*_N\mathcal {V}*_L\mathcal {W}})_{\mathcal {M},\mathcal {N}}^{\dagger }\\&\quad = \mathcal {N}^{-1/2}{*_R}\{[(\mathcal {M}^{1/2}*_M\mathcal {U})^{{\dagger }}]^{*}*_N\mathcal {V}*_L\mathcal {W}*_R\mathcal {N}^{-1/2}\}^{{\dagger }}*_M[(\mathcal {M}^{1/2}*_M\mathcal {U})^{{\dagger }}]^{*}*_N \mathcal {V} *_L \\&\qquad [(\mathcal {W}*_R\mathcal {N}^{-1/2})^{{\dagger }}]^{*} *_R[\mathcal {M}^{1/2} *_M\mathcal {U} *_N\mathcal {V}*_L \{(\mathcal {W}*_R\mathcal {N}^{-1/2})^{\dagger }\}^*]^{\dagger }{*_M}\mathcal {M}^{1/2} \\&\quad = \mathcal {N}^{-1/2}{*_R}[\mathcal {M}^{-1/2}*_M(\mathcal {U}_{\mathcal {M},\mathcal {I}_N}^{{\dagger }})^{*}*_N\mathcal {V}*_L\mathcal {W}*_R\mathcal {N}^{-1/2}]^{{\dagger }}*_M{\mathcal {M}}^{-1/2}*_M(\mathcal {U}_{\mathcal {M},\mathcal {I}_N}^\dag )^**_N \\&\qquad \mathcal {V}*_L(\mathcal {W}_{\mathcal {I}_L,\mathcal {N}}^{{\dagger }})^{*}*_R{\mathcal {N}^{1/2}}*_R[\mathcal {M}^{1/2}*_M\mathcal {U}*_N\mathcal {V}*_L(\mathcal {W}_{\mathcal {I}_L,\mathcal {N}}^{{\dagger }})^{*}*_R\mathcal {N}^{1/2}]^{{\dagger }} {*_M}\mathcal {M}^{1/2} \\&\quad = [(\mathcal {U}_{\mathcal {M},\mathcal {I}_N}^{{\dagger }})^{*}*_N\mathcal {V}*_L\mathcal {W}]_{\mathcal {M}^{-1},\mathcal {N}}^{{\dagger }}*_M(\mathcal {U}_{\mathcal {M},\mathcal {I}_N}^{{\dagger }})^{*}*_N\mathcal {V}*_L (\mathcal {W}_{\mathcal {I}_L,\mathcal {N}}^{{\dagger }})^{*}{*_R}\\&\qquad [\mathcal {U}*_N\mathcal {V}*_L(\mathcal {W}_{\mathcal {I}_L,\mathcal {N}}^{{\dagger }})^{*}]_{\mathcal {M},\mathcal {N}^{-1}}^{{\dagger }}. \end{aligned}$$

(b) Let \( \mathcal {A} = \mathcal {U}{*_N}\mathcal {V}{*_L}\mathcal {W}\), \( \mathcal {X} = [(\mathcal {U}*_N\mathcal {V})^\dag ]^* *_L\mathcal {W} \) and \( \mathcal {Y} = \mathcal {U}*_N[(\mathcal {V}*_L\mathcal {W})^\dag ]^* \). Using Lemma 6(c), we get

$$\begin{aligned} {\mathfrak {R}}(\mathcal {X}^*)= & {} {\mathfrak {R}}(\mathcal {A}^*),~~~ {\mathfrak {R}}(\mathcal {Y}) = {\mathfrak {R}}(\mathcal {A}) ~~\text {and}~~ \\ {\mathfrak {R}}[\{(\mathcal {U}{*_N}\mathcal {V})^{\dagger }\}^*]^*= & {} {\mathfrak {R}}[(\mathcal {U}{*_N}\mathcal {V})^{\dagger }] = {\mathfrak {R}}[(\mathcal {U}{*_N}\mathcal {V})^*] \subseteq {\mathfrak {R}}(\mathcal {V}^*). \end{aligned}$$

From Lemma 3(a), one can write \( [(\mathcal {V}{*_L}\mathcal {W})^{\dagger }]^* = (\mathcal {V}{*_L}\mathcal {W}){*_R}(\mathcal {V}{*_L}\mathcal {W})^{\dagger }{*_N}[(\mathcal {V}{*_L}\mathcal {W})^{\dagger }]^* \).

Now using Lemma 5(b) and Lemma 6(b), we obtain

$$\begin{aligned}&\mathcal {X}^\dag {*_M}[(\mathcal {U}{*_N}\mathcal {V})^\dag ]^* *_L\mathcal {V}^\dag *_N[(\mathcal {V}*_L\mathcal {W})^\dag ]^* *_R\mathcal {Y}^\dag \\&\quad = \mathcal {X}^\dag *_M[(\mathcal {U}*_N\mathcal {V})^\dag ]^* *_L\mathcal {V}^\dag *_N\mathcal {V}*_L\mathcal {W}*_R(\mathcal {V}*_L\mathcal {W})^\dag *_N[(\mathcal {V}*_L\mathcal {W})^\dag ]^* {*_R}\mathcal {Y}^\dag \\&\quad = \mathcal {X}^\dag *_M\mathcal {X}*_R(\mathcal {V}*_L\mathcal {W})^\dag *_N[(\mathcal {V}*_L\mathcal {W})^\dag ]^* *_R\mathcal {Y}^\dag \\&\quad = \mathcal {A}^\dag *_M\mathcal {U}*_N[(\mathcal {V}*_L\mathcal {W})^\dag ]^* *_R\mathcal {Y}^\dag = \mathcal {A}^\dag . \end{aligned}$$

Replacing \( \mathcal {U}\) and \(\mathcal {W} \) by \( \mathcal {M}^{1/2}{*_M}\mathcal {U} \) and \(\mathcal {W}{*_R}\mathcal {N}^{-1/2} \), respectively, in the above result, we have

$$\begin{aligned}&[{(\mathcal {M}^{1/2}*_M\mathcal {U})*_N\mathcal {V}*_L (\mathcal {W}*_R\mathcal {N}^{-1/2})}]^\dag \\&\quad =[\{(\mathcal {M}^{1/2}*_M\mathcal {U}*_N\mathcal {V})^\dag \}^* *_L\mathcal {W}*_R\mathcal {N}^{-1/2}]^\dag *_M[(\mathcal {M}^{1/2}*_M\mathcal {U}*_N\mathcal {V})^\dag ]^* *_L\mathcal {V}^\dag \\&\qquad *_N[(\mathcal {V}*_L\mathcal {W}*_R\mathcal {N}^{-1/2})^\dag ]^* *_R [\mathcal {M}^{1/2}*_M\mathcal {U}*_N\{(\mathcal {V}*_L\mathcal {W}*_R\mathcal {N}^{-1/2})^\dag \}^*]^\dag \\&\quad =\{\mathcal {M}^{-1/2}{*_M}[(\mathcal {U}*_N\mathcal {V})_{\mathcal {M},\mathcal {I}_L}^\dag ]^* *_L\mathcal {W}*_R\mathcal {N}^{-1/2}\}^\dag *_M\mathcal {M}^{-1/2}*_M[(\mathcal {U}*_N\mathcal {V})_{\mathcal {M},\mathcal {I}_L}^\dag ]^* *_L\mathcal {V}^\dag \\&\qquad *_N[(\mathcal {V}*_L\mathcal {W})_{\mathcal {I}_N, \mathcal {N}}^\dag ]^* *_R\mathcal {N}^{1/2} *_R[\mathcal {M}^{1/2}*_M\mathcal {U}*_N\{(\mathcal {V}*_L\mathcal {W})_{\mathcal {I}_N,\mathcal {N}}^\dag \}^* *_R\mathcal {N}^{1/2}]^\dag . \end{aligned}$$

Substituting the above result in Eq. (18) one can get the desired result. \(\square \)

Theorem 8

Let \( \mathcal {U}\in {\mathbb {C}}^{I_1\times \cdots \times I_M \times J_1 \times \cdots \times J_N} \), \( \mathcal {V}\in {\mathbb {C}}^{J_1\times \cdots \times J_N \times K_1 \times \cdots \times K_L} \) and \( \mathcal {W} \in {\mathbb {C}}^{K_1\times \cdots \times K_L \times H_1 \times \cdots \times H_R} \). Let \( \mathcal {A} = \mathcal {U}{*_N}\mathcal {V}{*_L}\mathcal {W} \), and let \(\mathcal {M} \in {\mathbb {C}}^{I_1\times \cdots \times I_M \times I_1 \times \cdots \times I_M} \) and \(\mathcal {N} \in {\mathbb {C}}^{H_1\times \cdots \times H_R \times H_1 \times \cdots \times H_R} \) be a pair of Hermitian positive definite tensors. Then

$$\begin{aligned} \mathcal {A}^\dag _{\mathcal {M},\mathcal {N}} = \mathcal {X}^{\dagger }_{\mathcal {I}_N,\mathcal {N}}{*_N}\mathcal {V}{*_L}\mathcal {Y}^{\dagger }_{\mathcal {M},\mathcal {I}_L}, \end{aligned}$$

where \(\mathcal {X} = \mathcal {U}^{\dagger }{*_M}\mathcal {A} \) and \( \mathcal {Y} = \mathcal {A}{*_R}\mathcal {W}^{\dagger }\).

Proof

Let \( \mathcal {U}_1 = \mathcal {M}^{1/2}{*_M}\mathcal {U} \) and \( \mathcal {W}_1 = \mathcal {W}{*_R}\mathcal {N}^{-1/2} \). It is known, from Eq. (18),

$$\begin{aligned} \mathcal {A}^{\dagger }_{\mathcal {M},\mathcal {N}} = \mathcal {N}^{-1/2}{*_R}(\mathcal {U}_1 {*_N}\mathcal {V}{*_L}\mathcal {W}_1)^{\dagger }{*_M}\mathcal {M}^{1/2}. \end{aligned}$$

Now using Eq. (12) we have \({\mathfrak {R}}(\mathcal {X}^*) = {\mathfrak {R}}[(\mathcal {U}*_N\mathcal {V}*_L\mathcal {W})^* *_M(\mathcal {U}^*)^\dag ] ={\mathfrak {R}}(\mathcal {A}^*) \) and \( {\mathfrak {R}}(\mathcal {Y}) = {\mathfrak {R}}(\mathcal {A})\). Also, from Lemma 5(c), we have

$$\begin{aligned} {\mathfrak {R}}[(\mathcal {X}^\dag )^*]= & {} {\mathfrak {R}}(\mathcal {X}) \subseteq {\mathfrak {R}}(\mathcal {U}^\dag ) = {\mathfrak {R}}(\mathcal {U}^*) ~\text {and}~ \\ {\mathfrak {R}}(\mathcal {Y}^\dag )= & {} {\mathfrak {R}}(\mathcal {Y}^*) ={\mathfrak {R}}[({\mathcal {A}}*_R{\mathcal {W}}^\dag )^*]= {\mathfrak {R}}[({\mathcal {U}}*_N{\mathcal {V}}*_L({\mathcal {W}}*_R{\mathcal {W}}^\dag ))^*] \\= & {} {\mathfrak {R}}[({\mathcal {W}}*_R{\mathcal {W}}^\dag )^**_L({\mathcal {U}}*_N{\mathcal {V}})^*]\\= & {} {\mathfrak {R}}(\mathcal {W}*_R\mathcal {W}^\dag *_L(\mathcal {U}*_N\mathcal {V})^*) \subseteq {\mathfrak {R}}(\mathcal {W}). \end{aligned}$$

Using Lemmas 5(a, b) and 6(a, b), we get

$$\begin{aligned} \mathcal {X}^\dag *_N\mathcal {V} *_L\mathcal {Y}^\dag= & {} \mathcal {X}^\dag *_N\mathcal {U}^\dag *_M \mathcal {U}*_N\mathcal {V}*_L\mathcal {W}*_R\mathcal {W}^\dag *_L\mathcal {Y}^\dag \nonumber \\= & {} \mathcal {A}^\dag *_M\mathcal {Y}*_L\mathcal {Y}^\dag = \mathcal {A}^\dag . \end{aligned}$$
(22)

Using Lemma 11(a, b) one can conclude

$$\begin{aligned} (\mathcal {U}_1 {*_N}\mathcal {V} {*_L}\mathcal {W}_1)^{\dagger }= & {} [\mathcal {U}_1^{\dagger }{*_M}\mathcal {M}^{1/2}{*_M}\mathcal {A}{*_R}\mathcal {N}^{-1/2} ]^{\dagger }{*_N}\mathcal {V}{*_L}[\mathcal {M}^{1/2}{*_M}\mathcal {A}{*_R}\mathcal {N}^{-1/2}{*_R}\mathcal {W}_1 ^{\dagger }]^{\dagger }\\= & {} [\mathcal {U}^{\dagger }{*_M}\mathcal {A}{*_R}\mathcal {N}^{-1/2} ]^{\dagger }{*_N}\mathcal {V}{*_L}[\mathcal {M}^{1/2}{*_M}\mathcal {A}{*_R}\mathcal {W}^{\dagger }]^{\dagger }. \\ \text {Hence,} ~ \mathcal {A}^{\dagger }_{\mathcal {M},\mathcal {N}}= & {} \mathcal {N}^{-1/2}{*_R}[\mathcal {U}^{\dagger }{*_M}\mathcal {A}{*_R}\mathcal {N}^{-1/2} ]^{\dagger }{*_N}\mathcal {V}{*_L}[\mathcal {M}^{1/2}{*_M}\mathcal {A}{*_R}\mathcal {W}^{\dagger }]^{\dagger }{*_M}\mathcal {M}^{1/2}\\= & {} \mathcal {X}^{\dagger }_{\mathcal {I}_N,\mathcal {N}}{*_N}\mathcal {V}{*_L}\mathcal {Y}^{\dagger }_{\mathcal {M},\mathcal {I}_L}. \end{aligned}$$

Hence, the proof is complete. \(\square \)

By Lemma 11(a, b), Eq. (18) and Eq. (22), we have

Corollary 3

Let \( \mathcal {U}\in {\mathbb {C}}^{I_1\times \cdots \times I_M \times J_1 \times \cdots \times J_N }\), \(\mathcal {V}\in {\mathbb {C}}^{J_1\times \cdots \times J_N \times K_1 \times \cdots \times K_L }\) and \(\mathcal {W}\in {\mathbb {C}}^{K_1\times \cdots \times K_L \times H_1\times \cdots \times H_R }\). Let \( \mathcal {A} = \mathcal {U}{*_N}\mathcal {V} {*_L}\mathcal {W} \). Let \( \mathcal {M} \in {\mathbb {C}}^{I_1\times \cdots \times I_M \times I_1 \times \cdots \times I_M}\), \(\mathcal {N} \in {\mathbb {C}}^{H_1\times \cdots \times H_R \times H_1 \times \cdots \times H_R} \), \( \mathcal {P} \in {\mathbb {C}}^{J_1\times \cdots \times J_N \times J_1 \times \cdots \times J_N} \) and \( \mathcal {Q} \in {\mathbb {C}}^{K_1\times \cdots \times K_L \times K_1 \times \cdots \times K_L} \) be Hermitian positive definite tensors. Then the weighted Moore–Penrose inverse of \( \mathcal {A} \) with respect to \(\mathcal {M}\) and \(\mathcal {N}\) satisfies the following identities:

  1. (a)

    \(\mathcal {A}_{\mathcal {M},\mathcal {N}}^{{\dagger }} =(\mathcal {U}_{\mathcal {I}_M,\mathcal {P}}^{{\dagger }}{*_M}\mathcal {A})_{\mathcal {P},\mathcal {N}}^{{\dagger }}{*_N}\mathcal {V}{*_L}(\mathcal {A}{*_R}\mathcal {W}_{\mathcal {Q},\mathcal {I}_R}^{{\dagger }})_{\mathcal {M},\mathcal {Q}}^{{\dagger }},\)

  2. (b)

    \( \mathcal {A}_{\mathcal {M},\mathcal {N}}^{{\dagger }} = [(\mathcal {U}*_N\mathcal {V}*_L\mathcal {V}_{\mathcal {P},\mathcal {I}_L}^{{\dagger }})_{\mathcal {M},\mathcal {P}}^{{\dagger }}*_M\mathcal {A}]_{\mathcal {P},\mathcal {N}}^{{\dagger }}*_N\mathcal {V}*_L[\mathcal {A}{*_R}(\mathcal {V}_{\mathcal {I}_N,\mathcal {Q}}^{{\dagger }}*_N\mathcal {V}*_L\mathcal {W})_{\mathcal {Q},\mathcal {N}}^{{\dagger }}]_{\mathcal {M},\mathcal {Q}}^{{\dagger }}\).

2.3 The full rank decomposition

Tensors and their decompositions originally appeared in 1927 (Hitchcock 1927). The idea of decomposing a tensor as a product of tensors with a more desirable structure may well be one of the most important developments in numerical analysis, e.g., for the implementation of numerically efficient algorithms and the solution of multilinear systems (Kolda and Bader 2009; Che and Wei 2019; Martin and Loan 2008; Kolda 2001). In this section, we focus on the full rank decomposition of a tensor. Unfortunately, it is very difficult to compute the rank of a tensor. However, the authors of Stanimirović et al. (2020) introduced a useful and effective notion of tensor rank, termed the reshaping rank. With the help of the reshaping rank, we present one of our important results, the full rank decomposition of an arbitrary-order tensor.

Theorem 9

Let \( \mathcal {A} \in {\mathbb {C}}^{I_1 \times \cdots \times I_M \times J_1 \times \cdots \times J_N} \). Then there exist a left invertible tensor \( \mathcal {F} \in {\mathbb {C}}^{I_1 \times \cdots \times I_M \times H_1 \times \cdots \times H_R} \) and a right invertible tensor \( \mathcal {G} \in {\mathbb {C}}^{H_1 \times \cdots \times H_R \times J_1 \times \cdots \times J_N} \) such that

$$\begin{aligned} \mathcal {A} = \mathcal {F} {*_R}\mathcal {G}, \end{aligned}$$
(23)

where \( rshrank(\mathcal {F}) =rshrank(\mathcal {G})= rshrank(\mathcal {A}) = r = H_1 \cdots H_R \). This is called the full rank decomposition of the tensor \( \mathcal {A} \).

Proof

Let the matrix \(A = rsh(\mathcal {A}) \in {\mathbb {C}}^{I_1 \cdots I_M \times J_1 \cdots J_N}\). Then \(rank(A) = r\), and A admits a full rank decomposition, as follows:

$$\begin{aligned} A = FG, \end{aligned}$$
(24)

where \( F \in {\mathbb {C}}^{I_1 \cdots I_M \times H_1 \cdots H_R} \) is a full column rank matrix and \( G \in {\mathbb {C}}^{H_1 \cdots H_R \times J_1 \cdots J_N} \) is a full row rank matrix. From Eqs. (8) and (24), we obtain

$$\begin{aligned} rsh^{-1}(A)=rsh^{-1}(FG)= rsh^{-1}(F)*_R rsh^{-1}(G), \end{aligned}$$
(25)

where \(\mathcal {F} = rsh^{-1}(F) \in {\mathbb {C}}^{I_1 \times \cdots \times I_M \times H_1 \times \cdots \times H_R} ~\text { and }~ \mathcal {G} = rsh^{-1}(G) \in {\mathbb {C}}^{H_1 \times \cdots \times H_R \times J_1 \times \cdots \times J_N}.\) It follows that

$$\begin{aligned} \mathcal {A} = \mathcal {F} {*_R}\mathcal {G}, \end{aligned}$$

where \(\mathcal {F} \in {\mathbb {C}}^{I_1 \times \cdots \times I_M \times H_1 \times \cdots \times H_R} \) is the left invertible tensor and \(\mathcal {G} \in {\mathbb {C}}^{H_1 \times \cdots \times H_R \times J_1 \times \cdots \times J_N}\) is a right invertible tensor. \(\square \)
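The proof of Theorem 9 is constructive and can be mirrored numerically: reshape \(\mathcal {A}\), compute a matrix full rank decomposition, and fold the factors back with \(rsh^{-1}\). Below is a minimal sketch, assuming that \(rsh\) is realised by row-major flattening and using a thin SVD to produce the full-rank factors (any rank-revealing factorization of \(rsh(\mathcal {A})\) would serve).

```python
import numpy as np

rng = np.random.default_rng(3)
I1, I2, J1, J2, r = 3, 2, 2, 2, 3            # rshrank(A) = r (here a single H-mode, H_1 = 3)

# Build A with reshaping rank r, then mirror the proof of Theorem 9:
# decompose rsh(A) = F G and fold the factors back with rsh^{-1}.
A_mat = rng.standard_normal((I1 * I2, r)) @ rng.standard_normal((r, J1 * J2))
A = A_mat.reshape(I1, I2, J1, J2)

# Full rank decomposition of the reshaped matrix via a thin SVD.
U, s, Vh = np.linalg.svd(A_mat)
F_mat = U[:, :r] * s[:r]                     # full column rank,  I1I2 x r
G_mat = Vh[:r, :]                            # full row rank,     r x J1J2

F = F_mat.reshape(I1, I2, r)                 # rsh^{-1}(F): left invertible tensor
G = G_mat.reshape(r, J1, J2)                 # rsh^{-1}(G): right invertible tensor

# The Einstein product F *_1 G (contraction over the single H-mode) reproduces A.
A_rec = np.einsum('abh,hcd->abcd', F, G)
print(np.allclose(A_rec, A))
print(np.linalg.matrix_rank(F_mat), np.linalg.matrix_rank(G_mat))  # both equal r
```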

The above theorem was proved earlier in a different way (see Lemma 2.3(a), Liang and Zheng 2019). Here, we have provided another proof, via the reshape operation. Further, the authors of Liang and Zheng (2019) computed the Moore–Penrose inverse of a tensor using the full rank decomposition of tensors as follows:

Lemma 12

(Theorem 3.7, Liang and Zheng 2019) If the full rank decomposition of a tensor \( \mathcal {A} \in {\mathbb {C}}^{I_1 \times \cdots \times I_M \times J_1 \times \cdots \times J_N}\) is given as in Theorem 9, then

$$\begin{aligned} \mathcal {A}^{\dagger }= \mathcal {G}^**_R(\mathcal {F}^**_M\mathcal {A}*_N\mathcal {G}^*)^{-1}*_R\mathcal {F}^*. \end{aligned}$$
(26)

Now, the following theorem expresses the weighted Moore–Penrose inverse of a tensor \( \mathcal {A} \in {\mathbb {C}}^{I_1 \times \cdots \times I_M \times J_1 \times \cdots \times J_N}\) in terms of an ordinary tensor inverse.

Theorem 10

If the full rank decomposition of a tensor \( \mathcal {A} \in {\mathbb {C}}^{I_1 \times \cdots \times I_M \times J_1 \times \cdots \times J_N} \) is given by Eq. (23), then the weighted Moore–Penrose inverse of \(\mathcal {A} \) can be written as

$$\begin{aligned} \mathcal {A}^{\dagger }_{\mathcal {M},\mathcal {N}} = \mathcal {N}^{-1}{*_N}\mathcal {G}^* {*_R}(\mathcal {F}^* {*_M}\mathcal {M} {*_M}\mathcal {A} {*_N}\mathcal {N}^{-1}{*_N}\mathcal {G}^*)^{-1} {*_R}\mathcal {F}^* {*_M}\mathcal {M}, \end{aligned}$$

where \( \mathcal {M} \in {\mathbb {C}}^{I_1\times \cdots \times I_M \times I_1 \times \cdots \times I_M} \) and \( \mathcal {N} \in {\mathbb {C}}^{J_1\times \cdots \times J_N \times J_1 \times \cdots \times J_N} \) are Hermitian positive definite tensors.

Proof

From Eq. (18), we have

$$\begin{aligned} \mathcal {A}^{\dagger }_{\mathcal {M},\mathcal {N}} = \mathcal {N}^{-1/2}{*_N}(\mathcal {M}^{1/2} {*_M}\mathcal {A}{*_N}\mathcal {N}^{-1/2})^{\dagger }{*_M}\mathcal {M}^{1/2} = \mathcal {N}^{-1/2}{*_N}\mathcal {B}^{\dagger }{*_M}\mathcal {M}^{1/2}, \end{aligned}$$

where \(\mathcal {B} = (\mathcal {M}^{1/2} {*_M}\mathcal {F}) {*_R}(\mathcal {G} {*_N}\mathcal {N}^{-1/2})\), and \(\mathcal {M}\) and \(\mathcal {N}\) are Hermitian positive definite tensors. Here \(\mathcal {B}\) is given in full rank decomposed form, since both \(\mathcal {M}^{1/2} \) and \( \mathcal {N}^{1/2} \) are invertible. Now, from Lemma 12, we get

$$\begin{aligned} \mathcal {B}^{\dagger }= & {} (\mathcal {G} {*_N}\mathcal {N}^{-1/2})^* {*_R}[(\mathcal {M}^{1/2} {*_M}\mathcal {F})^* {*_M}(\mathcal {M}^{1/2} {*_M}\mathcal {F}) \\&{*_R}(\mathcal {G} {*_N}\mathcal {N}^{-1/2}) {*_N}(\mathcal {G} {*_N}\mathcal {N}^{-1/2})^*]^{-1} {*_R}(\mathcal {M}^{1/2} {*_M}\mathcal {F})^* \\= & {} \mathcal {N}^{-1/2}{*_N}\mathcal {G}^* {*_R}(\mathcal {F}^* {*_M}\mathcal {M} {*_M}\mathcal {A} {*_N}\mathcal {N}^{-1} {*_N}\mathcal {G}^*)^{-1} {*_R}\mathcal {F}^* {*_M}\mathcal {M}^{1/2}. \end{aligned}$$

Therefore, \( \mathcal {A}^{\dagger }_{\mathcal {M},\mathcal {N}} = \mathcal {N}^{-1} {*_N}\mathcal {G}^* {*_R}(\mathcal {F}^* {*_M}\mathcal {M} {*_M}\mathcal {A} {*_N}\mathcal {N}^{-1} {*_N}\mathcal {G}^*)^{-1}{*_R}\mathcal {F}^* {*_M}\mathcal {M} \). \(\square \)
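In reshaped form, the closed formula of Theorem 10 can be checked directly against Eq. (18). The following is a minimal sketch with randomly generated full-rank factors and hypothetical Hermitian positive definite weights; taking \(\mathcal {M}\) and \(\mathcal {N}\) to be identity tensors recovers Lemma 12.

```python
import numpy as np

def herm_sqrt(P):
    w, V = np.linalg.eigh(P)
    return (V * np.sqrt(w)) @ V.conj().T

def random_hpd(k, rng):
    B = rng.standard_normal((k, k)) + 1j * rng.standard_normal((k, k))
    return B @ B.conj().T + k * np.eye(k)

rng = np.random.default_rng(4)
m, n, r = 6, 5, 3                             # reshaped sizes and reshaping rank
F = rng.standard_normal((m, r)) + 1j * rng.standard_normal((m, r))   # full column rank
G = rng.standard_normal((r, n)) + 1j * rng.standard_normal((r, n))   # full row rank
A = F @ G                                     # full rank decomposition, Eq. (23) in reshaped form

M, N = random_hpd(m, rng), random_hpd(n, rng)
N_inv = np.linalg.inv(N)
Gs, Fs = G.conj().T, F.conj().T

# Theorem 10: A^dagger_{M,N} = N^{-1} G* (F* M A N^{-1} G*)^{-1} F* M
X_thm10 = N_inv @ Gs @ np.linalg.inv(Fs @ M @ A @ N_inv @ Gs) @ Fs @ M

# Reference value from Eq. (18)
Mh, Nh = herm_sqrt(M), herm_sqrt(N)
X_ref = np.linalg.inv(Nh) @ np.linalg.pinv(Mh @ A @ np.linalg.inv(Nh)) @ Mh

print(np.allclose(X_thm10, X_ref))
```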

In particular, when the arbitrary-order tensor \( \mathcal {A} \) is either left invertible or right invertible, we have the following results.

Corollary 4

Let a tensor \( \mathcal {A} \in {\mathbb {C}}^{I_1 \times \cdots \times I_M \times J_1 \times \cdots \times J_N} \) have the full rank decomposition of Theorem 9.

  1. (a)

    If the tensor \(\mathcal {A} \) is left invertible, then \(\mathcal {A}^{\dagger }_{\mathcal {M},\mathcal {N}} = \mathcal {N}^{-1}{*_N}(\mathcal {A}^*{*_M}\mathcal {M}{*_M} \mathcal {A}{*_N}\mathcal {N}^{-1})^{-1}{*_N}\mathcal {A}^* {*_M}\mathcal {M} \).

  2. (b)

    If the tensor \(\mathcal {A} \) is right invertible, then \( \mathcal {A}^{\dagger }_{\mathcal {M},\mathcal {N}} = \mathcal {N}^{-1} {*_N}\mathcal {A}^* {*_M}(\mathcal {M}{*_M} \mathcal {A}{*_N}\mathcal {N}^{-1}{*_N}\mathcal {A}^*)^{-1} {*_M}\mathcal {M} \).

It is easy to see that the full rank factorization of a tensor \(\mathcal {A} \in {\mathbb {C}}^{I_1 \times \cdots \times I_M \times J_1 \times \cdots \times J_N}\) is not unique: if \( \mathcal {A} = \mathcal {F} *_R\mathcal {G}\) is one full rank factorization, where \( \mathcal {F} \in {\mathbb {C}}^{I_1 \times \cdots \times I_M \times H_1 \times \cdots \times H_R} \) is a left invertible tensor and \( \mathcal {G} \in {\mathbb {C}}^{H_1 \times \cdots \times H_R \times J_1 \times \cdots \times J_N} \) is a right invertible tensor, then for any invertible tensor \(\mathcal {P}\) of appropriate size, \(\mathcal {A} = (\mathcal {F} *_R\mathcal {P})*_R(\mathcal {P}^{-1}*_R\mathcal {G})\) is another full rank factorization. The following theorem records this fact.

Theorem 11

Let \( \mathcal {A} \in {\mathbb {C}}^{I_1 \times \cdots \times I_M \times J_1 \times \cdots \times J_N} \) with \( rshrank(\mathcal {A}) = r = H_1 H_2 \cdots H_R \). Then \( \mathcal {A} \) has infinitely many full rank decompositions. Moreover, if \(\mathcal {A}\) has two full rank decompositions, as follows:

$$\begin{aligned} \mathcal {A} = \mathcal {F}{*_R}\mathcal {G} = \mathcal {F}_1 {*_R}\mathcal {G}_1, \end{aligned}$$

where \( \mathcal {F},~\mathcal {F}_1 \in {\mathbb {C}}^{I_1 \times \cdots \times I_M \times H_1 \times \cdots \times H_R} \) and \( \mathcal {G},~\mathcal {G}_1 \in {\mathbb {C}}^{H_1 \times \cdots \times H_R \times J_1 \times \cdots \times J_N} \), then there exists an invertible tensor \( \mathcal {B} \) such that

$$\begin{aligned} \mathcal {F}_1 = \mathcal {F}{*_R}\mathcal {B}~~~and~~~ \mathcal {G}_1 = \mathcal {B}^{-1} {*_R}\mathcal {G}. \end{aligned}$$

Moreover,

$$\begin{aligned} \mathcal {F}_1 ^{\dagger }= (\mathcal {F} {*_R}\mathcal {B})^{\dagger }= \mathcal {B}^{-1} {*_R}\mathcal {F}^{\dagger }~~ and ~~ \mathcal {G}_1 ^{\dagger }= (\mathcal {B}^{-1} {*_R}\mathcal {G})^{\dagger }= \mathcal {G}^{\dagger }{*_R}\mathcal {B}. \end{aligned}$$

Proof

Suppose the tensor \( \mathcal {A} \in {\mathbb {C}}^{I_1 \times \cdots \times I_M \times J_1 \times \cdots \times J_N} \) has two full rank decompositions, as follows:

$$\begin{aligned} \mathcal {A} = \mathcal {F}{*_R}\mathcal {G} = \mathcal {F}_1 {*_R}\mathcal {G}_1, \end{aligned}$$
(27)

where \(\mathcal {F}, ~\mathcal {F}_1 \in {\mathbb {C}}^{I_1 \times \cdots \times I_M \times H_1 \times \cdots \times H_R}\) and \(\mathcal {G}, ~\mathcal {G}_1 \in {\mathbb {C}}^{H_1 \times \cdots \times H_R \times J_1 \times \cdots \times J_N} \). Then

$$\begin{aligned} \mathcal {F} {*_R}\mathcal {G}{*_N}\mathcal {G}_1 ^{\dagger }= \mathcal {F}_1 {*_R}\mathcal {G}_1 {*_N}\mathcal {G}_1 ^{\dagger }. \end{aligned}$$

Substituting \( \mathcal {M}= \mathcal {I}_M \) and \(\mathcal {N} = \mathcal {I}_N \) in Corollary 4(b) we have \( \mathcal {G}_1 {*_N}\mathcal {G}_1 ^{\dagger }= \mathcal {I}_R\).

Therefore, \(\mathcal {F}_1 = \mathcal {F}{*_R}(\mathcal {G}{*_N}\mathcal {G}_1 ^{\dagger })\); similarly, we find \(\mathcal {G}_1 = (\mathcal {F}_1^{\dagger }{*_M}\mathcal {F}){*_R}\mathcal {G}.\)

Let \( rsh(\mathcal {G}) = G =reshape(\mathcal {G},r,J_1\cdots J_N)\) and \( rsh(\mathcal {G}_1) = G_1 =reshape(\mathcal {G}_1,r,J_1\cdots J_N)\). Then \( rsh(\mathcal {G}{*_N}\mathcal {G}_1 ^{\dagger }) = G G_1^{\dagger }\in {\mathbb {C}}^{r \times r}\) and

$$\begin{aligned} r = rshrank(\mathcal {F}_1) = rshrank(\mathcal {F}{*_R}(\mathcal {G}{*_N}\mathcal {G}_1 ^{\dagger })) \le rshrank(\mathcal {G}{*_N}\mathcal {G}_1 ^{\dagger }) = rank(GG_1^{\dagger }) \le r \end{aligned}$$

Hence, \( GG_1^{\dagger }\) is invertible as it has full rank. It follows that \( \mathcal {G}{*_N}\mathcal {G}_1^{\dagger }= rsh^{-1}(GG_1^{\dagger }) \) is invertible. Similarly, \( \mathcal {F}_1^{\dagger }{*_M}\mathcal {F} \) is also invertible. Let \( \mathcal {B} = \mathcal {G}{*_N}\mathcal {G}_1^{\dagger }\) and \( \mathcal {C} = \mathcal {F}_1 ^{\dagger }{*_M}\mathcal {F}\). Then

$$\begin{aligned} \mathcal {C}{*_R}\mathcal {B} = \mathcal {F}_1^{\dagger }{*_M}\mathcal {F}{*_R}\mathcal {G}{*_N}\mathcal {G}_1 ^{\dagger }= \mathcal {F}_1^{\dagger }{*_M}\mathcal {F}_1{*_R}\mathcal {G}_1{*_N}\mathcal {G}_1 ^{\dagger }= \mathcal {I}_R \end{aligned}$$

is equivalent to \( \mathcal {C} = \mathcal {B}^{-1} \). Therefore,

$$\begin{aligned} \mathcal {F}_1 = \mathcal {F}{*_R}\mathcal {B} ~~\text {and}~~ \mathcal {G}_1 = \mathcal {B}^{-1}{*_R}\mathcal {G}. \end{aligned}$$

Further

$$\begin{aligned} \mathcal {F}_1^{\dagger }= & {} ((\mathcal {F}{*_R}\mathcal {B})^* {*_M}\mathcal {F}{*_R}\mathcal {B})^{-1}{*_R}(\mathcal {F}{*_R}\mathcal {B})^*\\= & {} \mathcal {B}^{-1}{*_R}(\mathcal {F}^*{*_M}\mathcal {F})^{-1}{*_R}(\mathcal {B}^*)^{-1}{*_R}\mathcal {B}^* {*_R}\mathcal {F}^* \\= & {} \mathcal {B}^{-1}{*_R}(\mathcal {F}^*{*_M}\mathcal {F})^{-1}{*_R}\mathcal {F}^* = \mathcal {B}^{-1}{*_R}\mathcal {F}^{\dagger }. \end{aligned}$$

Similarly \( \mathcal {G}_1 ^{\dagger }= \mathcal {G}^{\dagger }{*_R}\mathcal {B}\). \(\square \)
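The construction in the proof of Theorem 11 can also be traced numerically: given two full rank decompositions, the linking tensor is \(\mathcal {B} = \mathcal {G}{*_N}\mathcal {G}_1^{\dagger }\). Below is a reshaped-matrix sketch, where the invertible factor used to generate the second decomposition is an arbitrary choice of ours.

```python
import numpy as np

rng = np.random.default_rng(7)
m, n, r = 6, 5, 3
F = rng.standard_normal((m, r))                   # full column rank factor (reshaped F)
G = rng.standard_normal((r, n))                   # full row rank factor (reshaped G)
A = F @ G

P = rng.standard_normal((r, r)) + r * np.eye(r)   # some invertible r x r tensor (reshaped)
F1, G1 = F @ P, np.linalg.inv(P) @ G              # a second full rank decomposition of A

B = G @ np.linalg.pinv(G1)                        # linking tensor B = G *_N G1^dagger
print(np.allclose(A, F1 @ G1))                    # same tensor A
print(np.allclose(F1, F @ B))                     # F1 = F *_R B
print(np.allclose(G1, np.linalg.inv(B) @ G))      # G1 = B^{-1} *_R G
print(np.allclose(np.linalg.pinv(F1), np.linalg.inv(B) @ np.linalg.pinv(F)))  # F1^dagger = B^{-1} F^dagger
```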

3 Reverse order law

In this section, we present various necessary and sufficient conditions for the reverse-order law for the weighted Moore–Penrose inverses of tensors. The first result obtained below gives a sufficient condition for the reverse-order law.

Theorem 12

Let \(\mathcal {A}\in {\mathbb {C}}^{I_1\times \cdots \times I_M \times J_1 \times \cdots \times J_N }\) and \(\mathcal {B}\in {\mathbb {C}}^{J_1\times \cdots \times J_N \times K_1\times \cdots \times K_L }\). Let \(\mathcal {M} \in {\mathbb {C}}^{I_1\times \cdots \times I_M \times I_1 \times \cdots \times I_M},\) and \(\mathcal {N} \in {\mathbb {C}}^{K_1\times \cdots \times K_L \times K_1 \times \cdots \times K_L}\) be a pair of Hermitian positive definite tensors. If \( {\mathfrak {R}}(\mathcal {B}) = {\mathfrak {R}}(\mathcal {A}^*) \), then

$$\begin{aligned} (\mathcal {A}{*_N}\mathcal {B})^{\dagger }_{\mathcal {M},\mathcal {N}} = \mathcal {B}^{\dagger }_{\mathcal {I}_N,\mathcal {N}}{*_N}\mathcal {A}^{\dagger }_{\mathcal {M},\mathcal {I}_N}. \end{aligned}$$

Proof

Let \(\mathcal {X} =\mathcal {A}^\dag *_M\mathcal {A}*_N\mathcal {B} \) and \(\mathcal {Y} =\mathcal {A}*_N\mathcal {B}*_L\mathcal {B}^\dag \). Using Lemma 5(c), we get

$$\begin{aligned}&{\mathfrak {R}}[(\mathcal {X}^\dag )^*] = {\mathfrak {R}}[(\mathcal {A}^\dag *_M\mathcal {A})^* *_N\mathcal {B}] \subseteq {\mathfrak {R}}(\mathcal {A}^*)\\&\quad ~~\text {and}~~{\mathfrak {R}}(\mathcal {Y}^\dag ) ={\mathfrak {R}}(\mathcal {B}*_L\mathcal {B}^\dag *_N\mathcal {A}^*) \subseteq {\mathfrak {R}}(\mathcal {B}). \end{aligned}$$

Similarly, from Eq. (12), we have

$$\begin{aligned} {\mathfrak {R}}(\mathcal {X}^*) ={\mathfrak {R}}(\mathcal {B}^* *_N\mathcal {A}^* *_M(\mathcal {A}^*)^\dag ) ={\mathfrak {R}}[(\mathcal {A}*_N\mathcal {B})^*] ~~\text { and}~~ {\mathfrak {R}}(\mathcal {Y}) ={\mathfrak {R}}(\mathcal {A}*_N\mathcal {B}). \end{aligned}$$

Further, from Lemma 5[(a), (b)] and Lemma 6[(a), (b)], we obtain

$$\begin{aligned} \mathcal {X}^\dag *_N\mathcal {Y}^\dag = \mathcal {X}^\dag *_N\mathcal {X}*_L\mathcal {B}^\dag *_N\mathcal {Y}^\dag = (\mathcal {A}*_N\mathcal {B})^\dag *_M\mathcal {Y}*_N\mathcal {Y}^\dag = (\mathcal {A}*_N\mathcal {B})^\dag , \end{aligned}$$

i.e.,

$$\begin{aligned} (\mathcal {A}*_N\mathcal {B})^\dag = (\mathcal {A}^{\dagger }{*_M}\mathcal {A}{*_N}\mathcal {B})^{\dagger }{*_N}(\mathcal {A}{*_N}\mathcal {B}{*_L}\mathcal {B}^{\dagger })^{\dagger }. \end{aligned}$$
(28)

Let \(\mathcal {A}_1 = \mathcal {M}^{1/2}{*_M}\mathcal {A} \) and \( \mathcal {B}_1 = \mathcal {B}{*_L}\mathcal {N}^{-1/2} \). Using Lemma 11(a, b), we get

$$\begin{aligned} \mathcal {X} =\mathcal {A}_1^\dag *_M\mathcal {A}_1*_N\mathcal {B} ~~\text {and}~~ \mathcal {Y} =\mathcal {A}*_N\mathcal {B}_1*_L\mathcal {B}_1^\dag . \end{aligned}$$

Now, replacing \( \mathcal {A}\) and \(\mathcal {B} \) by \( \mathcal {A}_1 \) and \(\mathcal {B}_1 \), respectively, in Eq. (28), we get

$$\begin{aligned} (\mathcal {A}_1{*_N}\mathcal {B}_1)^{\dagger }= (\mathcal {X}{*_L}\mathcal {N}^{-1/2})^{\dagger }{*_N}(\mathcal {M}^{1/2}{*_M}\mathcal {Y})^{\dagger }. \end{aligned}$$

Thus, from Corollary 2, we can conclude

$$\begin{aligned} (\mathcal {A}{*_N}\mathcal {B})^{\dagger }_{\mathcal {M},\mathcal {N}} = \mathcal {N}^{-1/2}{*_L}(\mathcal {A}_1{*_N}\mathcal {B}_1)^{\dagger }{*_M}\mathcal {M}^{1/2} = \mathcal {X}^{\dagger }_{\mathcal {I}_N,\mathcal {N}}{*_N}\mathcal {Y}^{\dagger }_{\mathcal {M},\mathcal {I}_N}. \end{aligned}$$

From the given condition and Lemma 5[(b), (c)], we have \( \mathcal {B}{*_L}\mathcal {B}^{\dagger }= \mathcal {A}^{\dagger }{*_M}\mathcal {A} \), i.e.,

$$\begin{aligned} \mathcal {A} = \mathcal {A}{*_N}\mathcal {B}{*_L}\mathcal {B}^{\dagger }= \mathcal {Y} ~~\text {and}~~ \mathcal {B} = \mathcal {A}^{\dagger }{*_M}\mathcal {A}{*_N}\mathcal {B} = \mathcal {X}. \end{aligned}$$

Hence, \( (\mathcal {A}{*_N}\mathcal {B})^{\dagger }_{\mathcal {M},\mathcal {N}} = \mathcal {B}^{\dagger }_{\mathcal {I}_N,\mathcal {N}}{*_N}\mathcal {A}^{\dagger }_{\mathcal {M},\mathcal {I}_N} \). \(\square \)
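Theorem 12 can be illustrated numerically by generating \(\mathcal {B}\) so that \({\mathfrak {R}}(\mathcal {B}) = {\mathfrak {R}}(\mathcal {A}^*)\) and comparing both sides of the reverse-order law in reshaped form. Below is a minimal sketch; the construction of \(\mathcal {B}\) from \(\mathcal {A}^*\) and a generic full-row-rank factor is our choice of test data.

```python
import numpy as np

def herm_sqrt(P):
    w, V = np.linalg.eigh(P)
    return (V * np.sqrt(w)) @ V.conj().T

def random_hpd(k, rng):
    B = rng.standard_normal((k, k)) + 1j * rng.standard_normal((k, k))
    return B @ B.conj().T + k * np.eye(k)

rng = np.random.default_rng(5)
m, n, k = 3, 5, 4                     # reshaped sizes of the I-, J- and K-index groups
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))

# Force the range condition R(B) = R(A^*) by taking B = A^* W with W of full row rank.
W = rng.standard_normal((m, k)) + 1j * rng.standard_normal((m, k))
B = A.conj().T @ W

M, N = random_hpd(m, rng), random_hpd(k, rng)
Mh, Nh = herm_sqrt(M), herm_sqrt(N)
Mh_i, Nh_i = np.linalg.inv(Mh), np.linalg.inv(Nh)

lhs = Nh_i @ np.linalg.pinv(Mh @ (A @ B) @ Nh_i) @ Mh                     # (A *_N B)^dagger_{M,N}, Eq. (18)
rhs = (Nh_i @ np.linalg.pinv(B @ Nh_i)) @ (np.linalg.pinv(Mh @ A) @ Mh)   # B^dagger_{I,N} *_N A^dagger_{M,I}
print(np.allclose(lhs, rhs))
```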

Further, using Theorem 3.30 in Panigrahy et al. (2020), one can write a necessary and sufficient condition for the reverse-order law for arbitrary-order tensors: for \(\mathcal {A}\in {\mathbb {C}}^{I_1\times \cdots \times I_M \times J_1\times \cdots \times J_N }\) and \(\mathcal {B}\in {\mathbb {C}}^{J_1\times \cdots \times J_N \times K_1\times \cdots \times K_L}\), the identity \((\mathcal {A}{*_N}\mathcal {B})^{{\dagger }} = \mathcal {B}^{{\dagger }} {*_N}\mathcal {A}^{{\dagger }}\) holds if and only if

$$\begin{aligned} \mathcal {A}^{{\dagger }}{*_M}\mathcal {A}{*_N}\mathcal {B}{*_L}\mathcal {B}^* {*_N}\mathcal {A}^* = \mathcal {B}{*_L}\mathcal {B}^* {*_N}\mathcal {A}^*, ~~and ~~ \mathcal {B}{*_L}\mathcal {B}^{{\dagger }}{*_N}\mathcal {A}^*{*_M}\mathcal {A}{*_N}\mathcal {B} = \mathcal {A}^* {*_M}\mathcal {A} {*_N}\mathcal {B}. \end{aligned}$$

Now, utilizing the above result and Lemma 5(a), (c), we obtain the following necessary and sufficient condition for the reverse-order law for the Moore–Penrose inverse of arbitrary-order tensors.

Lemma 13

Let \(\mathcal {A}\in {\mathbb {C}}^{I_1\times \cdots \times I_M \times J_1\times \cdots \times J_N }\) and \(\mathcal {B}\in {\mathbb {C}}^{J_1\times \cdots \times J_N \times K_1\times \cdots \times K_L}\). The reverse-order law holds for the Moore–Penrose inverse, i.e., \( (\mathcal {A}*_N\mathcal {B})^\dag = \mathcal {B}^\dag *_N\mathcal {A}^\dag \), if and only if

\( {\mathfrak {R}}(\mathcal {A}^* *_M\mathcal {A}*_N\mathcal {B})\subseteq {\mathfrak {R}}(\mathcal {B}) \) and \( {\mathfrak {R}}(\mathcal {B}*_L\mathcal {B}^* *_N\mathcal {A}^*) \subseteq {\mathfrak {R}}(\mathcal {A}^*)\).
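In reshaped form, a range inclusion \({\mathfrak {R}}(\mathcal {X}) \subseteq {\mathfrak {R}}(\mathcal {Y})\) can be tested numerically by comparing the rank of \(Y\) with that of the column-augmented matrix \([Y~~X]\). The sketch below applies this test to the two conditions of Lemma 13, for a pair constructed as in Theorem 12 so that both inclusions (and hence the reverse-order law) hold; the rank-based test and the construction are illustrative assumptions.

```python
import numpy as np

def range_contained(X, Y):
    """Test R(X) subseteq R(Y) by comparing rank(Y) with rank([Y X])."""
    return np.linalg.matrix_rank(np.hstack([Y, X])) == np.linalg.matrix_rank(Y)

rng = np.random.default_rng(6)
m, n, k = 3, 5, 4
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
W = rng.standard_normal((m, k)) + 1j * rng.standard_normal((m, k))
B = A.conj().T @ W                                 # R(B) = R(A^*) for generic W
As = A.conj().T

cond1 = range_contained(As @ A @ B, B)             # R(A^* *_M A *_N B) subseteq R(B)
cond2 = range_contained(B @ B.conj().T @ As, As)   # R(B *_L B^* *_N A^*) subseteq R(A^*)
rol = np.allclose(np.linalg.pinv(A @ B), np.linalg.pinv(B) @ np.linalg.pinv(A))
print(cond1, cond2, rol)                           # all True for this construction
```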

The primary result of this paper, which rests on the properties of the range space of arbitrary-order tensors, is presented next.

Theorem 13

Let \(\mathcal {A} \in {\mathbb {C}}^{I_1\times \cdots \times I_M \times J_1 \times \cdots \times J_N}, \mathcal {B} \in {\mathbb {C}}^{J_1\times \cdots \times J_N \times K_1 \times \cdots \times K_L}\). Let \( \mathcal {M} \in {\mathbb {C}}^{I_1\times \cdots \times I_M \times I_1 \times \cdots \times I_M}\), \(\mathcal {N} \in {\mathbb {C}}^{K_1\times \cdots \times K_L \times K_1 \times \cdots \times K_L} \) and \( \mathcal {P} \in {\mathbb {C}}^{J_1\times \cdots \times J_N \times J_1 \times \cdots \times J_N} \) be three Hermitian positive definite tensors. Then

$$\begin{aligned} (\mathcal {A}*_N\mathcal {B})_{\mathcal {M},\mathcal {N}} ^\dag = \mathcal {B}_{\mathcal {P},\mathcal {N}}^\dag *_N\mathcal {A}_{\mathcal {M},\mathcal {P}} ^{\dagger }\end{aligned}$$

if and only if

$$\begin{aligned} {\mathfrak {R}}(\mathcal {A}^{\#}_{\mathcal {P},\mathcal {M}}*_M\mathcal {A}*_N\mathcal {B}) \subseteq {\mathfrak {R}}(\mathcal {B}) ~~\text { and }~~ {\mathfrak {R}}(\mathcal {B}*_L\mathcal {B}^{\#}_{\mathcal {N},\mathcal {P}}*_N\mathcal {A}^{\#}_{\mathcal {P},\mathcal {M}}) \subseteq {\mathfrak {R}}(\mathcal {A}^{\#}_{\mathcal {P},\mathcal {M}}). \end{aligned}$$

Proof

From Eq. (18), we have \((\mathcal {A}*_N\mathcal {B})_{\mathcal {M},\mathcal {N}} ^\dag = \mathcal {B}_{\mathcal {P},\mathcal {N}}^\dag *_N\mathcal {A}_{\mathcal {M},\mathcal {P}}^\dag \) if and only if

$$\begin{aligned}&\mathcal {N}^{-1/2}*_L(\mathcal {M}^{1/2}*_M\mathcal {A}*_N\mathcal {B}*_L\mathcal {N}^{-1/2})^\dag *_M\mathcal {M}^{1/2} \\&\quad = \mathcal {N}^{-1/2}*_L(\mathcal {P}^{1/2}*_N\mathcal {B}*_L\mathcal {N}^{-1/2})^\dag *_N\mathcal {P}^{1/2} *_N\mathcal {P}^{-1/2}*_N \\&\qquad (\mathcal {M}^{1/2}*_M\mathcal {A}*_N\mathcal {P}^{-1/2})^\dag *_M\mathcal {M}^{1/2}, \end{aligned}$$

which is equivalent to

$$\begin{aligned} (\tilde{\mathcal {A}}*_N\tilde{\mathcal {B}})^\dag = \tilde{\mathcal {B}}^\dag *_N\tilde{\mathcal {A}}^\dag , \end{aligned}$$

where \( \tilde{\mathcal {A}} = \mathcal {M}^{1/2}*_M\mathcal {A}*_N\mathcal {P}^{-1/2} \) and \( \tilde{\mathcal {B}}= \mathcal {P}^{1/2}*_N\mathcal {B}*_L\mathcal {N}^{-1/2} \). From Lemma 13, we have

$$\begin{aligned} (\mathcal {A}*_N\mathcal {B})_{\mathcal {M},\mathcal {N}} ^\dag = \mathcal {B}_{\mathcal {P},\mathcal {N}}^\dag *_N\mathcal {A}_{\mathcal {M},\mathcal {P}}^\dag \end{aligned}$$

if and only if

$$\begin{aligned} {\mathfrak {R}}(\tilde{\mathcal {A}}^* *_M\tilde{\mathcal {A}}*_N\tilde{\mathcal {B}}) \subseteq {\mathfrak {R}}(\tilde{\mathcal {B}}) ~~\text {and}~~ {\mathfrak {R}}(\tilde{\mathcal {B}}*_L\tilde{\mathcal {B}}^* *_N\tilde{\mathcal {A}}^*) \subseteq {\mathfrak {R}}(\tilde{\mathcal {A}}^*), \end{aligned}$$
(29)

which holds if and only if

$$\begin{aligned}&{\mathfrak {R}}(\mathcal {P}^{1/2}*_N\mathcal {A}^{\#}_{\mathcal {P},\mathcal {M}}*_M\mathcal {A}*_N\mathcal {B}*_L\mathcal {N}^{-1/2}) \subseteq {\mathfrak {R}}(\mathcal {P}^{1/2}*_N\mathcal {B}*_L\mathcal {N}^{-1/2})\\&\quad and~~ {\mathfrak {R}}(\mathcal {P}^{1/2}*_N\mathcal {B}*_L\mathcal {B}^{\#}_{\mathcal {N},\mathcal {P}}*_N\mathcal {A}^{\#}_{\mathcal {P},\mathcal {M}}*_M\mathcal {M}^{-1/2}) \subseteq {\mathfrak {R}}(\mathcal {P}^{1/2}*_N\mathcal {A}^{\#}_{\mathcal {P},\mathcal {M}}*_M\mathcal {M}^{-1/2}). \end{aligned}$$

Hence, \( (\mathcal {A}*_N\mathcal {B})_{\mathcal {M},\mathcal {N}} ^\dag = \mathcal {B}_{\mathcal {P},\mathcal {N}}^\dag *_N\mathcal {A}_{\mathcal {M},\mathcal {P}} ^{\dagger }\) if and only if

$$\begin{aligned} {\mathfrak {R}}(\mathcal {A}^{\#}_{\mathcal {P},\mathcal {M}}*_M\mathcal {A}*_N\mathcal {B}) \subseteq {\mathfrak {R}}(\mathcal {B}) \text { ~~and~~} {\mathfrak {R}}(\mathcal {B}*_L\mathcal {B}^{\#}_{\mathcal {N},\mathcal {P}}*_N\mathcal {A}^{\#}_{\mathcal {P},\mathcal {M}}) \subseteq {\mathfrak {R}}(\mathcal {A}^{\#}_{\mathcal {P},\mathcal {M}}). \end{aligned}$$

This completes the proof. \(\square \)
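Analogously to Theorem 12, one convenient way to satisfy both conditions of Theorem 13 is to generate \(\mathcal {B}\) with \({\mathfrak {R}}(\mathcal {B}) = {\mathfrak {R}}(\mathcal {A}^{\#}_{\mathcal {P},\mathcal {M}})\). The following reshaped sketch does exactly this and then compares \((\mathcal {A}*_N\mathcal {B})^{\dagger }_{\mathcal {M},\mathcal {N}}\) with \(\mathcal {B}^{\dagger }_{\mathcal {P},\mathcal {N}}*_N\mathcal {A}^{\dagger }_{\mathcal {M},\mathcal {P}}\), both computed via Eq. (18); the particular construction of \(\mathcal {B}\) is our assumption for illustration.

```python
import numpy as np

def herm_sqrt(X):
    w, V = np.linalg.eigh(X)
    return (V * np.sqrt(w)) @ V.conj().T

def random_hpd(k, rng):
    B = rng.standard_normal((k, k)) + 1j * rng.standard_normal((k, k))
    return B @ B.conj().T + k * np.eye(k)

def wmp(A, M, N):
    """Weighted Moore-Penrose inverse via Eq. (18), in reshaped (matrix) form."""
    Mh, Nh = herm_sqrt(M), herm_sqrt(N)
    Nh_i = np.linalg.inv(Nh)
    return Nh_i @ np.linalg.pinv(Mh @ A @ Nh_i) @ Mh

rng = np.random.default_rng(8)
m, n, k = 3, 5, 4
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
M, P, N = random_hpd(m, rng), random_hpd(n, rng), random_hpd(k, rng)

# Weighted conjugate transpose A^#_{P,M} = P^{-1} A^* M (Definition 9, reshaped)
A_sharp = np.linalg.inv(P) @ A.conj().T @ M

# Choose B with R(B) = R(A^#_{P,M}); then both conditions of Theorem 13 hold.
B = A_sharp @ (rng.standard_normal((m, k)) + 1j * rng.standard_normal((m, k)))

lhs = wmp(A @ B, M, N)            # (A *_N B)^dagger_{M,N}
rhs = wmp(B, P, N) @ wmp(A, M, P) # B^dagger_{P,N} *_N A^dagger_{M,P}
print(np.allclose(lhs, rhs))
```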

As a corollary to Theorem 13, we present another reverse order law for the weighted Moore–Penrose inverse of arbitrary-order tensor.

Corollary 5

Let \(\mathcal {A} \in {\mathbb {C}}^{I_1\times \cdots \times I_M \times J_1 \times \cdots \times J_N}, \mathcal {B} \in {\mathbb {C}}^{J_1\times \cdots \times J_N \times K_1 \times \cdots \times K_L}\). Let \( \mathcal {M} \in {\mathbb {C}}^{I_1\times \cdots \times I_M \times I_1 \times \cdots \times I_M}\), \(\mathcal {N} \in {\mathbb {C}}^{K_1\times \cdots \times K_L \times K_1 \times \cdots \times K_L} \) and \( \mathcal {P} \in {\mathbb {C}}^{J_1\times \cdots \times J_N \times J_1 \times \cdots \times J_N} \) be three Hermitian positive definite tensors. Then

$$\begin{aligned} (\mathcal {A}*_N\mathcal {B})_{\mathcal {M},\mathcal {N}} ^\dag = \mathcal {B}_{\mathcal {P},\mathcal {N}}^\dag *_N\mathcal {A}_{\mathcal {M},\mathcal {P}}^\dag \end{aligned}$$

if and only if

$$\begin{aligned}&\mathcal {A}_{\mathcal {M},\mathcal {P}}^\dag *_M\mathcal {A}*_N\mathcal {B}*_L\mathcal {B}^{\#}_{\mathcal {N},\mathcal {P}}*_N\mathcal {A}^{\#}_{\mathcal {P},\mathcal {M}} = \mathcal {B}*_L\mathcal {B}^{\#}_{\mathcal {N},\mathcal {P}}*_N\mathcal {A}^{\#}_{\mathcal {P},\mathcal {M}}\\&\quad \text {and~~} \mathcal {B}*_L\mathcal {B}_{\mathcal {P},\mathcal {N}}^\dag *_N\mathcal {A}^{\#}_{\mathcal {P},\mathcal {M}}*_M\mathcal {A}*_N\mathcal {B} = \mathcal {A}^{\#}_{\mathcal {P},\mathcal {M}}*_M\mathcal {A}*_N\mathcal {B}. \end{aligned}$$

Proof

From Theorem 13, Eq. (29) and Lemma 5(a), we have \( (\mathcal {A}*_N\mathcal {B})_{\mathcal {M},\mathcal {N}} ^\dag = \mathcal {B}_{\mathcal {P},\mathcal {N}}^\dag *_N\mathcal {A}_{\mathcal {M},\mathcal {P}}^\dag \) if and only if

$$\begin{aligned}&(\mathcal {P}^{1/2}*_N\mathcal {B}*_L\mathcal {N}^{-1/2})*_L (\mathcal {P}^{1/2}*_N\mathcal {B}*_L\mathcal {N}^{-1/2})^\dag *_N\mathcal {P}^{1/2}*_N\mathcal {A}^{\#}_{\mathcal {P},\mathcal {M}}*_M\mathcal {A}*_N\mathcal {B}*_L\mathcal {N}^{-1/2} \\&\quad = \mathcal {P}^{1/2}*_N\mathcal {A}^{\#}_{\mathcal {P},\mathcal {M}}*_M\mathcal {A}*_N\mathcal {B}*_L\mathcal {N}^{-1/2} \end{aligned}$$

and

$$\begin{aligned}&(\mathcal {P}^{1/2}*_N\mathcal {A}^{\#}_{\mathcal {P},\mathcal {M}}*_M\mathcal {M}^{-1/2})*_M (\mathcal {P}^{1/2}*_N\mathcal {A}^{\#}_{\mathcal {P},\mathcal {M}}*_M \\&\qquad \mathcal {M}^{-1/2})^\dag *_N\mathcal {P}^{1/2}*_N\mathcal {B}*_L\mathcal {B}^{\#}_{\mathcal {N},\mathcal {P}}*_N\mathcal {A}^{\#}_{\mathcal {P},\mathcal {M}}*_M\mathcal {M}^{-1/2}\\&\quad = \mathcal {P}^{1/2}*_N\mathcal {B}*_L\mathcal {B}^{\#}_{\mathcal {N},\mathcal {P}}*_N\mathcal {A}^{\#}_{\mathcal {P},\mathcal {M}}*_M\mathcal {M}^{-1/2}, \end{aligned}$$

i.e., if and only if

$$\begin{aligned}&\mathcal {B}*_L\mathcal {B}_{\mathcal {P},\mathcal {N}}^\dag *_N\mathcal {A}^{\#}_{\mathcal {P},\mathcal {M}}*_M\mathcal {A}*_N\mathcal {B} = \mathcal {A}^{\#}_{\mathcal {P},\mathcal {M}}*_M\mathcal {A}*_N\mathcal {B} ~~~\text {and} \\&\qquad [(\mathcal {M}^{1/2}*_M\mathcal {A}*_N\mathcal {P}^{-1/2})^\dag *_M(\mathcal {M}^{1/2}*_M\mathcal {A}*_N \\&\qquad \mathcal {P}^{-1/2})]^* *_N\mathcal {P}^{1/2}*_N\mathcal {B}*_L\mathcal {B}^{\#}_{\mathcal {N},\mathcal {P}}*_N\mathcal {A}^{\#}_{\mathcal {P},\mathcal {M}}*_M\mathcal {M}^{-1/2}\\&\quad = \mathcal {P}^{1/2}*_N\mathcal {B}*_L\mathcal {B}^{\#}_{\mathcal {N},\mathcal {P}}*_N\mathcal {A}^{\#}_{\mathcal {P},\mathcal {M}}*_M\mathcal {M}^{-1/2}, \end{aligned}$$

i.e., if and only if

$$\begin{aligned}&\mathcal {B}*_L\mathcal {B}_{\mathcal {P},\mathcal {N}}^\dag *_N\mathcal {A}^{\#}_{\mathcal {P},\mathcal {M}}*_M\mathcal {A}*_N\mathcal {B} = \mathcal {A}^{\#}_{\mathcal {P},\mathcal {M}}*_M\mathcal {A}*_N\mathcal {B}\\&\quad ~~\text {and}~~ \mathcal {A}_{\mathcal {M},\mathcal {P}}^\dag *_M\mathcal {A}*_N\mathcal {B}*_L\mathcal {B}^{\#}_{\mathcal {N},\mathcal {P}}*_N\mathcal {A}^{\#}_{\mathcal {P},\mathcal {M}} = \mathcal {B}*_L\mathcal {B}^{\#}_{\mathcal {N},\mathcal {P}}*_N\mathcal {A}^{\#}_{\mathcal {P},\mathcal {M}}. \end{aligned}$$

This completes the proof. \(\square \)

Theorem 14

Let \( \mathcal {A} \in {\mathbb {C}}^{I_1\times \cdots \times I_M \times J_1 \times \cdots \times J_N}, \mathcal {B} \in {\mathbb {C}}^{J_1\times \cdots \times J_N \times K_1 \times \cdots \times K_L}\). Let \( \mathcal {M} \in {\mathbb {C}}^{I_1\times \cdots \times I_M \times I_1 \times \cdots \times I_M}\), \(\mathcal {N} \in {\mathbb {C}}^{K_1\times \cdots \times K_L \times K_1 \times \cdots \times K_L} \) and \( \mathcal {P} \in {\mathbb {C}}^{J_1\times \cdots \times J_N \times J_1 \times \cdots \times J_N} \) be Hermitian positive definite tensors. Then

$$\begin{aligned} (\mathcal {A}{*_N}\mathcal {B})^{\dagger }_{\mathcal {M},\mathcal {N}} = \mathcal {B}^{\dagger }_{\mathcal {P},\mathcal {N}}{*_N}\mathcal {A}^{\dagger }_{\mathcal {M},\mathcal {P}} \end{aligned}$$

if and only if

$$\begin{aligned} (\mathcal {A}^{\dagger }_{\mathcal {M},\mathcal {P}}{*_M}\mathcal {A}{*_N}\mathcal {B})^{\dagger }_{\mathcal {P},\mathcal {N}}= & {} \mathcal {B}^{\dagger }_{\mathcal {P},\mathcal {N}}{*_N}\mathcal {A}^{\dagger }_{\mathcal {M},\mathcal {P}}{*_M}\mathcal {A} ~~\text {and}~~\\ (\mathcal {A}{*_N}\mathcal {B}{*_L}\mathcal {B}^{\dagger }_{\mathcal {P},\mathcal {N}})^{\dagger }_{\mathcal {M},\mathcal {P}}= & {} \mathcal {B}{*_L}\mathcal {B}^{\dagger }_{\mathcal {P},\mathcal {N}}{*_N}\mathcal {A}^{\dagger }_{\mathcal {M},\mathcal {P}}. \end{aligned}$$

Proof

Suppose that \( (\mathcal {A}{*_N}\mathcal {B})^{\dagger }_{\mathcal {M},\mathcal {N}} = \mathcal {B}^{\dagger }_{\mathcal {P},\mathcal {N}}{*_N}\mathcal {A}^{\dagger }_{\mathcal {M},\mathcal {P}}.\) Then one can write

$$\begin{aligned} (\mathcal {A}^{\dagger }_{\mathcal {M},\mathcal {P}}{*_M}\mathcal {A}{*_N}\mathcal {B}){*_L}(\mathcal {B}^{\dagger }_{\mathcal {P},\mathcal {N}}{*_N}\mathcal {A}^{\dagger }_{\mathcal {M},\mathcal {P}}{*_M}\mathcal {A}) {*_N}(\mathcal {A}^{\dagger }_{\mathcal {M},\mathcal {P}}{*_M}\mathcal {A}{*_N}\mathcal {B}) = \mathcal {A}^{\dagger }_{\mathcal {M},\mathcal {P}}{*_M}\mathcal {A}{*_N}\mathcal {B}. \end{aligned}$$

Further, we can write

$$\begin{aligned}&(\mathcal {B}^{\dagger }_{\mathcal {P},\mathcal {N}}{*_N}\mathcal {A}^{\dagger }_{\mathcal {M},\mathcal {P}}{*_M}\mathcal {A}){*_N}(\mathcal {A}^{\dagger }_{\mathcal {M},\mathcal {P}}{*_M}\mathcal {A}{*_N}\mathcal {B}){*_L}(\mathcal {B}^{\dagger }_{\mathcal {P},\mathcal {N}}{*_N}\mathcal {A}^{\dagger }_{\mathcal {M},\mathcal {P}}{*_M}\mathcal {A}) \\&\quad = \mathcal {B}^{\dagger }_{\mathcal {P},\mathcal {N}}{*_N}\mathcal {A}^{\dagger }_{\mathcal {M},\mathcal {P}}{*_M}\mathcal {A}. \\&\quad \text {Also~}, [\mathcal {P} {*_N}(\mathcal {A}^{\dagger }_{\mathcal {M},\mathcal {P}}{*_M}\mathcal {A}{*_N}\mathcal {B}){*_L}(\mathcal {B}^{\dagger }_{\mathcal {P},\mathcal {N}}{*_N}\mathcal {A}^{\dagger }_{\mathcal {M},\mathcal {P}}{*_M}\mathcal {A})]^* \\&\qquad = [(\mathcal {P}{*_N}\mathcal {A}^{\dagger }_{\mathcal {M},\mathcal {P}}{*_M}\mathcal {A}){*_N}\mathcal {P}^{-1}{*_N}(\mathcal {P}{*_N}\mathcal {B}{*_L}\mathcal {B}^{\dagger }_{\mathcal {P},\mathcal {N}}){*_N}\mathcal {P}^{-1}{*_N}(\mathcal {P}{*_N}\mathcal {A}^{\dagger }_{\mathcal {M},\mathcal {P}}{*_M}\mathcal {A})]^* \\&\qquad = (\mathcal {P}{*_N}\mathcal {A}^{\dagger }_{\mathcal {M},\mathcal {P}}{*_M}\mathcal {A}){*_N}\mathcal {P}^{-1}{*_N}(\mathcal {P}{*_N}\mathcal {B}{*_L}\mathcal {B}^{\dagger }_{\mathcal {P},\mathcal {N}}){*_N}\mathcal {P}^{-1}{*_N}(\mathcal {P}{*_N}\mathcal {A}^{\dagger }_{\mathcal {M},\mathcal {P}}{*_M}\mathcal {A})\\&\qquad = \mathcal {P}{*_N}(\mathcal {A}^{\dagger }_{\mathcal {M},\mathcal {P}}{*_M}\mathcal {A}{*_N}\mathcal {B}){*_L}(\mathcal {B}^{\dagger }_{\mathcal {P},\mathcal {N}}{*_N}\mathcal {A}^{\dagger }_{\mathcal {M},\mathcal {P}}{*_M}\mathcal {A}), \\&\quad \text {and~~} [\mathcal {N}{*_L}(\mathcal {B}^{\dagger }_{\mathcal {P},\mathcal {N}}{*_N}\mathcal {A}^{\dagger }_{\mathcal {M},\mathcal {P}}{*_M}\mathcal {A}) {*_N}(\mathcal {A}^{\dagger }_{\mathcal {M},\mathcal {P}}{*_M}\mathcal {A}{*_N}\mathcal {B})]^* \\&\qquad = [\mathcal {N}{*_L}(\mathcal {B}^{\dagger }_{\mathcal {P},\mathcal {N}}{*_N}\mathcal {A}^{\dagger }_{\mathcal {M},\mathcal {P}}){*_M}\mathcal {A}{*_N}\mathcal {B})]^* \\&\qquad = \mathcal {N}{*_L}(\mathcal {B}^{\dagger }_{\mathcal {P},\mathcal {N}}{*_N}\mathcal {A}^{\dagger }_{\mathcal {M},\mathcal {P}}{*_M}\mathcal {A}){*_N}(\mathcal {A}^{\dagger }_{\mathcal {M},\mathcal {P}}{*_M}\mathcal {A}{*_N}\mathcal {B}). \end{aligned}$$

Hence,

$$\begin{aligned} (\mathcal {A}^{\dagger }_{\mathcal {M},\mathcal {P}}{*_M}\mathcal {A}{*_N}\mathcal {B})^{\dagger }_{\mathcal {P},\mathcal {N}} = \mathcal {B}^{\dagger }_{\mathcal {P},\mathcal {N}}{*_N}\mathcal {A}^{\dagger }_{\mathcal {M},\mathcal {P}}{*_M}\mathcal {A}. \end{aligned}$$

By similar arguments, one can also show that \( (\mathcal {A}{*_N}\mathcal {B}{*_L}\mathcal {B}^{\dagger }_{\mathcal {P},\mathcal {N}})^{\dagger }_{\mathcal {M},\mathcal {P}} = \mathcal {B}{*_L}\mathcal {B}^{\dagger }_{\mathcal {P},\mathcal {N}}{*_N}\mathcal {A}^{\dagger }_{\mathcal {M},\mathcal {P}} \).

For the converse, we first establish an identity.

From Eq. (18) and Eq. (28), we have

$$\begin{aligned}&(\mathcal {A}{*_N}\mathcal {B})^{\dagger }_{\mathcal {M},\mathcal {N}} \\&\quad = \mathcal {N}^{-1/2}{*_L}[(\mathcal {M}^{1/2}{*_M}\mathcal {A}{*_N}\mathcal {P}^{-1/2}){*_N}(\mathcal {P}^{1/2}{*_N}\mathcal {B}{*_L}\mathcal {N}^{-1/2})]^{\dagger }{*_M}\mathcal {M}^{1/2}\\&\quad = \mathcal {N}^{-1/2}{*_L}[(\mathcal {M}^{1/2}{*_M}\mathcal {A}{*_N}\mathcal {P}^{-1/2})^{\dagger }{*_M}(\mathcal {M}^{1/2}{*_M}\mathcal {A}{*_N}\mathcal {P}^{-1/2}){*_N}(\mathcal {P}^{1/2}{*_N}\mathcal {B}{*_L}\mathcal {N}^{-1/2})]^{\dagger }\\&\qquad {*_N}[(\mathcal {M}^{1/2}{*_M}\mathcal {A}{*_N}\mathcal {P}^{-1/2}){*_N}(\mathcal {P}^{1/2}{*_N}\mathcal {B}{*_L}\mathcal {N}^{-1/2}){*_L}(\mathcal {P}^{1/2}{*_N}\mathcal {B}{*_L}\mathcal {N}^{-1/2})^{\dagger }]^{\dagger }{*_M}\mathcal {M}^{1/2}\\&\quad = \mathcal {N}^{-1/2}{*_L}[\mathcal {P}^{1/2}{*_N}\mathcal {A}^{\dagger }_{\mathcal {M},\mathcal {P}}{*_M}\mathcal {A}{*_N}\mathcal {B}{*_L}\mathcal {N}^{-1/2}]^{\dagger }{*_N}\mathcal {P}^{1/2}{*_N}\mathcal {P}^{-1/2}{*_N}\\&\qquad [\mathcal {M}^{1/2}{*_M}\mathcal {A}{*_N}\mathcal {B}{*_L}\mathcal {B}^{\dagger }_{\mathcal {P},\mathcal {N}}{*_N}\mathcal {P}^{-1/2}]^{\dagger }{*_M}\mathcal {M}^{1/2}\\&\quad = (\mathcal {A}^{\dagger }_{\mathcal {M},\mathcal {P}}{*_M}\mathcal {A}{*_N}\mathcal {B})^{\dagger }_{\mathcal {P},\mathcal {N}}{*_N}(\mathcal {A}{*_N}\mathcal {B}{*_L}\mathcal {B}^{\dagger }_{\mathcal {P},\mathcal {N}})^{\dagger }_{\mathcal {M},\mathcal {P}}. \end{aligned}$$

Further, using the given hypothesis and the above identity, we can write

$$\begin{aligned} (\mathcal {A}{*_N}\mathcal {B})^{\dagger }_{\mathcal {M},\mathcal {N}}= & {} (\mathcal {B}^{\dagger }_{\mathcal {P},\mathcal {N}}{*_N}\mathcal {A}^{\dagger }_{\mathcal {M},\mathcal {P}}{*_M}\mathcal {A}){*_N}( \mathcal {B}{*_L}\mathcal {B}^{\dagger }_{\mathcal {P},\mathcal {N}}{*_N}\mathcal {A}^{\dagger }_{\mathcal {M},\mathcal {P}})\\= & {} (\mathcal {B}^{\dagger }_{\mathcal {P},\mathcal {N}}{*_N}\mathcal {A}^{\dagger }_{\mathcal {M},\mathcal {P}}{*_M}\mathcal {A}){*_N}(\mathcal {A}^{\dagger }_{\mathcal {M},\mathcal {P}}{*_M}\mathcal {A}{*_N}\mathcal {B}){*_L}(\mathcal {B}^{\dagger }_{\mathcal {P},\mathcal {N}}{*_N}\\&\quad \mathcal {A}^{\dagger }_{\mathcal {M},\mathcal {P}}{*_M}\mathcal {A}){*_N}\mathcal {A}^{\dagger }_{\mathcal {M},\mathcal {P}}\\= & {} \mathcal {B}^{\dagger }_{\mathcal {P},\mathcal {N}}{*_N}\mathcal {A}^{\dagger }_{\mathcal {M},\mathcal {P}}. \end{aligned}$$

This completes the proof. \(\square \)

In the next theorem, we develop a characterization of the weighted Moore–Penrose inverse of the product of arbitrary-order tensors \(\mathcal {A}\) and \(\mathcal {B}\), as follows.

Theorem 15

Let \(\mathcal {A} \in {\mathbb {C}}^{I_1\times \cdots \times I_M \times J_1 \times \cdots \times J_N}, \mathcal {B} \in {\mathbb {C}}^{J_1\times \cdots \times J_N \times K_1 \times \cdots \times K_L}\). Let \( \mathcal {M} \in {\mathbb {C}}^{I_1\times \cdots \times I_M \times I_1 \times \cdots \times I_M}\), \(\mathcal {N} \in {\mathbb {C}}^{K_1\times \cdots \times K_L \times K_1 \times \cdots \times K_L} \) and \( \mathcal {P} \in {\mathbb {C}}^{J_1\times \cdots \times J_N \times J_1 \times \cdots \times J_N} \) be three Hermitian positive definite tensors. Then

$$\begin{aligned} (\mathcal {A}*_N\mathcal {B})^\dag _{\mathcal {M},\mathcal {N}} = (\mathcal {B}_1)_{\mathcal {P},\mathcal {N}}^\dag *_N(\mathcal {A}_1)_{\mathcal {M},\mathcal {P}}^\dag , \end{aligned}$$

where \( \mathcal {A}_1 = \mathcal {A}*_N\mathcal {B}_1*_L(\mathcal {B}_1)_{\mathcal {P},\mathcal {N}}^\dag \) and \( \mathcal {B}_1 = \mathcal {A}_{\mathcal {M},\mathcal {P}}^\dag *_M\mathcal {A}*_N\mathcal {B} \).

Proof

$$\begin{aligned} \mathcal {A}*_N\mathcal {B}= & {} \mathcal {A}*_N\mathcal {A}^\dag _{\mathcal {M},\mathcal {P}}*_M\mathcal {A}*_N\mathcal {B} = \mathcal {A}*_N\mathcal {B}_1\nonumber \\= & {} \mathcal {A}*_N\mathcal {B}_1*_L(\mathcal {B}_1)^\dag _{\mathcal {P},\mathcal {N}}*_N\mathcal {B}_1 = \mathcal {A}_1*_N\mathcal {B}_1. \end{aligned}$$
(30)
$$\begin{aligned} \mathcal {A}_{\mathcal {M},\mathcal {P}}^\dag *_M\mathcal {A}_1= & {} \mathcal {A}^\dag _{\mathcal {M},\mathcal {P}}*_M\mathcal {A}*_N\mathcal {A}^\dag _{\mathcal {M},\mathcal {P}}*_M\mathcal {A}*_N\mathcal {B}*_L(\mathcal {B}_1)^\dag _{\mathcal {P},\mathcal {N}}\nonumber \\= & {} \mathcal {B}_1*_L(\mathcal {B}_1)^\dag _{\mathcal {P},\mathcal {N}}. \end{aligned}$$
(31)
$$\begin{aligned} \mathcal {A}^\dag _{\mathcal {M},\mathcal {P}}*_M\mathcal {A}_1= & {} \mathcal {A}^\dag _{\mathcal {M},\mathcal {P}}*_M\mathcal {A}_1*_N(\mathcal {A}_1)^\dag _{\mathcal {M},\mathcal {P}}*_M\mathcal {A}_1\nonumber \\= & {} \mathcal {B}_1*_L(\mathcal {B}_1)^\dag _{\mathcal {P},\mathcal {N}}*_N(\mathcal {A}_1)^\dag _{\mathcal {M},\mathcal {P}}*_M\mathcal {A}_1. \end{aligned}$$
(32)

From (31) and (32), we have

$$\begin{aligned} \mathcal {P}*_N\mathcal {B}_1*_L(\mathcal {B}_1)^\dag _{\mathcal {P},\mathcal {N}} = [\mathcal {P}*_N\mathcal {B}_1*_L(\mathcal {B}_1)^\dag _{\mathcal {P},\mathcal {N}}]*_N\mathcal {P}^{-1}*_N[\mathcal {P}*_N(\mathcal {A}_1)^\dag _{\mathcal {M},\mathcal {P}}*_M\mathcal {A}_1]. \end{aligned}$$

Therefore,

$$\begin{aligned} \mathcal {P}*_N\mathcal {B}_1*_L(\mathcal {B}_1)^\dag _{\mathcal {P},\mathcal {N}}= & {} [\mathcal {P}*_N\mathcal {B}_1*_L(\mathcal {B}_1)^\dag _{\mathcal {P},\mathcal {N}}]^* \\= & {} \mathcal {P}*_N(\mathcal {A}_1)^\dag _{\mathcal {M},\mathcal {P}}*_M\mathcal {A}_1*_N\mathcal {B}_1*_L(\mathcal {B}_1)^\dag _{\mathcal {P},\mathcal {N}}\\= & {} \mathcal {P}*_N(\mathcal {A}_1)^\dag _{\mathcal {M},\mathcal {P}}*_M\mathcal {A}_1. \end{aligned}$$

Hence,

$$\begin{aligned} {\mathcal {B}_1*_L(\mathcal {B}_1)^\dag _{\mathcal {P},\mathcal {N}} = (\mathcal {A}_1)^\dag _{\mathcal {M},\mathcal {P}}*_M\mathcal {A}_1 = \mathcal {A}^\dag _{\mathcal {M},\mathcal {P}}*_M\mathcal {A}_1}. \end{aligned}$$
(33)

Let \( \mathcal {X} = \mathcal {A}*_N\mathcal {B} \) and \( \mathcal {Y} = (\mathcal {B}_1)^\dag _{\mathcal {P},\mathcal {N}}*_N(\mathcal {A}_1)^\dag _{\mathcal {M},\mathcal {P}} \). Using (30) and (33) we obtain

$$\begin{aligned} \mathcal {X}*_L\mathcal {Y}*_M\mathcal {X}= & {} \mathcal {A}*_N\mathcal {B}*_L(\mathcal {B}_1)^\dag _{\mathcal {P},\mathcal {N}}*_N(\mathcal {A}_1)^\dag _{\mathcal {M},\mathcal {P}}*_M\mathcal {A}_1*_N\mathcal {B}_1 \\= & {} \mathcal {A}_1*_N\mathcal {B}_1*_L(\mathcal {B}_1)^\dag _{\mathcal {P},\mathcal {N}}*_N\mathcal {B}_1*_L(\mathcal {B}_1)^\dag _{\mathcal {P},\mathcal {N}}*_N\mathcal {B}_1 = \mathcal {X}, \\ \mathcal {Y}*_M\mathcal {X}*_L\mathcal {Y}= & {} (\mathcal {B}_1)^\dag _{\mathcal {P},\mathcal {N}}*_N\mathcal {B}_1*_L(\mathcal {B}_1)^\dag _{\mathcal {P},\mathcal {N}}*_N\mathcal {B}_1*_L(\mathcal {B}_1)^\dag _{\mathcal {P},\mathcal {N}}*_N(\mathcal {A}_1)^\dag _{\mathcal {M},\mathcal {P}} = \mathcal {Y}, \\ \mathcal {M}*_M\mathcal {X}*_L\mathcal {Y}= & {} \mathcal {M}*_M\mathcal {A}_1*_N(\mathcal {A}_1)^\dag _{\mathcal {M},\mathcal {P}}*_M\mathcal {A}_1*_N(\mathcal {A}_1)^\dag _{\mathcal {M},\mathcal {P}} = (\mathcal {M}*_M\mathcal {X}*_L\mathcal {Y})^* \end{aligned}$$

and

$$\begin{aligned} \mathcal {N}*_L\mathcal {Y}*_M\mathcal {X}= & {} \mathcal {N}*_L(\mathcal {B}_1)^\dag _{\mathcal {P},\mathcal {N}}*_N\mathcal {B}_1*_L(\mathcal {B}_1)^\dag _{\mathcal {P},\mathcal {N}}*_N\mathcal {B}_1 = (\mathcal {N}*_L\mathcal {Y}*_M\mathcal {X})^*. \end{aligned}$$

Hence, \( \mathcal {X}^\dag _{\mathcal {M},\mathcal {N}} = \mathcal {Y} \), i.e., \( (\mathcal {A}*_N\mathcal {B})^\dag _{\mathcal {M},\mathcal {N}} = (\mathcal {B}_1)^\dag _{\mathcal {P},\mathcal {N}}*_N(\mathcal {A}_1)^\dag _{\mathcal {M},\mathcal {P}} \). \(\square \)

We present the following example as a numerical confirmation of the above theorem.

Example 4

Let \(\mathcal {A}_1 = \mathcal {A}*_2\mathcal {B}_1*_1(\mathcal {B}_1)_{\mathcal {P},\mathcal {N}}^\dag \) and \( \mathcal {B}_1 = \mathcal {A}_{\mathcal {M},\mathcal {P}}^\dag *_1\mathcal {A}*_2\mathcal {B}\), where \(~\mathcal {A}=(a_{ijk}) \in {\mathbb {R}}^{3\times 2\times 4},~~\mathcal {B}=(b_{ijk}) \in {\mathbb {R}}^{2\times 4\times 3}, ~~\mathcal {M}=(m_{ij}) \in {\mathbb {R}}^{3\times 3}, ~~ \mathcal {N}=(n_{ij}) \in {\mathbb {R}}^{3\times 3}\) and \(\mathcal {P}=(p_{ijkl}) \in {\mathbb {R}}^{2\times 4\times 2\times 4}\) such that

$$\begin{aligned} a_{ij1}= & {} \begin{pmatrix} -1 &{} 2 \\ 1 &{} -1 \\ 0 &{} 1 \\ \end{pmatrix}, a_{ij2} = \begin{pmatrix} 1 &{} 0 \\ 0 &{} 0 \\ 1 &{} 0 \\ \end{pmatrix}, a_{ij3} = \begin{pmatrix} 2 &{} 0 \\ 1 &{} 1 \\ 0 &{} 0 \\ \end{pmatrix}, a_{ij4} = \begin{pmatrix} 3 &{} 2 \\ 1 &{} -1 \\ 0 &{} 1 \\ \end{pmatrix}, \\ b_{ij1}= & {} \begin{pmatrix} -1 &{} 2 &{} 1 &{} 1\\ 0 &{} 1 &{} 1 &{} 0\\ \end{pmatrix}, b_{ij2} = \begin{pmatrix} 0 &{} 1 &{} 1 &{} 1\\ 1 &{} 1 &{} 0 &{} 1\\ \end{pmatrix}, b_{ij3} = \begin{pmatrix} 0 &{} 1 &{} 1 &{} 1\\ 1 &{} 1 &{} 0 &{} 1\\ \end{pmatrix}, \\ \mathcal {M}= & {} \begin{pmatrix} 3 &{} 0 &{} 1 \\ 0 &{} 2 &{} 0 \\ 1 &{} 0 &{} 2 \\ \end{pmatrix}, \mathcal {N} = \begin{pmatrix} 1 &{} 1 &{} 0 \\ 1 &{} 2 &{} 0 \\ 0 &{} 0 &{} 1 \\ \end{pmatrix}, \\ p_{ij11}= & {} \begin{pmatrix} 1 &{} 0 &{} 0 &{} 1\\ 0 &{} 0 &{} 0 &{} 0\\ \end{pmatrix}, p_{ij12} = \begin{pmatrix} 0 &{} 1 &{} 0 &{} 0\\ 0 &{} 0 &{} 0 &{} 0\\ \end{pmatrix}, p_{ij13} = \begin{pmatrix} 0 &{} 0 &{} 1 &{} 0\\ 1 &{} 0 &{} 0 &{} 0\\ \end{pmatrix}, p_{ij14} = \begin{pmatrix} 1 &{} 0 &{} 0 &{} 3\\ 0 &{} 0 &{} 1 &{} 0\\ \end{pmatrix}, \\ p_{ij21}= & {} \begin{pmatrix} 0 &{} 0 &{} 1 &{} 0\\ 2 &{} 0 &{} 0 &{} 0\\ \end{pmatrix}, p_{ij22} = \begin{pmatrix} 0 &{} 0 &{} 0 &{} 0\\ 0 &{} 2 &{} 2 &{} 1\\ \end{pmatrix}, p_{ij23} = \begin{pmatrix} 0 &{} 0 &{} 0 &{} 1\\ 0 &{} 2 &{} 5 &{} 0\\ \end{pmatrix}, p_{ij24} = \begin{pmatrix} 0 &{} 0 &{} 0 &{} 0\\ 0 &{} 1 &{} 0 &{} 1\\ \end{pmatrix}. \end{aligned}$$

Then \(\mathcal {A}_1 = ({\tilde{a}}_{ijk}) \in {\mathbb {R}}^{3\times 2\times 4}\), \(\mathcal {B}_1 = ({\tilde{b}}_{ijk}) \in {\mathbb {R}}^{2\times 4\times 3}\), \((\mathcal {A}_1)^\dag _{\mathcal {M},\mathcal {P}} = (x_{ijk}) \in {\mathbb {R}}^{2\times 4\times 3}\), and \((\mathcal {B}_1)^\dag _{\mathcal {P},\mathcal {N}} = (y_{ijk}) \in {\mathbb {R}}^{3\times 2\times 4}\), where

$$\begin{aligned} {\tilde{a}}_{ij1}= & {} \begin{pmatrix} -1 &{} 2 \\ 0 &{} -1 \\ 0 &{} 1 \\ \end{pmatrix}, {\tilde{a}}_{ij2} = \begin{pmatrix} 1 &{} 0 \\ 0 &{} 1 \\ 1 &{} 0 \\ \end{pmatrix}, {\tilde{a}}_{ij3} = \begin{pmatrix} 2 &{} 0 \\ 1 &{} 1 \\ 0 &{} 0 \\ \end{pmatrix}, {\tilde{a}}_{ij4} = \begin{pmatrix} 3 &{} 2 \\ 1 &{} -1 \\ 0 &{} 1 \\ \end{pmatrix}, \\ {\tilde{b}}_{ij1}= & {} \begin{pmatrix} -0.3450 &{} 1.0728 &{} 0.7438 &{} 0.4134\\ -0.2067 &{} 1.1965 &{} -0.4265 &{} -0.8661\\ \end{pmatrix},\\ {\tilde{b}}_{ij2}= & {} \begin{pmatrix} -0.3319 &{} 1.7409 &{} 1.0320 &{} 0.4483\\ -0.2242 &{} 1.1004 &{} -0.3217 &{} -0.5167 \end{pmatrix}, \\ {\tilde{b}}_{ij3}= & {} \begin{pmatrix} 1.5109 &{} 4.0568 &{} 0.2402 &{} -0.6376 \\ 0.3188 &{} 1.2533 &{} 0.0873 &{} -0.3755 \end{pmatrix}, x_{ij1} = \begin{pmatrix} -0.2052 &{} -0.1339 &{} 0.1514 &{} 0.1194\\ -0.0597 &{} -0.1616 &{} 0.0247 &{} 0.1936 \end{pmatrix}, \\ x_{ij2}= & {} \begin{pmatrix} 0.0218 &{} 0.4469 &{} 0.1470 &{} 0.0582\\ -0.0291 &{} 0.5066 &{} -0.1587 &{} -0.4178 \end{pmatrix}, x_{ij3} = \begin{pmatrix} 0.4236 &{} 0.9360 &{} -0.0146 &{} -0.2038\\ 0.1019 &{} 0.2271 &{} 0.0553 &{} -0.0378 \end{pmatrix}, \\ y_{ij1}= & {} \begin{pmatrix} 0.4783 &{} -1.6522\\ -0.5217 &{} 1.3478\\ 0.1304 &{} -0.0870 \end{pmatrix}, y_{ij2} = \begin{pmatrix} -0.5217 &{} 0.6522\\ 0.4783 &{} -0.3478\\ 0.1304 &{} 0.0870 \end{pmatrix}, y_{ij3} = \begin{pmatrix} -0.3043 &{} 0.6522\\ 0.6957 &{} -0.3478\\ -0.1739 &{} 0.0870 \end{pmatrix}, \\ y_{ij4}= & {} \begin{pmatrix} -0.7826 &{} -1.6522\\ 1.2174 &{} 1.3478\\ -0.3043 &{} -0.0870 \end{pmatrix}. \end{aligned}$$

Thus,

$$\begin{aligned} (\mathcal {A}*_2\mathcal {B})^\dag _{\mathcal {M},\mathcal {N}} = \begin{pmatrix} -0.4783 &{} 0.6522 &{} -0.0435\\ 0.5217 &{} -0.3478 &{} -0.0435\\ -0.1304 &{} 0.0870 &{} 0.2609 \end{pmatrix} = (\mathcal {B}_1)^\dag _{\mathcal {P},\mathcal {N}}*_2(\mathcal {A}_1)^\dag _{\mathcal {M},\mathcal {P}}. \end{aligned}$$

Hence, \( (\mathcal {A}*_2\mathcal {B})^\dag _{\mathcal {M},\mathcal {N}} = (\mathcal {B}_1)^\dag _{\mathcal {P},\mathcal {N}}*_2(\mathcal {A}_1)^\dag _{\mathcal {M},\mathcal {P}}.\)
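The identity of Theorem 15 can also be checked programmatically. Below is a minimal numerical sketch (not part of the original development) in Python/NumPy: reshaping each tensor into its unfolding matrix turns the Einstein product into ordinary matrix multiplication, so the weighted Moore–Penrose inverse can be evaluated on the unfoldings through the relation of Eq. (18), \(\mathcal {X}^{\dagger }_{\mathcal {M},\mathcal {N}} = \mathcal {N}^{-1/2}{*}(\mathcal {M}^{1/2}{*}\mathcal {X}{*}\mathcal {N}^{-1/2})^{\dagger }{*}\mathcal {M}^{1/2}\). The helper names `psd_sqrt`, `wpinv`, and `rand_pd`, as well as the random test data (with the same unfolded sizes as in Example 4), are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def psd_sqrt(M):
    # Hermitian square root of a symmetric positive definite matrix
    w, Q = np.linalg.eigh(M)
    return (Q * np.sqrt(w)) @ Q.T

def wpinv(X, M, N):
    # Weighted Moore-Penrose inverse via Eq. (18):
    #   X^+_{M,N} = N^{-1/2} (M^{1/2} X N^{-1/2})^+ M^{1/2}
    Mh, Nh = psd_sqrt(M), psd_sqrt(N)
    return np.linalg.solve(Nh, np.linalg.pinv(Mh @ X @ np.linalg.inv(Nh))) @ Mh

def rand_pd(n):
    # random symmetric positive definite weight
    X = rng.standard_normal((n, n))
    return X @ X.T + n * np.eye(n)

# Unfoldings of random tensors with the shapes of Example 4:
# A in R^{3x2x4} -> 3x8,  B in R^{2x4x3} -> 8x3,  P in R^{2x4x2x4} -> 8x8
A = rng.standard_normal((3, 8))
B = rng.standard_normal((8, 3))
M, P, N = rand_pd(3), rand_pd(8), rand_pd(3)

B1 = wpinv(A, M, P) @ A @ B          # B_1 = A^+_{M,P} *_M A *_N B
A1 = A @ B1 @ wpinv(B1, P, N)        # A_1 = A *_N B_1 *_L (B_1)^+_{P,N}

lhs = wpinv(A @ B, M, N)
rhs = wpinv(B1, P, N) @ wpinv(A1, M, P)
print(np.allclose(lhs, rhs))         # expected: True (up to round-off)
```

Replacing the random data with the entries listed above should reproduce the numbers of Example 4.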

Further, using Lemma 4 in Ji and Wei (2017) on an arbitrary-order tensor \(\mathcal {A} \in {\mathbb {C}}^{I_1\times \cdots \times I_M \times J_1 \times \cdots \times J_N}\) with Hermitian positive definite tensors \(\mathcal {M} \in {\mathbb {C}}^{I_1\times \cdots \times I_M \times I_1 \times \cdots \times I_M}\) and \(\mathcal {N} \in {\mathbb {C}}^{J_1\times \cdots \times J_N \times J_1 \times \cdots \times J_N} \) one can write the following identity:

$$\begin{aligned} {\mathfrak {R}}(\mathcal {A}^\dag _{\mathcal {M},\mathcal {N}}*_M\mathcal {A}) = {\mathfrak {R}}(\mathcal {A}^\#_{\mathcal {N},\mathcal {M}}). \end{aligned}$$
(34)

Using the above identity, a sufficient condition for the reverse-order law for the weighted Moore–Penrose inverse of tensors is presented next.

Corollary 6

Let \( \mathcal {A} \in {\mathbb {C}}^{I_1\times \cdots \times I_M \times J_1 \times \cdots \times J_N}, \mathcal {B} \in {\mathbb {C}}^{J_1\times \cdots \times J_N \times K_1 \times \cdots \times K_L}\). Let \( \mathcal {M} \in {\mathbb {C}}^{I_1\times \cdots \times I_M \times I_1 \times \cdots \times I_M}\), \(\mathcal {N} \in {\mathbb {C}}^{K_1\times \cdots \times K_L \times K_1 \times \cdots \times K_L} \) and \( \mathcal {P} \in {\mathbb {C}}^{J_1\times \cdots \times J_N \times J_1 \times \cdots \times J_N} \) be Hermitian positive definite tensors. If

$$\begin{aligned} {\mathfrak {R}}(\mathcal {B}) \subseteq {\mathfrak {R}}(\mathcal {A}^{\#}_{\mathcal {P},\mathcal {M}}) ~~~ and~~~ \mathcal {N}(\mathcal {B}^{\#}_{\mathcal {N}^{1/2},\mathcal {P}^{1/2}}) \subseteq \mathcal {N}(\mathcal {A}), \end{aligned}$$

then

$$\begin{aligned} (\mathcal {A}*_N\mathcal {B})^\dag _{\mathcal {M},\mathcal {N}} = \mathcal {B}^\dag _{\mathcal {P},\mathcal {N}}*_N\mathcal {A}^\dag _{\mathcal {M},\mathcal {P}}. \end{aligned}$$

Proof

From Theorem 15, we have \( (\mathcal {A}*_N\mathcal {B})^\dag _{\mathcal {M},\mathcal {N}} = (\mathcal {B}_1)_{\mathcal {P},\mathcal {N}}^\dag *_N(\mathcal {A}_1)_{\mathcal {M},\mathcal {P}}^\dag \), where

\( \mathcal {A}_1 = \mathcal {A}*_N\mathcal {B}_1*_L(\mathcal {B}_1)_{\mathcal {P},\mathcal {N}}^\dag \) and \( \mathcal {B}_1 = \mathcal {A}_{\mathcal {M},\mathcal {P}}^\dag *_M\mathcal {A}*_N\mathcal {B} \).

From Eq. (34) and the given hypothesis, we have

$$\begin{aligned} {\mathfrak {R}}(\mathcal {A}^\dag _{\mathcal {M},\mathcal {P}}*_M\mathcal {A}) = {\mathfrak {R}}(\mathcal {A}^{\#}_{\mathcal {P},\mathcal {M}}) \supseteq {\mathfrak {R}}(\mathcal {B}). \end{aligned}$$

So there exists a tensor \( \mathcal {Z} \in {\mathbb {C}}^{J_1 \times \cdots \times J_N \times K_1 \times \cdots \times K_L} \) such that \( \mathcal {B} = \mathcal {A}^\dag _{\mathcal {M},\mathcal {P}}*_M\mathcal {A}*_N\mathcal {Z}\). Now,

$$\begin{aligned} \mathcal {B}_1 = \mathcal {A}^\dag _{\mathcal {M},\mathcal {P}}*_M\mathcal {A}*_N\mathcal {B} = \mathcal {A}^\dag _{\mathcal {M},\mathcal {P}}*_M\mathcal {A}*_N\mathcal {A}^\dag _{\mathcal {M},\mathcal {P}}*_M\mathcal {A}*_N\mathcal {Z} = \mathcal {A}^\dag _{\mathcal {M},\mathcal {P}}*_M\mathcal {A}*_N\mathcal {Z} = \mathcal {B}. \end{aligned}$$

Hence, \( \mathcal {A}_1 = \mathcal {A}*_N\mathcal {B}*_L\mathcal {B}^\dag _{\mathcal {P},\mathcal {N}} \).

Further, we have \( \mathcal {N}(\mathcal {B}^{\#}_{\mathcal {N}^{1/2},\mathcal {P}^{1/2}}) \subseteq \mathcal {N}(\mathcal {A}) \), which is equivalent to

$$\begin{aligned} {\mathfrak {R}}(\mathcal {P}^{-1/2}*_N\mathcal {A}^*) = {\mathfrak {R}}(\mathcal {A}^*) \subseteq {\mathfrak {R}}[(\mathcal {B}^{\#}_{\mathcal {N}^{1/2},\mathcal {P}^{1/2}})^*] = {\mathfrak {R}}[(\mathcal {N}^{-1/2}*_L\mathcal {B}^* *_N\mathcal {P}^{1/2})^*]. \end{aligned}$$

Then from Lemma 6 (a), we have

$$\begin{aligned} (\mathcal {A}*_N\mathcal {P}^{-1/2})*_N(\mathcal {N}^{-1/2}*_L\mathcal {B}^* *_N\mathcal {P}^{1/2})^\dag *_L(\mathcal {N}^{-1/2}*_L\mathcal {B}^* *_N\mathcal {P}^{1/2}) = \mathcal {A}*_N\mathcal {P}^{-1/2}, \end{aligned}$$

which is equivalent to

$$\begin{aligned} (\mathcal {A}*_N\mathcal {P}^{-1/2})*_N[(\mathcal {P}^{1/2}*_N\mathcal {B}*_L\mathcal {N}^{-1/2})*_L(\mathcal {P}^{1/2}*_N\mathcal {B}*_L\mathcal {N}^{-1/2})^\dag ]^* = \mathcal {A}*_N\mathcal {P}^{-1/2}, \end{aligned}$$

that is

$$\begin{aligned} \mathcal {A}*_N\mathcal {B}*_L\mathcal {N}^{-1/2}*_L(\mathcal {P}^{1/2}*_N\mathcal {B}*_L\mathcal {N}^{-1/2})^\dag *_N\mathcal {P}^{1/2} = \mathcal {A}, \end{aligned}$$

i.e.

$$\begin{aligned} \mathcal {A}_1 = \mathcal {A}*_N\mathcal {B}*_L\mathcal {B}^\dag _{\mathcal {P},\mathcal {N}} = \mathcal {A}. \end{aligned}$$

Hence, \( (\mathcal {A}*_N\mathcal {B})^\dag _{\mathcal {M},\mathcal {N}} = \mathcal {B}^\dag _{\mathcal {P},\mathcal {N}}*_N\mathcal {A}^\dag _{\mathcal {M},\mathcal {P}} \). \(\square \)

We next present another characterization of the weighted Moore–Penrose inverse of the product of arbitrary-order tensors, as follows:

Theorem 16

Let \(\mathcal {A} \in {\mathbb {C}}^{I_1\times \cdots \times I_M \times J_1 \times \cdots \times J_N}\) and \( \mathcal {B} \in {\mathbb {C}}^{J_1\times \cdots \times J_N \times K_1 \times \cdots \times K_L}\).

Let \( \mathcal {M} \in {\mathbb {C}}^{I_1\times \cdots \times I_M \times I_1 \times \cdots \times I_M} \), \(\mathcal {N} \in {\mathbb {C}}^{K_1\times \cdots \times K_L \times K_1 \times \cdots \times K_L} \) and \( \mathcal {P} \in {\mathbb {C}}^{J_1\times \cdots \times J_N \times J_1 \times \cdots \times J_N} \) be three Hermitian positive definite tensors. Then

$$\begin{aligned} (\mathcal {A}*_N\mathcal {B})_{\mathcal {M},\mathcal {N}}^{{\dagger }} = (\mathcal {B}_1)^{\dagger }_{\mathcal {P},\mathcal {N}}{*_N}(\mathcal {A}_1)^{\dagger }_{\mathcal {M},\mathcal {P}}, \end{aligned}$$

where \( \mathcal {A}_1 = \mathcal {A}{*_N}\mathcal {B}{*_L}\mathcal {B}^{\dagger }_{\mathcal {P},\mathcal {I}_L} \) and \( \mathcal {B}_1 = (\mathcal {A}_1)^{\dagger }_{\mathcal {M},\mathcal {P}}{*_M}\mathcal {A}_1 {*_N}\mathcal {B}.\)

Proof

Let \( \mathcal {X} = \mathcal {A}{*_N}\mathcal {B} \) and \( \mathcal {Y} = (\mathcal {B}_1)^{\dagger }_{\mathcal {P},\mathcal {N}}{*_N}(\mathcal {A}_1)^{\dagger }_{\mathcal {M},\mathcal {P}}\). Now we have

$$\begin{aligned} \mathcal {A}{*_N}\mathcal {B} = \mathcal {A}{*_N}\mathcal {B}{*_L}\mathcal {B}^{\dagger }_{\mathcal {P},\mathcal {I}_L}{*_N}\mathcal {B} = \mathcal {A}_1{*_N}\mathcal {B} = \mathcal {A}_1 {*_N}(\mathcal {A}_1)^{\dagger }_{\mathcal {M},\mathcal {P}}{*_M}\mathcal {A}_1 {*_N}\mathcal {B} = \mathcal {A}_1 {*_N}\mathcal {B}_1. \end{aligned}$$
(35)

Now, using Eq. (35), we obtain

$$\begin{aligned} \mathcal {X}{*_L}\mathcal {Y} {*_M}\mathcal {X}= & {} \mathcal {A}_1{*_N}\mathcal {B}_1{*_L}(\mathcal {B}_1)^{\dagger }_{\mathcal {P},\mathcal {N}} {*_N}\mathcal {B}_1 = \mathcal {X}, \end{aligned}$$
(36)
$$\begin{aligned} \mathcal {Y}{*_M}\mathcal {X}{*_L}\mathcal {Y}= & {} (\mathcal {B}_1)^{\dagger }_{\mathcal {P},\mathcal {N}} {*_N}\mathcal {B}_1 {*_L}(\mathcal {B}_1)^{\dagger }_{\mathcal {P},\mathcal {N}}{*_N}(\mathcal {A}_1)^{\dagger }_{\mathcal {M},\mathcal {P}} = \mathcal {Y}, \end{aligned}$$
(37)
$$\begin{aligned} \mathcal {N}{*_L}\mathcal {Y}{*_M}\mathcal {X}= & {} \mathcal {N}{*_L}(\mathcal {B}_1)^{\dagger }_{\mathcal {P},\mathcal {N}} {*_N}\mathcal {B}_1 = (\mathcal {N}{*_L}\mathcal {Y}{*_M}\mathcal {X})^*. \end{aligned}$$
(38)

Further, using the following relations

$$\begin{aligned} \mathcal {B}_1 {*_L}\mathcal {B}^{\dagger }_{\mathcal {P},\mathcal {I}_L} = (\mathcal {A}_1)^{\dagger }_{\mathcal {M},\mathcal {P}}{*_M}\mathcal {A}_1 ~~and ~~\mathcal {B}_1 {*_L}\mathcal {B}^{\dagger }_{\mathcal {P},\mathcal {I}_L} = \mathcal {B}_1{*_L}(\mathcal {B}_1)^{\dagger }_{\mathcal {P},\mathcal {N}}{*_N}(\mathcal {A}_1)^{\dagger }_{\mathcal {M},\mathcal {P}}{*_M}\mathcal {A}_1, \end{aligned}$$

we have

$$\begin{aligned} (\mathcal {A}_1)^{\dagger }_{\mathcal {M},\mathcal {P}}{*_M}\mathcal {A}_1 = \mathcal {B}_1{*_L}(\mathcal {B}_1)^{\dagger }_{\mathcal {P},\mathcal {N}}. \end{aligned}$$

It follows that

$$\begin{aligned} \mathcal {M}{*_M}\mathcal {X}{*_L}\mathcal {Y} = \mathcal {M}{*_M}\mathcal {A}_1 {*_N}(\mathcal {A}_1)^{\dagger }_{\mathcal {M},\mathcal {P}} = (\mathcal {M}{*_M}\mathcal {X}{*_L}\mathcal {Y})^*. \end{aligned}$$
(39)

The relations (36)–(39) validate \( \mathcal {Y}= \mathcal {X}^{\dagger }_{\mathcal {M},\mathcal {N}}\). Hence, \((\mathcal {A}*_N\mathcal {B})_{\mathcal {M},\mathcal {N}}^{{\dagger }} = (\mathcal {B}_1)^{\dagger }_{\mathcal {P},\mathcal {N}}{*_N} (\mathcal {A}_1)^{\dagger }_{\mathcal {M},\mathcal {P}}.\) This completes the proof. \(\square \)

Exploiting the range and null space properties of arbitrary-order tensors, the last result establishes a sufficient condition for the triple reverse-order law for tensors.

Theorem 17

Let \( \mathcal {U}\in {\mathbb {C}}^{I_1\times \cdots \times I_M \times J_1 \times \cdots \times J_N} \), \( \mathcal {V}\in {\mathbb {C}}^{J_1\times \cdots \times J_N \times K_1 \times \cdots \times K_L} \) and

\( \mathcal {W}\in {\mathbb {C}}^{K_1\times \cdots \times K_L \times H_1 \times \cdots \times H_R} \). Let \(\mathcal {M} \in {\mathbb {C}}^{I_1\times \cdots \times I_M \times I_1 \times \cdots \times I_M} \) and \(\mathcal {N} \in {\mathbb {C}}^{H_1\times \cdots \times H_R \times H_1 \times \cdots \times H_R}\) be a pair of Hermitian positive definite tensors. If

$$\begin{aligned} {\mathfrak {R}}(\mathcal {W}) \subseteq {\mathfrak {R}}[(\mathcal {U} {*_N}\mathcal {V})^*] ~~ and~~ {\mathfrak {R}}(\mathcal {U}^*) \subseteq {\mathfrak {R}}(\mathcal {V}{*_L}\mathcal {W}). \end{aligned}$$

Then

$$\begin{aligned} (\mathcal {U}{*_N}\mathcal {V}{*_L}\mathcal {W})^\dag _{\mathcal {M},\mathcal {N}} = \mathcal {W}^{\dagger }_{\mathcal {I}_L,\mathcal {N}}{*_L}\mathcal {V}^{\dagger }{*_N}\mathcal {U}^{\dagger }_{\mathcal {M},\mathcal {I}_N}. \end{aligned}$$

Proof

Let \( \mathcal {A} = \mathcal {U}{*_N}\mathcal {V}{*_L}\mathcal {W}, ~~ \mathcal {W}_1 = (\mathcal {U}{*_N}\mathcal {V})^{\dagger }{*_M}\mathcal {A} \) and \( \mathcal {U}_1 = \mathcal {A} {*_R}(\mathcal {V}{*_L}\mathcal {W})^\dag \). From Eq. (12), we get

$$\begin{aligned} {\mathfrak {R}}(\mathcal {U}_1) = {\mathfrak {R}}(\mathcal {A}) ~~ \text {and}~~ {\mathfrak {R}}(\mathcal {W}_1^*) = {\mathfrak {R}}(\mathcal {A}^*). \end{aligned}$$

Also from Lemma 5(c), we get

$$\begin{aligned}&{\mathfrak {R}}[(\mathcal {W}_1^\dag )^*] ={\mathfrak {R}}(\mathcal {W}_1) \subseteq {\mathfrak {R}}[(\mathcal {U}*_N\mathcal {V})^\dag ] ={\mathfrak {R}}[(\mathcal {U}*_N\mathcal {V})^*]\\&\quad \text {and}~~ {\mathfrak {R}}(\mathcal {U}_1^\dag ) = {\mathfrak {R}}(\mathcal {U}_1^*) \subseteq {\mathfrak {R}}[(\mathcal {V}{*_L}\mathcal {W})^\dag ]^* = {\mathfrak {R}}(\mathcal {V}{*_L}\mathcal {W}). \end{aligned}$$

Applying Lemmas 5(a, b) and 6(a, b), we have

$$\begin{aligned} \mathcal {W}_1^\dag *_L\mathcal {V}^\dag *_N\mathcal {U}_1^\dag= & {} \mathcal {W}_1^\dag *_L\mathcal {W}_1*_R(\mathcal {V}*_L\mathcal {W})^\dag *_N\mathcal {U}_1^\dag = \mathcal {A}^{\dagger }{*_M}\mathcal {U}_1 {*_N}\mathcal {U}_1^{\dagger }= \mathcal {A}^{\dagger }, \end{aligned}$$

which is equivalent to

$$\begin{aligned} (\mathcal {U}{*_N}\mathcal {V}{*_L}\mathcal {W})^{\dagger }= [(\mathcal {U}{*_N}\mathcal {V})^{\dagger }{*_M}\mathcal {A}]^{\dagger }{*_L}\mathcal {V}^{\dagger }{*_N}[\mathcal {A}{*_R}(\mathcal {V}{*_L}\mathcal {W})^{\dagger }]^{\dagger }. \end{aligned}$$
(40)

Replacing \( \mathcal {U}\) and \(\mathcal {W} \) by \( \mathcal {M}^{1/2}{*_M}\mathcal {U} \) and \(\mathcal {W}{*_R}\mathcal {N}^{-1/2} \) in Eq. (40), along with Eq. (18) and Lemma 11(a, b), we have

$$\begin{aligned} \mathcal {A}^{\dagger }_{\mathcal {M},\mathcal {N}}= & {} \mathcal {N}^{-1/2}{*_R}[(\mathcal {M}^{1/2}{*_M}\mathcal {U}{*_N}\mathcal {V})^{\dagger }{*_M}\mathcal {M}^{1/2}{*_M}\mathcal {A}{*_R}\mathcal {N}^{-1/2}]^{\dagger }{*_L}\mathcal {V}^{\dagger }*_N \\&\quad [\mathcal {M}^{1/2}{*_M}\mathcal {A}{*_R}\mathcal {N}^{-1/2}{*_R}(\mathcal {V}{*_L}\mathcal {W}{*_R}\mathcal {N}^{-1/2})^{\dagger }]^{\dagger }{*_M}\mathcal {M}^{1/2} \\= & {} \mathcal {N}^{-1/2}{*_R}[\mathcal {W}_1{*_R}\mathcal {N}^{-1/2}]^{\dagger }{*_L}\mathcal {V}^{\dagger }{*_N}[\mathcal {M}^{1/2}{*_M}\mathcal {U}_1]^{\dagger }{*_M}\mathcal {M}^{1/2}\\= & {} (\mathcal {W}_1)^{\dagger }_{\mathcal {I}_L,\mathcal {N}}{*_L}\mathcal {V}^{\dagger }{*_N}(\mathcal {U}_1)^{\dagger }_{\mathcal {M},\mathcal {I}_N}. \end{aligned}$$

Applying Lemma 5(a, c) and Lemma 6(a) to the given conditions, we get

$$\begin{aligned} \mathcal {W} = (\mathcal {U}{*_N}\mathcal {V})^{\dagger }{*_M}(\mathcal {U}{*_N}\mathcal {V}){*_L}\mathcal {W} = \mathcal {W}_1~~\text { and}~~ \mathcal {U} = \mathcal {U}{*_N}(\mathcal {V}{*_L}\mathcal {W}){*_R}(\mathcal {V}{*_L}\mathcal {W})^{\dagger }= \mathcal {U}_1. \end{aligned}$$

Hence, \((\mathcal {U}{*_N}\mathcal {V}{*_L}\mathcal {W})^{\dagger }_{\mathcal {M},\mathcal {N}} = \mathcal {W}^{\dagger }_{\mathcal {I}_L,\mathcal {N}}{*_L}\mathcal {V}^{\dagger }{*_N}\mathcal {U}^{\dagger }_{\mathcal {M},\mathcal {I}_N}.\) \(\square \)
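Before moving to applications, the triple reverse-order law of Theorem 17 admits the same kind of numerical sanity check in the order-two (matrix) case, where every Einstein product reduces to matrix multiplication. The sketch below is only illustrative (it restates the `wpinv` helper from the earlier sketch for completeness) and chooses \(\mathcal {U}\), \(\mathcal {V}\), \(\mathcal {W}\) so that the two range hypotheses of the theorem hold: full column rank, invertible, and full row rank, respectively.

```python
import numpy as np

rng = np.random.default_rng(3)

def psd_sqrt(M):
    # Hermitian square root of a symmetric positive definite matrix
    w, Q = np.linalg.eigh(M)
    return (Q * np.sqrt(w)) @ Q.T

def wpinv(X, M, N):
    # Weighted Moore-Penrose inverse via Eq. (18)
    Mh, Nh = psd_sqrt(M), psd_sqrt(N)
    return np.linalg.solve(Nh, np.linalg.pinv(Mh @ X @ np.linalg.inv(Nh))) @ Mh

def rand_pd(n):
    # random symmetric positive definite weight
    X = rng.standard_normal((n, n))
    return X @ X.T + n * np.eye(n)

# U has full column rank, V is invertible, and W has full row rank (each with
# probability 1), so R(W) lies in R((U V)^*) and R(U^*) lies in R(V W).
U = rng.standard_normal((7, 4))
V = rng.standard_normal((4, 4))
W = rng.standard_normal((4, 6))
M, N = rand_pd(7), rand_pd(6)
I_J, I_K = np.eye(4), np.eye(4)      # identity weights on the J- and K-modes

lhs = wpinv(U @ V @ W, M, N)                                   # (U*V*W)^+_{M,N}
rhs = wpinv(W, I_K, N) @ np.linalg.pinv(V) @ wpinv(U, M, I_J)  # W^+_{I,N} V^+ U^+_{M,I}
print(np.allclose(lhs, rhs))                                   # expected: True
```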

4 Applications

This section is devoted to applications of the SVD and the Moore–Penrose inverse of tensors to a few 3D color images.

4.1 SVD for color images

The singular value decomposition is an attractive algebraic transform for image processing. According to Lemma 2, the tensor \(\mathcal {A}\) splits into a set of linearly independent components, each bearing its own energy contribution; that is, \(\mathcal {A}\) is represented by the orthonormal tensors \(\mathcal {U}\) and \(\mathcal {V}\) along with a diagonal tensor \(\mathcal {D}\) comprising the singular values of \(\mathcal {A}\). Thus, the tensor \(\mathcal {A}\) can be represented in terms of \(rshrank(\mathcal {A})\), i.e.,

$$\begin{aligned} \mathcal {A} = \displaystyle \sum _{i=1}^{r} \sigma _i\mathcal {U}_i*_1\mathcal {V}_i^T= \sigma _1\mathcal {U}_1*_1\mathcal {V}_1^T+\sigma _2\mathcal {U}_2*_1\mathcal {V}_2^T +\cdots +\sigma _r\mathcal {U}_r*_1\mathcal {V}_r^T \end{aligned}$$
(41)

where \(r=rshrank(\mathcal {A})\), \(\mathcal {V}_1, \mathcal {V}_2, \cdots , \mathcal {V}_{N}\) are the frontal slices of \(\mathcal {V}\), and \(\mathcal {U}_1, \mathcal {U}_2, \cdots , \mathcal {U}_M\) are the frontal slices of \(\mathcal {U}\), i.e.,

$$\begin{aligned} \mathcal {V} = [\mathcal {V}_1, \mathcal {V}_2, \cdots \mathcal {V}_N],~~\mathcal {U} = [\mathcal {U}_1, \mathcal {U}_2, \cdots \mathcal {U}_M], ~~\text {and}~~r=rshrank(\mathcal {A}). \end{aligned}$$
Fig. 1

a and f are true images. The reconstructions of images using SVD based on the Einstein product of tensors: b and g 5 singular values; c and h 15 singular values; d and i 25 singular values; e and j 200 singular values

Fig. 2

The reconstructions of images using SVD based on the t-product of tensors: a and e 5 singular values; b and f 15 singular values; c and g 25 singular values; d and h 200 singular values

It is well known that the singular values are arranged in decreasing order, and thus the last singular values have the least effect on the image. We exploit this property to reduce the space needed to store the image on a computer. For more details on the SVD, the reader is encouraged to see the following papers for matrices (Shim and Cho 1981; Lyra-Leite et al. 2012) and for the t-product of tensors (Kilmer et al. 2013; Kilmer and Martin 2011). Consider a positive integer k such that \(k\le r\). Hence, without using the very last singular values, we can compress the image. Truncating the sum in (41) after the first k terms, we obtain

$$\begin{aligned} \mathcal {A}_k = \sigma _1\mathcal {U}_1*_1\mathcal {V}_1^T+\sigma _2\mathcal {U}_2*_1\mathcal {V}_2^T +\cdots +\sigma _k\mathcal {U}_k*_1\mathcal {V}_k^T. \end{aligned}$$
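For readers who wish to experiment, the following minimal Python/NumPy sketch (not the code behind Figs. 1–3) illustrates the reshape-based truncated SVD: the image tensor is unfolded into a matrix, its first k singular triplets are kept, and the result is folded back. The function name `truncated_svd_image` and the random stand-in image are assumptions made only for illustration.

```python
import numpy as np

def truncated_svd_image(A, k):
    # Rank-k reconstruction A_k of a color image tensor A (m x n x 3),
    # computed through the SVD of the m x (3n) unfolding of A; the reshape
    # is what links this matrix SVD to the Einstein-product SVD used above.
    m, n, c = A.shape
    U, s, Vt = np.linalg.svd(A.reshape(m, n * c), full_matrices=False)
    A_k = (U[:, :k] * s[:k]) @ Vt[:k, :]
    return A_k.reshape(m, n, c)

# Illustrative usage with a random stand-in for a 400 x 512 x 3 image
A = np.random.default_rng(0).random((400, 512, 3))
A_5, A_25, A_200 = (truncated_svd_image(A, k) for k in (5, 25, 200))
```

Storing only the k leading factors requires \(k(m+3n+1)\) numbers instead of \(3mn\), which is the source of the compression.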

To illustrate the accuracy and efficiency of the SVD, we take into account the t-product-based SVD (see Kilmer et al. 2013; Kilmer and Martin 2011) and the Einstein product-based SVD. We consider two \(400\times 512\times 3\) color 3D images in Fig. 1a, f. Considering only five singular values of the associated tensor, we reconstruct the original image using the Einstein product-based SVD and present the results in Fig. 1b, g. In the same manner, Fig. 1c, h are reconstructed with 15 singular values, and Fig. 1d, i with 25 singular values. To reproduce the image as faithfully as the original, the number of singular values has to be increased. Finally, Fig. 1e, j are reconstructed with 200 singular values. Similarly, we have reconstructed images using the t-product-based SVD in Fig. 2. To determine the effectiveness of our reconstruction, we evaluate

$$\begin{aligned} \text {Relative error}=\frac{\left\| \mathcal {A}-\mathcal {A}_{k}\right\| _{F}}{\left\| \mathcal {A}\right\| _{F}}, \end{aligned}$$

where \(\mathcal {A}_k\) is the reconstructed image and

$$\begin{aligned} \left\| \mathcal {A} \right\| _F^2 =\displaystyle \sum _{i_1=1}^{I_1}\cdots \sum _{i_M=1}^{I_M}\sum _{j_1=1}^{J_1}\cdots \sum _{j_N=1}^{J_N} a^2_{{i_1\ldots i_M}{j_1\ldots j_N}}. \end{aligned}$$
(42)
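In code, the relative error of Eq. (42) reduces to a one-liner on the flattened arrays; the sketch below (again illustrative, assuming NumPy) can be applied to the reconstructions of the previous sketch, e.g. `relative_error(A, A_25)`.

```python
import numpy as np

def relative_error(A, A_k):
    # np.linalg.norm flattens an N-D array, which coincides with the
    # Frobenius norm of Eq. (42); the ratio is the relative error above.
    return np.linalg.norm(A - A_k) / np.linalg.norm(A)
```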

To measure the quality of reconstruction between the original image \(\mathcal {A}\) and the SVD-compressed images \(\mathcal {A}_k\) under the two tensor products (the Einstein product and the t-product), we plot the relative error in Fig. 3 and conclude that the t-product-based SVD gives a more accurate result compared to the Einstein product. But the drawback of the t-product lies in the multiplication of arbitrary-order tensors. Since the main aim of this paper is to focus on arbitrary-order tensors, we consider the reshape-operation-based SVD with the Einstein product for our study.

Fig. 3

Relative error (between compressed and original images) versus the number of singular values used

Fig. 4

a and d true images; b and e blurred noisy images; c and f reconstructed images

4.2 Moore–Penrose inverse for color images

We now discuss the reconstruction of an arbitrary-order image using the Moore–Penrose inverse of a tensor. The discrete model for a color image is represented as \(\mathcal {A}*_2\mathcal {X}=\mathcal {B},\) where the tensor \(\mathcal {B}\) is the blurred image, often corrupted by noise, obtained from the true image \(\mathcal {X}\), and \(\mathcal {A}\) is known as the blurring tensor. The authors of Huang et al. (2019) have discussed tensor forms of the global GMRES, MINRES, and SYMMLQ iterative methods to find approximate solutions of such ill-posed systems. Further, a few iterative methods (called LSQR and LSMR) have been discussed in Huang and Ma (2020). The t-product-based Moore–Penrose inverse may give a more accurate result, as with the SVD. Here our purpose is not to compare our tensor-based approach with other tensor-based methods, but rather to contribute a few characterizations of the weighted Moore–Penrose inverses of tensors and to study the reverse-order laws for this inverse. We use the Einstein product-based Moore–Penrose inverse to reconstruct the original image with the help of the blurring tensor \(\mathcal {A}\) and the blurred image \(\mathcal {B}\). We consider two blurred \(256\times 256\times 3\) color images \(\mathcal {B}\) formed from original images \(\mathcal {X}\). Then we have added random perturbations to \(\mathcal {B}\) with a noise level of 0.001 percent and shown the results in Fig. 4b, e. The two original images are shown in Fig. 4a, d. Using the least-squares solution \(\mathcal {A}^\dagger *_2\mathcal {B}\), we have reconstructed the true images, and the resulting images are displayed in Fig. 4c, f.
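To make the deblurring step concrete, the following small Python/NumPy sketch is illustrative only: it is not the experiment of Fig. 4, and the separable Gaussian blur, the \(32\times 32\) size, and the noise level are our own assumptions, chosen so that the plain pseudoinverse solution stays stable. It builds a blurring tensor \(\mathcal {A}\), forms a blurred noisy image \(\mathcal {B}=\mathcal {A}*_2\mathcal {X}\), and recovers \(\mathcal {X}\) from the least-squares solution \(\mathcal {A}^{\dagger }*_2\mathcal {B}\) computed through the \((n^2)\times (n^2)\) unfolding of \(\mathcal {A}\).

```python
import numpy as np

def blur_matrix(n, sigma=0.8):
    # Row-normalized 1-D Gaussian blurring matrix (illustrative choice)
    i = np.arange(n)
    H = np.exp(-(i[:, None] - i[None, :]) ** 2 / (2 * sigma ** 2))
    return H / H.sum(axis=1, keepdims=True)

n = 32                                   # small size for illustration
rng = np.random.default_rng(1)
X = rng.random((n, n, 3))                # stand-in for the "true" color image
H1, H2 = blur_matrix(n), blur_matrix(n)

# Separable blurring tensor A in R^{n x n x n x n}: a_{ijpq} = H1[i,p] * H2[j,q]
A = np.einsum('ip,jq->ijpq', H1, H2)

# Blurred, noisy image  B = A *_2 X  (Einstein product over the two image modes)
B = np.einsum('ijpq,pqk->ijk', A, X)
B += 1e-6 * rng.standard_normal(B.shape)

# Moore-Penrose inverse of A through its (n^2) x (n^2) unfolding
A_pinv = np.linalg.pinv(A.reshape(n * n, n * n)).reshape(n, n, n, n)

# Least-squares reconstruction  X_rec = A^+ *_2 B
X_rec = np.einsum('pqij,ijk->pqk', A_pinv, B)
print(np.linalg.norm(X_rec - X) / np.linalg.norm(X))   # relative reconstruction error
```

For strongly ill-conditioned blurs the plain pseudoinverse amplifies the noise, which is why the regularized and iterative solvers cited above are preferred in practice.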

5 Conclusion

In this paper, a novel SVD and full-rank decomposition of arbitrary-order tensors using the reshape operation is developed. Using this decomposition, we have studied the Moore–Penrose and the general weighted Moore–Penrose inverse of arbitrary-order tensors via the Einstein product. Further, the singular value decomposition has been used for 3D color image reconstruction, and an application of the Moore–Penrose inverse of arbitrary-order tensors is demonstrated in color image deblurring. We have also added some results on the range and null spaces to the existing theory. We then discussed a few characterizations of cancellation properties for the Moore–Penrose inverse of tensors. In addition, we have discussed the reverse-order laws for weighted Moore–Penrose inverses. In the future, it will be interesting to express additional identities of the weighted Moore–Penrose inverse in terms of the ordinary Moore–Penrose inverse for arbitrary-order tensors.