1 Introduction

A quaternion, discovered in 1843 by William Rowan Hamilton, has the form q = q1 + q2i + q3j + q4k, where q1, q2, q3, q4 ∈ R, i² = j² = k² = ijk = −1, and ij = −ji = k, jk = −kj = i, ki = −ik = j. With the further development of quaternion algebra, quaternion matrices have been widely used in computer science, quantum mechanics, signal and color image processing, control theory, geometric rotation and other fields [1,2,3,4,5,6,7,8]. In particular, since Professor Wei Musheng and his collaborators proposed real structure-preserving algorithms for quaternion matrices in recent years, research on and applications of quaternion matrix algorithms have advanced greatly. In papers [9,10,11,12,13,14,15,16,17,18], by using real structure-preserving algorithms, the authors studied the LU decomposition of quaternion matrices, the singular value decomposition of quaternion matrices, the eigenvalue problem of Hermitian quaternion matrices, the power method for eigenvalues of quaternion matrices, and so on.

As a fundamental tool of matrix algebra, the full rank decomposition remains an active topic. In the field of face recognition, the full rank decomposition avoids iterative steps and solves the sparse system directly, which saves considerable time and improves classification accuracy [20,21,22,23]. In paper [19], the authors proposed a fast and robust face recognition method named enhancing sparsity via full rank decomposition. The method first represents the test sample as a linear combination of the training data, as in sparse representation, then performs a full rank decomposition of the training data matrix, and finally obtains the generalized inverse of the training data matrix and solves for the general solution of the linear equation. In the field of system control, the full rank system compensator extends internal model control to structurally rank deficient systems, which avoids the restriction that the model control method is only suitable for square systems. In paper [24], the authors proposed an internal model control method for structured rank deficient systems based on the full rank decomposition. The system is converted into a column full rank system by designing a pre-compensator; a feedback compensator is then designed to improve the dynamic characteristics of the full rank system and reduce the difficulty of controller design.

Quaternion algebra and the full rank decomposition are widely applied in the fields above, and structure-preserving algorithms greatly improve the efficiency of quaternion matrix decompositions, with results that outperform the quaternion toolbox (QTFM) to a certain extent. We find that a complex structure-preserving algorithm is also efficient for the full rank decomposition of a quaternion matrix. The theoretical basis of the complex structure-preserving algorithm is the same as that of the real structure-preserving algorithm, but its implementation is simpler and the matrix dimensions are halved; moreover, a full rank decomposition algorithm for quaternion matrices in this setting has not yet been given in the literature. In this paper, by means of the Gauss elimination method for quaternion matrices, we derive a direct algorithm and a complex structure-preserving algorithm for the full rank decomposition of a quaternion matrix, and prove the full rank decomposition form of the generalized inverse of a quaternion matrix. In addition, using these two full rank decomposition algorithms, we give a fast algorithm for solving quaternion linear equations. The feasibility and efficiency of the algorithms are demonstrated by numerical examples. Finally, the algorithm is applied to the sparse representation classification of color images, where it classifies well.

Let R denote the real number field, C the complex number field, and Q = R ⊕ R i ⊕ R j ⊕ R k the quaternion ring, in which i² = j² = k² = ijk = −1, ij = −ji = k, jk = −kj = i, ki = −ik = j. For any matrix \(A=A_{1}+A_{2}\mathrm {i}+A_{3}\mathrm {j}+A_{4}\mathrm {k}\in \mathbf {Q}^{m\times n}\), \(\overline {A}=A_{1}-A_{2}\mathrm {i}-A_{3}\mathrm {j}-A_{4}\mathrm {k}\), \(A^{T}={A^{T}_{1}}+{A^{T}_{2}}\mathrm {i}+{A^{T}_{3}}\mathrm {j}+{A^{T}_{4}}\mathrm {k}\), and \(A^{*}={A^{T}_{1}}-{A^{T}_{2}}\mathrm {i}-{A^{T}_{3}}\mathrm {j}-{A^{T}_{4}}\mathrm {k}\) denote the conjugate, the transpose, and the conjugate transpose of the matrix A, respectively. Fm×n denotes the set of m × n matrices over F. For any quaternion q = q1 + q2i + q3j + q4k, \(\overline {q}=q_{1}-q_{2}\mathrm {i}-q_{3}\mathrm {j}-q_{4}\mathrm {k}\) is the conjugate of q, and the norm of q is defined to be \(\left \|q\right \|=\sqrt {q\overline {q}}= \sqrt {{q_{1}^{2}}+{q_{2}^{2}}+{q_{3}^{2}}+{q_{4}^{2}}}\).
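Hamilton's multiplication rules above can be sketched numerically; the following minimal illustration is our own (the Quat class and its names are not from the paper):

```python
import numpy as np

# Minimal sketch of Hamilton's rules; a hypothetical Quat class for illustration.
class Quat:
    def __init__(self, q1, q2, q3, q4):
        self.q = np.array([q1, q2, q3, q4], dtype=float)

    def __mul__(self, other):
        a1, a2, a3, a4 = self.q
        b1, b2, b3, b4 = other.q
        # Hamilton product, expanded using ij = k, jk = i, ki = j
        return Quat(a1*b1 - a2*b2 - a3*b3 - a4*b4,
                    a1*b2 + a2*b1 + a3*b4 - a4*b3,
                    a1*b3 - a2*b4 + a3*b1 + a4*b2,
                    a1*b4 + a2*b3 - a3*b2 + a4*b1)

    def conj(self):
        q1, q2, q3, q4 = self.q
        return Quat(q1, -q2, -q3, -q4)

    def norm(self):
        # ||q|| = sqrt(q1^2 + q2^2 + q3^2 + q4^2)
        return float(np.linalg.norm(self.q))

i, j, k = Quat(0, 1, 0, 0), Quat(0, 0, 1, 0), Quat(0, 0, 0, 1)
print((i*j).q, (j*k).q, (k*i).q)   # the 4-vectors of k, i, j
print((i*i).q)                     # [-1.  0.  0.  0.]
```

Note that the product is not commutative: swapping the factors flips the sign of the vector part, as in ij = −ji = k.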

2 Preliminaries

In this section, we first review some basic properties about quaternion matrices and their complex representation matrices, and then prove the full rank decomposition form of the generalized inverse matrix of a quaternion matrix.

For \(A=A_{1}+A_{2}\mathrm {i}+A_{3}\mathrm {j}+A_{4}\mathrm {k}=M_{1}+M_{2}\mathrm {j}\in \textbf {Q}^{m\times n}\), \(A_{1}, A_{2}, A_{3}, A_{4}\in \textbf {R}^{m\times n}\), a complex representation matrix Aσ of A is defined to be

$$ A^{\sigma}= \left[\begin{array}{cc} M_{1} & M_{2}\\ -\overline{M}_{2} & \overline{M}_{1} \end{array}\right] \in\textbf{C}^{2m\times 2n}, $$
(2.1)

where \(M_{1}=A_{1}+A_{2}\mathrm {i}, M_{2}=A_{3}+A_{4}\mathrm {i}\in \textbf {C}^{m\times n}\). By (2.1), we have

$$ A^{\sigma}=\left[\begin{array}{cc}A_{c}^{\sigma} & {Q_{m}^{T}}\overline{A_{c}^{\sigma}} \end{array}\right]=\left[\begin{array}{cc}A_{r}^{\sigma}\\ \overline{A_{r}^{\sigma}}Q_{n} \end{array}\right],\qquad {Q_{m}^{T}}A^{\sigma}Q_{n}=\overline{A^{\sigma}}, $$
(2.2)

where \(A_{r}^{\sigma }=\left [\begin {array}{cc}M_{1}&M_{2} \end {array}\right ], A_{c}^{\sigma }=\left [\begin {array}{cc} M_{1}\\-\overline {M}_{2} \end {array}\right ], Q_{t}=\left [\begin {array}{cc}0 & I_{t}\\-I_{t} & 0 \end {array}\right ]\). By (2.1) and (2.2), it is easy to prove the following results.

Proposition 2.1

[18] Let A, B ∈ Qm×n, C ∈ Qn×p, a ∈ R. Then

$$ \begin{array}{@{}rcl@{}} (A+B)^{\sigma}&=&A^{\sigma}+B^{\sigma}, (AC)^{\sigma}=A^{\sigma}C^{\sigma}, (aA)^{\sigma}=aA^{\sigma}, \end{array} $$
(2.3)
$$ \begin{array}{@{}rcl@{}} (A+B)_{r}^{\sigma}&=&A_{r}^{\sigma}+B_{r}^{\sigma}, (AC)_{r}^{\sigma}=A_{r}^{\sigma}C^{\sigma}, (aA)_{r}^{\sigma}=aA_{r}^{\sigma}. \end{array} $$
(2.4)
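These identities can be checked numerically. The sketch below builds Aσ from the pair (M1, M2) of (2.1) and verifies the product rule (AC)σ = AσCσ; the helper names rep and qmul are our own, assuming numpy is available:

```python
import numpy as np

rng = np.random.default_rng(0)

def rep(M1, M2):
    """Complex representation A^sigma of A = M1 + M2 j, per (2.1)."""
    return np.block([[M1, M2], [-M2.conj(), M1.conj()]])

def qmul(A, B):
    """Product of quaternion matrices in M1 + M2 j form:
    (M1 + M2 j)(N1 + N2 j) = (M1 N1 - M2 conj(N2)) + (M1 N2 + M2 conj(N1)) j."""
    A1, A2 = A
    B1, B2 = B
    return (A1 @ B1 - A2 @ B2.conj(), A1 @ B2 + A2 @ B1.conj())

m, n, p = 3, 4, 2
A = (rng.standard_normal((m, n)) + 1j*rng.standard_normal((m, n)),
     rng.standard_normal((m, n)) + 1j*rng.standard_normal((m, n)))
C = (rng.standard_normal((n, p)) + 1j*rng.standard_normal((n, p)),
     rng.standard_normal((n, p)) + 1j*rng.standard_normal((n, p)))

# (AC)^sigma = A^sigma C^sigma
lhs = rep(*qmul(A, C))
rhs = rep(*A) @ rep(*C)
print(np.allclose(lhs, rhs))  # True
```

The identity is exactly why the complex representation is useful: quaternion matrix products can be carried out entirely with complex arithmetic.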

For any matrix A ∈ Qm×n, by the Frobenius norm and 2-norm of a complex matrix, define the following Frobenius norm and 2-norm of a quaternion matrix:

$$ \left\|A\right\|_{F}=\frac{1}{\sqrt{2}}\left\| A^{\sigma}\right\|_{F},\ \ \ \left\|A\right\|_{2}=\left\| A^{\sigma}\right\|_{2}, $$
(2.5)

then the Frobenius norm and 2 norm are unitarily invariant norms of a quaternion matrix. From the above discussion, it is easy to prove the following results.

Proposition 2.2

Let A ∈ Qm×n, b ∈ Qm. Then

$$ \left\|A\right\|_{F}=\left\| A_{r}^{\sigma}\right\|_{F},\ \left\|b\right\|_{2}=\left\| b_{c}^{\sigma}\right\|_{2}. $$
(2.6)
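The norm identities of (2.5) and (2.6) can be verified directly; a small numpy check under our own naming:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 4, 5
M1 = rng.standard_normal((m, n)) + 1j*rng.standard_normal((m, n))
M2 = rng.standard_normal((m, n)) + 1j*rng.standard_normal((m, n))

A_sigma = np.block([[M1, M2], [-M2.conj(), M1.conj()]])   # (2.1)
A_r = np.hstack([M1, M2])                                  # first row block A_r^sigma

# ||A||_F = (1/sqrt 2) ||A^sigma||_F = ||A_r^sigma||_F, per (2.5) and (2.6):
# the two row blocks of A^sigma carry the same entries up to conjugation.
print(np.isclose(np.linalg.norm(A_sigma, 'fro') / np.sqrt(2),
                 np.linalg.norm(A_r, 'fro')))  # True
```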

Proposition 2.3

For any quaternion matrices A, B, we have

  1. (1)

    \(\text {rank}(A)=\frac {1}{2}\text {rank}(A^{\sigma });\)

  2. (2)

    \(\text {rank}(AB)\leqslant \min \limits \left \{\text {rank}(A),\text {rank}(B)\right \},\ in\ which\ A\in \textbf {Q}^{m\times n}, B\in \mathbf {Q}^{n\times p};\)

  3. (3)

    \(\text {rank}(A^{*})=\text {rank}(\overline {A})=\text {rank}(A^T),\ \text {rank}(A^{*}A)=\text {rank}(A),\ in\ which\ A\in \mathbf {Q}^{m\times n};\)

  4. (4)

rank(A) = rank(B) if and only if rank(Aσ) = rank(Bσ), in which A, B ∈ Qm×n.

From [18], suppose that A ∈ Qm×n. If there exists X ∈ Qn×m such that

$$ \begin{array}{@{}rcl@{}} (1)\ (AX)^{*}=AX;\quad\ (2)\ (XA)^{*}=XA;\quad\ (3)\ AXA=A;\quad\ (4)\ XAX=X, \end{array} $$
(2.7)

then X is called the generalized inverse matrix of A, denoted by A†, and A† satisfies the following properties,

  1. (1)

(A*)† = (A†)*;

  2. (2)

If rank(A) = m, then A† = A*(AA*)−1;

  3. (3)

If rank(A) = n, then A† = (A*A)−1A*.

Proposition 2.4

[18] Suppose that A ∈ Qm×n, b ∈ Qm. When the quaternion linear equation Ax = b is regarded as a least squares problem, its solution set coincides with that of the compatible linear system Ax = AA†b. A general solution of the quaternion least squares problem has the expression x = A†b + (I − A†A)y, in which y ∈ Qn is an arbitrary quaternion vector.

Theorem 2.5

Suppose that matrix A has the full rank decomposition A = BC, in which B ∈ Qm×r, C ∈ Qr×n, rank(B) = rank(C) = r > 0. Then A† = C†B† = C*(CC*)−1(B*B)−1B* is a generalized inverse matrix of A.

Proof 1

Since B is a column full rank matrix and C is a row full rank matrix, it is easy to get B† = (B*B)−1B*, C† = C*(CC*)−1. Let X = C†B†; it is easy to verify that X satisfies (2.7), i.e., X is the generalized inverse matrix of A, and A† = C†B† = C*(CC*)−1(B*B)−1B*. □
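Theorem 2.5 can be illustrated in the complex special case (a quaternion matrix with A3 = A4 = 0, which keeps the sketch short). The code below forms A† = C*(CC*)−1(B*B)−1B* from a full rank decomposition and checks the four Penrose conditions (2.7); all names are our own:

```python
import numpy as np

rng = np.random.default_rng(2)

# A rank-2 complex matrix built directly from its full rank factors.
B = rng.standard_normal((5, 2)) + 1j*rng.standard_normal((5, 2))   # full column rank
C = rng.standard_normal((2, 4)) + 1j*rng.standard_normal((2, 4))   # full row rank
A = B @ C

# Theorem 2.5: A^+ = C^*(CC^*)^{-1}(B^*B)^{-1}B^*
Bh, Ch = B.conj().T, C.conj().T
A_pinv = Ch @ np.linalg.inv(C @ Ch) @ np.linalg.inv(Bh @ B) @ Bh

# Check the four Penrose conditions (2.7)
print(np.allclose(A @ A_pinv @ A, A))                  # condition (3)
print(np.allclose(A_pinv @ A @ A_pinv, A_pinv))        # condition (4)
print(np.allclose((A @ A_pinv).conj().T, A @ A_pinv))  # condition (1)
print(np.allclose((A_pinv @ A).conj().T, A_pinv @ A))  # condition (2)
```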

3 Gaussian elimination algorithm for full rank decomposition of a quaternion matrix

The Gauss elimination method is widely used in matrix decompositions, for example, the full rank decomposition, LU decomposition, LDL* decomposition, and Cholesky decomposition. In this section, we derive the full rank decomposition of a quaternion matrix by the Gauss elimination method, and give a direct algorithm for the full rank decomposition.

Given a quaternion vector \(x=\left (x_{1},x_{2},\cdots ,x_{n}\right )^{T}\in \textbf {Q}^{n}\), if xi≠ 0, then

$$ l_{i}=\left( 0,\cdots,0,l_{i+1,i},l_{i+2,i},{\cdots} l_{n,i}\right)^{T}, $$
(3.1)

where \(l_{k,i}=x_{k}x_{i}^{-1}=\frac {x_{k}\overline {x}_{i}}{\left \| x_{i} \right \|^{2}},\ k=i+1,\cdots ,n\). Define the following quaternion Gauss transformation matrix,

$$ L_{i}=I_{n}-l_{i}{e_{i}^{T}}=\left[\begin{array}{llllll} 1 & {\cdots} & 0 & 0 & {\cdots} & 0\\ {\vdots} & {\ddots} &{\vdots} &{\vdots} & {\ddots} &\vdots\\ 0 & {\cdots} & 1 & 0 & {\cdots} & 0\\ 0 & {\cdots} &-l_{i+1,i} & 1 & {\cdots} & 0\\ {\vdots} & {\ddots} &{\vdots} &{\vdots} & {\ddots} &\vdots\\ 0 & {\cdots} &-l_{n,i} & 0 & {\cdots} & 1 \end{array}\right], $$
(3.2)

clearly, \(L_{i}x=\left (x_{1},\cdots ,x_{i},0,\cdots ,0\right )^{T}\). Li, li, and lk,i denote the quaternion Gauss transformation matrix, the quaternion Gauss transformation vector, and the quaternion multiplier, respectively.
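In the complex special case, the Gauss transformation (3.1)-(3.2) can be sketched as follows (the function name gauss_transform is ours):

```python
import numpy as np

def gauss_transform(x, i):
    """L_i = I - l_i e_i^T with l_{k,i} = x_k x_i^{-1}, per (3.1)-(3.2);
    complex special case, 0-based index i."""
    n = x.shape[0]
    l = np.zeros(n, dtype=complex)
    l[i+1:] = x[i+1:] * x[i].conj() / abs(x[i])**2   # x_k conj(x_i) / ||x_i||^2
    e = np.zeros(n)
    e[i] = 1.0
    return np.eye(n) - np.outer(l, e)

x = np.array([2.0 + 1j, 1.0 - 1j, 3.0, -1j])
L = gauss_transform(x, 1)
# entries below position i are annihilated, entries up to i are untouched
print(np.round(L @ x, 12))
```

Applying L to x subtracts x_k x_i^{-1} · x_i from each entry below position i, which zeroes it exactly, mirroring the quaternion case in the text.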

For any matrix \(A=(a_{ij})\in \textbf {Q}^{m\times n}\), the process of full rank decomposition of matrix A is as follows.

  1. (1)

    If a11≠ 0, construct a quaternion Gauss transformation matrix L1 such that

    $$ L_{1}A=\left[\begin{array}{ccccc} a_{11}^{(1)}&a_{12}^{(1)}&\cdots&a_{1n}^{(1)}\\ 0&a_{22}^{(2)}&\cdots&a_{2n}^{(2)}\\ \vdots&\vdots&\vdots&\vdots\\ 0&a_{m2}^{(2)}&\cdots&a_{mn}^{(2)} \end{array}\right]=A^{(2)},\ \text{where}\ L_{1}=\left[\begin{array}{ccccc} 1 & & & & \\ -l_{21} &1 & & & \\ -l_{31} & &1 & & \\ {\vdots} & & &\ddots& \\ -l_{m1} & & & & 1 \end{array}\right]. $$
    (3.3)
  2. (2)

If a11 = 0, suppose that there exists ai1 ≠ 0. Interchange rows 1 and i of matrix A, i.e., there exists a permutation matrix P1 such that

    $$ L_{1}P_{1}A=A^{(2)}. $$
    (3.4)
  3. (3)

Similarly, after step k − 1 we have the following result,

    $$ A^{(k)}=\left[\begin{array}{cccccc} a_{11}^{(1)} &a_{12}^{(1)}&{\cdots} &a_{1k}^{(1)}&\cdots&a_{1n}^{(1)}\\ & a_{22}^{(2)}&{\cdots} &a_{2k}^{(2)}&\cdots&a_{2n}^{(2)}\\ & &{\ddots} &{\vdots} &\vdots&\vdots\\ & & &a_{kk}^{(k)}&\cdots&a_{kn}^{(k)}\\ & & &{\vdots} &\vdots&\vdots\\ & & &a_{mk}^{(k)}&\cdots&a_{mn}^{(k)} \end{array}\right]. $$
    (3.5)

In the above process, if \(a_{kk}^{(k)}=a_{k+1,k}^{(k)}=\cdots =a_{mk}^{(k)}=0\), then move on to the elements of the (k + 1)-th column. After r steps as above, the original quaternion matrix is converted into the following upper trapezoidal form,

$$ \begin{array}{@{}rcl@{}} L_{r}P_{r}{\cdots} L_{1}P_{1}A&=&\left[\begin{array}{ccccc} a^{(1)}_{11} & {\cdots} & a^{(1)}_{1r} & {\cdots} & a^{(1)}_{1n} \\ {\vdots} & {\ddots} & {\vdots} & & {\vdots} \\ 0 & {\cdots} & a^{(r)}_{rr} & {\cdots} & a^{(r)}_{rn} \\ 0 & & {\cdots} & & 0 \\ {\vdots} & & & & {\vdots} \\ 0 & & {\cdots} & & 0 \end{array}\right]=\left[\begin{array}{cc}C \\ 0 \end{array}\right]\\ &\Rightarrow& A=P^{-1}L^{-1}\left[\begin{array}{cc}C \\ 0 \end{array}\right]=\left[\begin{array}{cc}B & * \end{array}\right]\left[\begin{array}{cc}C \\ 0 \end{array}\right]=BC, \end{array} $$
(3.6)

in which \(L=L_{r}{\cdots } L_{1},\ P=Q_{r}{\cdots } Q_{2}P_{1},\ Q_{i}=(L_{i-1}{\cdots } L_{1})^{-1}P_{i}(L_{i-1}{\cdots } L_{1}),\ 2\leqslant i\leqslant r,\ L_{r}P_{r}{\cdots } L_{1}P_{1}A=L_{r}P_{r}{\cdots } L_{2}L_{1}Q_{2}P_{1}A=L_{r}{\cdots } L_{1}Q_{r}{\cdots } Q_{2}P_{1}A=LPA\).
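The elimination process (3.3)-(3.6) can be sketched in the complex special case. The function below accumulates the row operations, extracts C as the nonzero rows and B as the first r columns of the inverse of the accumulated operations; the names and the pivot-by-largest-modulus choice are our own assumptions:

```python
import numpy as np

def full_rank_decomposition(A, tol=1e-10):
    """Full rank decomposition A = B C by Gauss elimination with row
    interchanges, complex special case of (3.3)-(3.6)."""
    m, n = A.shape
    U = A.astype(complex).copy()
    E = np.eye(m, dtype=complex)      # accumulates the row operations: E A = U
    row = 0
    for col in range(n):
        if row == m:
            break
        p = row + int(np.argmax(np.abs(U[row:, col])))
        if abs(U[p, col]) <= tol:
            continue                  # whole sub-column is zero: next column
        U[[row, p]] = U[[p, row]]
        E[[row, p]] = E[[p, row]]
        mult = U[row+1:, col] / U[row, col]
        U[row+1:] -= np.outer(mult, U[row])
        E[row+1:] -= np.outer(mult, E[row])
        row += 1
    r = row
    C = U[:r]                         # row full rank factor, as in (3.6)
    B = np.linalg.inv(E)[:, :r]       # first r columns of (LP)^{-1}
    return B, C

rng = np.random.default_rng(3)
A = (rng.standard_normal((6, 3)) + 1j*rng.standard_normal((6, 3))) @ \
    (rng.standard_normal((3, 5)) + 1j*rng.standard_normal((3, 5)))   # rank 3
B, C = full_rank_decomposition(A)
print(B.shape, C.shape)               # (6, 3) (3, 5)
print(np.allclose(B @ C, A))          # True
```

Since E records exactly the transformations L_rP_r⋯L_1P_1, the identity A = E⁻¹[C; 0] gives B as the first r columns of E⁻¹, matching (3.6).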

Theorem 3.1

Let A ∈ Qm×n, rank(A) = r. Then there exist B ∈ Qm×r, C ∈ Qr×n with rank(B) = rank(C) = r, such that A = BC.

Next, we give a direct algorithm for the full rank decomposition of a quaternion matrix. This algorithm is based on the quaternion toolbox (QTFM) and decomposes the quaternion matrix directly.

(Algorithm 1)

4 Complex structure-preserving algorithm for the full rank decomposition of quaternion matrices

In this section, based on the algorithm of the previous section, we use the isomorphic complex representation of a quaternion matrix to transform the corresponding operations to the complex field, which reduces the matrix dimension and the amount of calculation, and we obtain a structure-preserving algorithm for the full rank decomposition.

Let \(l_{i}=l_{i}^{(1)}+l_{i}^{(2)}\mathrm {j}\in \textbf {Q}^{n}\), where \(l_{i}^{(j)}=\left (0,\cdots ,0,l_{i+1,i}^{(j)},l_{i+2,i}^{(j)},\cdots ,l_{n,i}^{(j)}\right )^{T}, i=1,2,\cdots , n-1, j=1,2\). By (3.2), the complex representation matrix of the Gauss transformation matrix Li is as follows:

$$ L_{i}^{\sigma}=\left[\begin{array}{cc} I_{n}-l_{i}^{(1)}{e_{i}^{T}} & -l_{i}^{(2)}{e_{i}^{T}}\\ \overline{l_{i}^{(2)}}{e_{i}^{T}} & I_{n}-\overline{l_{i}^{(1)}}{e_{i}^{T}} \end{array}\right]\in\textbf{C}^{2n\times 2n}, $$
(4.1)

it is easy to get,

$$ \left(L_{i}^{\sigma}\right)^{-1}=\left(I_{n}+l_{i}{e_{i}^{T}}\right)^{\sigma}=\left[\begin{array}{cc} I_{n}+l_{i}^{(1)}{e_{i}^{T}} & l_{i}^{(2)}{e_{i}^{T}}\\ -\overline{l_{i}^{(2)}}{e_{i}^{T}} & I_{n}+\overline{l_{i}^{(1)}}{e_{i}^{T}} \end{array}\right]. $$
(4.2)

For any quaternion vector x = x(1) + x(2)j ∈ Qn, \(x^{(j)}=\left (x_{1}^{(j)},\cdots ,x_{n}^{(j)}\right )^{T}\in \textbf {C}^{n}, j=1,2\), if we wish to convert the k-th (i < k ≤ n) entry of x to 0 (assuming xi ≠ 0), let

$$ l_{i}^{(j)}=\left( 0,\cdots,0,l_{k,i}^{(j)},0,\cdots,0\right)^{T}, i<k\leq n, j=1,2, $$
(4.3)

where \( l_{k,i}^{(1)} = \left (x_{k}^{(1)}\overline {x_{i}^{(1)}} + x_{k}^{(2)}\overline {x_{i}^{(2)}} \right )/{\left \| {{x_{i}}} \right \|^{2}}, l_{k,i}^{(2)} = \left (- x_{k}^{(1)}x_{i}^{(2)} + x_{k}^{(2)}x_{i}^{(1)}\right )\) \(/{\left \| {{x_{i}}} \right \|^{2}}, {\left \| {{x_{i}}} \right \|^{2}} = {\left \| {x_{i}^{(1)}} \right \|^{2}} + {\left \| {x_{i}^{(2)}} \right \|^{2}}\), then Lix = (x1, x2,⋯xk− 1,0,xk+ 1,⋯xn)T.
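The closed forms for l(1)k,i and l(2)k,i above can be checked against the quaternion product xk xi−1 computed in complex-pair form (a small numpy sketch; helper names are ours):

```python
import numpy as np

rng = np.random.default_rng(4)

def qmul(a, b):
    """Product of quaternion scalars in complex-pair form:
    (a1 + a2 j)(b1 + b2 j) = (a1 b1 - a2 conj(b2)) + (a1 b2 + a2 conj(b1)) j."""
    return (a[0]*b[0] - a[1]*np.conj(b[1]), a[0]*b[1] + a[1]*np.conj(b[0]))

xk = (rng.standard_normal() + 1j*rng.standard_normal(),
      rng.standard_normal() + 1j*rng.standard_normal())
xi = (rng.standard_normal() + 1j*rng.standard_normal(),
      rng.standard_normal() + 1j*rng.standard_normal())
norm2 = abs(xi[0])**2 + abs(xi[1])**2

# l_{k,i} = x_k x_i^{-1}, with x_i^{-1} = conj(x_i)/||x_i||^2 and
# conj(x_i) = (conj(x_i^(1)), -x_i^(2)) in complex-pair form
direct = qmul(xk, (np.conj(xi[0]) / norm2, -xi[1] / norm2))

# the closed forms used in (4.3)
l1 = (xk[0]*np.conj(xi[0]) + xk[1]*np.conj(xi[1])) / norm2
l2 = (-xk[0]*xi[1] + xk[1]*xi[0]) / norm2
print(np.isclose(direct[0], l1), np.isclose(direct[1], l2))  # True True
```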

For any matrix \(A={A_{1}}+{A_{2}}\mathrm {i}+{A_{3}}\mathrm {j}+{A_{4}}\mathrm {k}={M_{1}}+{M_{2}}\mathrm {j}\in {\textbf {{Q}}^{m \times n}}, {M_{1}}=\left (a_{ij}^{(1)}\right ), {M_{2}}= \left (a_{ij}^{(2)}\right ) \in \textbf {C}^{m \times n}\), let the initial matrix be as follows,

(4.4)
  1. (1)

    If \({\left \| {{a_{11}}} \right \|^{2}} = {\left \| {a_{11}^{(1)}} \right \|^{2}} + {\left \| {a_{11}^{(2)}} \right \|^{2}} >0\), construct the complex representation matrix of Gaussian transformation matrix L1,

    (4.5)

in which \(l_{i1}^{(1)} = \left (a_{i1}^{(1)}\overline {a_{11}^{(1)}} + a_{i1}^{(2)}\overline {a_{11}^{(2)}}\right )/{\left \| {{a_{11}}} \right \|^{2}},\ l_{i1}^{(2)} = \left (- a_{i1}^{(1)}a_{11}^{(2)} + a_{i1}^{(2)}a_{11}^{(1)}\right )\) \(/{\left \| {{a_{11}}} \right \|^{2}}, i=2,...,m\), then,

    (4.6)

clearly, \(L_{1}^{\sigma } {A^{\sigma }}\) is the complex representation matrix of the matrix \(\left [ {\begin {array}{*{20}{c}}{\hat {a}_{11}^{(1)}} & {\hat {M}_{12}^{(1)}} \\0 & {\hat {M}_{22}^{(1)}} \end {array}} \right ] + \left [ {\begin {array}{*{20}{c}}{\hat {a}_{11}^{(2)}} & {\hat {M}_{12}^{(2)}} \\0 & {\hat {M}_{22}^{(2)}} \end {array}} \right ]{\text {j}}\), where \(\hat {M}_{12}^{(1)},{\text { }}\hat {M}_{12}^{(2)} \in \textbf {C}^{1 \times (n - 1)}, \hat {M}_{22}^{(1)},{\text { }}\hat {M}_{22}^{(2)} \in \textbf {C}^{(m - 1) \times (n - 1)}\) are the submatrices of M1, M2 after the Gauss transformation.

  2. (2)

    If \({\left \| {{a_{11}}} \right \|^{2}} = {\left \| {a_{11}^{(1)}} \right \|^{2}} + {\left \| {a_{11}^{(2)}} \right \|^{2}} = 0\), choose i0(2 ≤ i0m) such that, \({\left \| {{a_{{i_{0}}1}}} \right \|^{2}} = {\left \| {a_{{i_{0}}1}^{(1)}} \right \|^{2}} + {\left \| {a_{{i_{0}}1}^{(2)}} \right \|^{2}} > 0\), interchange rows 1 and i0 of A, i.e., there exists a real permutation matrix P1 such that,

    (4.7)

in which case \(P_{1}^{\sigma }A^{\sigma }\) satisfies case (1); by (4.6), construct the matrix L1 such that

    (4.8)

clearly, \(L_{1}^{\sigma } P_{1}^{\sigma } {A^{\sigma }}\) is a complex representation matrix of the matrix \(\left [ {\begin {array}{*{20}{c}}{\hat {a}_{11}^{(1)}} & {\hat {M}_{12}^{(1)}} \\0 & {\hat {M}_{22}^{(1)}} \end {array}} \right ] + \left [ {\begin {array}{*{20}{c}}{\hat {a}_{11}^{(2)}} & {\hat {M}_{12}^{(2)}} \\0 & {\hat {M}_{22}^{(2)}} \end {array}} \right ]{\text {j}}\), in which \(\hat {M}_{12}^{(1)},{\text { }}\hat {M}_{12}^{(2)} \in \textbf {C}^{1 \times (n - 1)},\hat {M}_{22}^{(1)},{\text { }}\hat {M}_{22}^{(2)} \in \textbf {C}^{(m - 1) \times (n - 1)}\) are the submatrices of M1, M2 after the Gauss transformation.

After r steps of the above process, M1, M2 are reduced to upper trapezoidal form, i.e.,

(4.9)

For uniformity of notation, P1,⋯ ,Pr are all displayed above; in the actual computation not all of the Pi (1 ≤ i ≤ r) necessarily occur, and \({\hat {M}^{(1)}}, {\hat {M}^{(2)}} \in \textbf {C}^{r \times n}\) are the submatrices of M1, M2 after the Gauss transformations. Then the complex representation matrices of the full rank decomposition factors are

$$ {C^{\sigma} } = \left[ {\begin{array}{*{20}{c}} {{{\hat{M}}^{(1)}}} & {{{\hat{M}}^{(2)}}} \\ { - {{\overline {\hat{M}} }^{(2)}}} & {{{\overline {\hat{M}} }^{(1)}}} \end{array}} \right] \in \textbf{C}^{2r \times 2n},\ B^{\sigma} = {P^{\sigma} }{L^{\sigma} }(:,{\text{ }}[1:r,\ n+1:n+r]) \in \textbf{C}^{2m \times 2r}, $$
(4.10)

where \(P^{\sigma }=\left (Q_{r}^{\sigma }{\cdots } Q_{2}^{\sigma } P_{1}^{\sigma }\right )^{-1},\ Q_{i}^{\sigma }=\left (L_{i-1}^{\sigma }{\cdots } L_{1}^{\sigma }\right )^{-1}P_{i}^{\sigma }\left (L_{i-1}^{\sigma }{\cdots } L_{1}^{\sigma }\right ),\ 2\leqslant i\leqslant r,\ {L^{\sigma }}=\left (L_{r}^{\sigma } {\cdots } L_{1}^{\sigma } \right )^{- 1}\).
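The structure-preserving elimination of this section can be sketched compactly: the function below works only on the pair (M1, M2), i.e., the first row block of Aσ, using the multipliers of (4.3). Recovering B through B = AC† is our own shortcut for the sketch, not the paper's column-extraction formula (4.10); all names are assumptions:

```python
import numpy as np

def rep(M1, M2):
    """Complex representation (2.1) of A = M1 + M2 j."""
    return np.block([[M1, M2], [-M2.conj(), M1.conj()]])

def qfrd(M1, M2, tol=1e-10):
    """Structure-preserving Gauss elimination on the pair (M1, M2) only;
    returns the row full rank factor C = (C1, C2)."""
    M1, M2 = M1.astype(complex).copy(), M2.astype(complex).copy()
    m, n = M1.shape
    row = 0
    for col in range(n):
        if row == m:
            break
        # pivot by largest quaternion norm |a|^2 = |a1|^2 + |a2|^2
        norms2 = np.abs(M1[row:, col])**2 + np.abs(M2[row:, col])**2
        p = row + int(np.argmax(norms2))
        if norms2[p - row] <= tol:
            continue
        for M in (M1, M2):
            M[[row, p]] = M[[p, row]]
        a1, a2 = M1[row, col], M2[row, col]
        nrm2 = abs(a1)**2 + abs(a2)**2
        # quaternion multipliers l = l1 + l2 j, as in (4.3)
        l1 = (M1[row+1:, col]*np.conj(a1) + M2[row+1:, col]*np.conj(a2)) / nrm2
        l2 = (-M1[row+1:, col]*a2 + M2[row+1:, col]*a1) / nrm2
        # quaternion row update: (u1 + u2 j) -= (l1 + l2 j)(v1 + v2 j)
        v1, v2 = M1[row].copy(), M2[row].copy()
        M1[row+1:] -= np.outer(l1, v1) - np.outer(l2, v2.conj())
        M2[row+1:] -= np.outer(l1, v2) + np.outer(l2, v1.conj())
        row += 1
    return M1[:row], M2[:row]

rng = np.random.default_rng(5)
def rq(m, n):  # random quaternion matrix in (M1, M2) form
    return (rng.standard_normal((m, n)) + 1j*rng.standard_normal((m, n)),
            rng.standard_normal((m, n)) + 1j*rng.standard_normal((m, n)))

Bq, Cq = rq(5, 2), rq(2, 4)
S = rep(*Bq) @ rep(*Cq)           # A^sigma of a rank-2 quaternion product
M1, M2 = S[:5, :4], S[:5, 4:]     # first row block gives back (M1, M2)
C1, C2 = qfrd(M1, M2)
print(C1.shape)                   # (2, 4): the quaternion rank is recovered
# recover B = A C^+ via the complex representation, then check A = BC
Cs = rep(C1, C2)
Bs = S @ Cs.conj().T @ np.linalg.inv(Cs @ Cs.conj().T)
print(np.allclose(Bs @ Cs, S))    # True
```

Only m × 2n complex numbers are stored and updated per step, half the work of eliminating on the full 2m × 2n matrix Aσ, which is the point of the structure-preserving formulation.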

Theorem 4.1

Suppose that A ∈ Qm×n, rank(A) = r. Then there exist Bσ ∈ C2m×2r, Cσ ∈ C2r×2n with rank(Bσ) = rank(Cσ) = 2r such that Aσ = BσCσ, i.e., A = BC.

By (2.2) and Proposition 2.1, in the actual program we only need to operate on the first row block of the complex representation matrix. Next, we give the specific algorithm for the above structure-preserving process.

(Algorithm 2)

Example 4.1

For a quaternion matrix \(A={A_{1}}+{A_{2}}\mathrm {i}+{A_{3}}\mathrm {j}+{A_{4}}\mathrm {k}\in {{\textbf {Q}}^{m \times n}}\), let m = n = 10 : 10 : 500, A1 = rand(m, n), A2 = rand(m, n), A3 = rand(m, n), A4 = rand(m, n). We calculate the CPU time and relative error \(\left (\eta _{k}=\frac {\left \|{{A_{k}}-{B_{k}}{C_{k}}}\right \|}{\left \|{A_{k}}\right \|}\right )\) of the two algorithms for the full rank decomposition of random quaternion matrices of order 10 to 500, as follows (Figs. 1 and 2).

Fig. 1 CPU time scatter diagram

Fig. 2 Error scatter diagram of Algorithms 1 and 2

Example 4.1 shows that the error of Algorithm 2 is smaller than that of Algorithm 1, and its CPU running time is also obviously shorter. It can be seen that the complex structure-preserving algorithm for the full rank decomposition of quaternion matrices is more efficient, with smaller error and shorter running time, which facilitates the analysis and discussion of a quaternion matrix.

5 A fast algorithm for solving the quaternion linear equations

The main application of the full rank decomposition is to compute the generalized inverse matrix, and then the general solution of the corresponding linear equations; the process of the full rank decomposition also yields the rank of the matrix. In this section, we extend the above two full rank decomposition algorithms, give two algorithms for solving quaternion linear equations, and compare their feasibility and efficiency. Finally, the complex structure-preserving algorithm of the full rank decomposition is applied to the sparse representation of color images, and the test images are classified.

(Algorithm 3)

(Algorithm 4)

In addition, the above steps 1–3 also constitute a fast algorithm for computing the generalized inverse matrix of a quaternion matrix.

Example 5.1

Find the general solution to the system of linear equations Ax = b by the full rank decomposition of matrix A, in which

$$ A=\left[\begin{array}{ccc} 1+2\mathrm{i} & 3\mathrm{j} & 2\mathrm{k}\\ 0 &4+2\mathrm{k} &\text{i+j}\\ 2+4\mathrm{i} &8\text{i+12j}&-3+3\mathrm{k} \end{array}\right],\ b=\left[\begin{array}{ccc} -3+14\text{j+5k} \\ 3+13\text{i+5j+k}\\-29+5\text{i+33j-11k} \end{array}\right]. $$
  1. (1)

    By (2.1) and (2.2), we have

Run Algorithm 2, and we get

$$ B_{1}=\begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 2 & \mathrm{i} \end{bmatrix},\ B_{2}=\begin{bmatrix} 0 & 0 \\ 0 & 0 \\ 0 & 2 \end{bmatrix},\ C_{1}=\begin{bmatrix} 1+2\mathrm{i} & 0 & 0 \\ 0 & 4 & \mathrm{i} \end{bmatrix},\ C_{2}=\begin{bmatrix} 0 & 3 & 2i \\ 0 & 2\mathrm{i} & 1 \end{bmatrix}, $$

i.e., rank(A) = 2, and the full rank decomposition is A = BC, in which

$$ B=\left[\begin{array}{cc} 1 & 0 \\ 0 & 1 \\ 2 & \mathrm{i}+2\mathrm{j} \end{array}\right],\ C=\left[\begin{array}{ccc} 1+2\mathrm{i} & 3\mathrm{j} & 2\mathrm{k} \\ 0 & 4+2\mathrm{k} & \mathrm{i}+\mathrm{j} \end{array}\right]. $$
  1. (2)

    By (1) and Algorithm 4, it is easy to get

    $$ A^{\dagger}b=\left[\begin{array}{ccc} \text{$-$0.2286+0.4929i+2.8357j$-$1.0286k}\\ \text{ 1.4357+2.0143i$-$0.2429j+0.0929k}\\ \text{ 3.2000+2.4000i+0.1571j+1.2571k} \end{array}\right], $$

then the general solution of the linear equations Ax = b is x = A†b + (I − A†A)y, in which y ∈ Q3 is an arbitrary quaternion vector.
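The solution formula x = A†b + (I − A†A)y can be checked in the complex special case (a numpy sketch under our own naming, not the paper's Algorithm 4):

```python
import numpy as np

rng = np.random.default_rng(6)

# A rank-deficient system A x = b with A given by its full rank factors.
B = rng.standard_normal((4, 2)) + 1j*rng.standard_normal((4, 2))
C = rng.standard_normal((2, 3)) + 1j*rng.standard_normal((2, 3))
A = B @ C
x_true = rng.standard_normal(3) + 1j*rng.standard_normal(3)
b = A @ x_true                                   # a compatible right-hand side

# A^+ = C^*(CC^*)^{-1}(B^*B)^{-1}B^*, per Theorem 2.5
Bh, Ch = B.conj().T, C.conj().T
A_pinv = Ch @ np.linalg.inv(C @ Ch) @ np.linalg.inv(Bh @ B) @ Bh

# general solution x = A^+ b + (I - A^+ A) y for an arbitrary y
y = rng.standard_normal(3) + 1j*rng.standard_normal(3)
x = A_pinv @ b + (np.eye(3) - A_pinv @ A) @ y
print(np.allclose(A @ x, b))  # True
```

Every choice of y gives another exact solution, since A(I − A†A) = 0 and AA†b = b for a compatible b.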

Example 5.2

For a quaternion matrix \(A={A_{1}}+{A_{2}}\mathrm {i}+{A_{3}}\mathrm {j}+{A_{4}}\mathrm {k}\in {{\textbf {Q}}^{m \times n}}, x={x_{1}}+{x_{2}}\mathrm {i}+{x_{3}}\mathrm {j}+{x_{4}}\mathrm {k}\in {{\textbf {Q}}^{n \times 1}}\), let m = n = 10 : 10 : 500, A1 = rand(m, n), A2 = rand(m, n), A3 = rand(m, n), A4 = rand(m, n), x1 = rand(n,1), x2 = rand(n,1), x3 = rand(n,1), x4 = rand(n,1), b = Ax. We calculate the CPU time and relative error \(\left (\eta _{k}=\frac {\left \|{A^{\dagger }b-{x}}\right \|_{F}}{\left \|{x}\right \|_{F}}\right )\) of the two algorithms for the solution of the quaternion linear equations Ax = b of order 10 to 500, as follows (Figs. 3 and 4).

Fig. 3 CPU time scatter diagram

Fig. 4 Error scatter diagram of Algorithms 3 and 4

Example 5.2 shows that, for solving linear equations, the running time and computational error of Algorithm 4 are also far smaller than those of Algorithm 3. Moreover, for high-dimensional linear equations, Algorithm 3 is difficult to carry out, while Algorithm 4 is still fast. In the above figures, in order to compare the two algorithms, we only performed computations up to dimension 500; in fact, Algorithm 4 can handle higher dimensions.

Example 5.3

Given a color face recognition training data set (Fig. 5), the corresponding test samples (Figs. 6 and 7) are classified. (Color image classification for different subjects; images of the subjects are from the Faces95 database.)

Fig. 5 The set of training samples 1, c = 4

Fig. 6 The test sample 1

Fig. 7 The test sample 2

Let A = R i + G j + B k be the quaternion representation matrix of the training data set (Fig. 5) and b = Rbi + Gbj + Bbk be the quaternion representation matrix of a test sample (Figs. 6 and 7), where R, Rb, G, Gb, B, Bb are real matrices standing for the red, green, and blue channels. The training set is divided into c (c = 4) categories. The goal is to predict the label of b from the given c classes of training samples. For each class i, let \(\delta _{i}: \textbf {R}^{n}\rightarrow \textbf {R}^{n}\) be the characteristic function which selects the coefficients associated with the i-th class. The test sample b can be approximated by \(\hat {b}_{i}=A\delta _{i}(x) \), which uses only the coefficients from class i. By using Algorithm 4 and the reconstruction residual \(r_{i}(b)=\| b-\hat {b}_{i} \|_{2}\) for the i-th class, we obtain the approximate sparse representation and classification results of the test images (Figs. 8, 9, 10 and 11).
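The residual-based classification just described can be sketched on synthetic real-valued data (all data and names below are made up for illustration; real quaternion image data would be handled through Algorithm 4 as in the text):

```python
import numpy as np

rng = np.random.default_rng(7)

n_feat, per_class, n_class = 50, 5, 4
centers = 5.0 * rng.standard_normal((n_class, n_feat))   # one "face" per class
# training matrix A: columns grouped by class (class c contributes per_class columns)
A = np.hstack([(centers[c] + 0.05*rng.standard_normal((per_class, n_feat))).T
               for c in range(n_class)])
labels = np.repeat(np.arange(n_class), per_class)

b = centers[2] + 0.05*rng.standard_normal(n_feat)        # a test sample from class 2

x, *_ = np.linalg.lstsq(A, b, rcond=None)                # representation coefficients
# r_c(b) = ||b - A delta_c(x)||: reconstruct using class-c coefficients only
residuals = [np.linalg.norm(b - A @ np.where(labels == c, x, 0.0))
             for c in range(n_class)]
print(int(np.argmin(residuals)))                         # 2: the correct class
```

The predicted label is the class whose partial reconstruction leaves the smallest residual, mirroring r_i(b) above.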

Fig. 8 The approximate test sample 1

Fig. 9 Figure 6 belongs to the first category of the set of training samples

Fig. 10 The approximate test sample 2

Fig. 11 Figure 7 belongs to the fourth category of the set of training samples

In Example 5.3, four subjects were randomly selected from the Faces95 database, and color face sparse representation was performed on the test images of the first and fourth subjects, respectively. The results show that the classification is accurate.

Example 5.4

Given a color face recognition training data set (Fig. 12), the corresponding test data (Fig. 13) are classified. (Color image classification of different micro-expressions of the same subject.) By using Algorithm 4, we obtain the approximate sparse representation and classification result of the test image (Figs. 14 and 15).

Fig. 12 The set of training samples 2, c = 4

Fig. 13 The test sample 1

Fig. 14 The approximate test sample 1

Fig. 15 Figure 13 belongs to the first category of the set of training samples

In Example 5.4, several micro-expression data sets of one subject, randomly selected from the internet, are used to sparsely represent different expression images of the same subject. The results show that the classification method is accurate.

6 Conclusions

In this paper, based on the Gauss elimination method for quaternion matrices, we discuss the direct Gauss transformation (based on the quaternion toolbox QTFM) and the complex structure-preserving Gauss transformation of a quaternion matrix. By using the Gauss transformation, we obtain a direct algorithm and a complex structure-preserving algorithm for the full rank decomposition of a quaternion matrix. The algorithm can also be applied to the sparse representation classification of color images. The feasibility and efficiency of the proposed algorithms are demonstrated by the corresponding application examples.