Abstract
In this paper, within the quaternion matrix framework, the logarithmic norm of quaternion matrices is employed to approximate the rank. Unlike conventional sparse representation techniques, which treat the RGB channels separately, quaternion-based methods preserve image structure by representing a color image as a pure quaternion matrix. Leveraging the logarithmic norm, factorization and truncation techniques can be applied for efficient image recovery. The resulting models are optimized within an alternating minimization framework, accompanied by a careful mathematical analysis that ensures convergence. Finally, numerical examples demonstrate the effectiveness of the proposed algorithms.
1 Introduction
In today’s era of information explosion, data volumes across various industries are experiencing exponential growth, posing significant challenges in storage, processing, and analysis. High-dimensional data, such as images [1], videos [2], sensor readings [3] and genomic sequences [4], are becoming increasingly prevalent in scientific research [5], industrial applications [6] and daily life. However, high-dimensional data often come with substantial computational and storage costs, along with the potential for containing a considerable amount of redundant information. To address this challenge, researchers have turned to dimensionality reduction techniques aimed at representing complex high-dimensional data in lower-dimensional spaces while preserving their fundamental structure and information [7, 8].
A prominent dimensionality reduction technique is the sparse representation model [9,10,11], which aims to reduce data dimensionality by finding a sparse set of linear combinations to express the data [12]. By emphasizing the sparsity of the representation matrix, redundant information in the data can be eliminated, resulting in a more concise and interpretable data representation. The application of rank-constrained optimization problems is widespread. For example, in the field of computer vision, sparse representation has been used for tasks such as image denoising [13], image restoration [14], and image classification [15]. In signal processing, sparse representation models have been applied to audio and speech processing for tasks like source separation [16], speech enhancement [17], and audio compression [18]. Additionally, sparse representation models find wide applications in neuroscience [19], bioinformatics [20], genomics [21], and other fields.
In prior research, the nuclear norm of a matrix has been shown to be an effective convex surrogate for the rank function. However, the nuclear norm treats every singular value equally, even though larger singular values typically carry richer and more useful information than smaller ones [22, 23]. To address this limitation, several non-convex alternatives have been proposed, such as the truncated nuclear norm, the weighted nuclear norm, and the weighted Schatten-p norm [22,23,24]. Additionally, the log-determinant surrogate has been proven to approximate the rank more accurately than the nuclear norm [25]. These strategies all rest on approximate optimization of the rank, in which the singular value decomposition (SVD) plays an indispensable role. However, the high complexity of SVD computations becomes a limitation for high-dimensional or large datasets [26]. To overcome this challenge, low-rank matrix factorization is applied to the sparse representation of matrices [26]. Specifically, decomposing the target matrix into two smaller factor matrices captures its low-rank property. This approach not only satisfies the low-rank requirement but also admits fast numerical optimization.
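To make the contrast concrete, the following sketch computes both the nuclear norm and a logarithmic surrogate from the singular values of a real matrix (the smoothing offset `eps` is an assumed illustrative constant, not a value taken from the cited works):

```python
import numpy as np

def rank_surrogates(X, eps=1e-2):
    """Return (nuclear norm, logarithmic surrogate sum(log(sigma_i + eps)))."""
    s = np.linalg.svd(X, compute_uv=False)
    return s.sum(), np.log(s + eps).sum()

# A 5 x 5 matrix of rank 2: the two large singular values dominate the
# nuclear norm, while the logarithmic surrogate damps their influence.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 5))
nuc, log_s = rank_surrogates(X)
print(nuc, log_s)
```

Because the logarithm grows slowly, the surrogate is far less sensitive to the magnitude of the dominant singular values than the nuclear norm is.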
Currently, sparse representation methods for color images primarily focus on processing two-dimensional data. Consequently, when dealing with color images, it is common to process the RGB channels separately. However, this approach may lead to overlooking the intrinsic relationships among the three channels, resulting in the loss of their potential connections.
Based on the analysis above, there are two main issues with low-rank structured sparse representation of color images.
The first challenge in sparse representation of color images is that the correlation between RGB channels cannot be adequately maintained
Consequently, researchers have turned their attention to color image processing methods based on quaternion frameworks. Due to the unique structure of quaternion, each pixel in a color image can be represented using pure quaternions, forming a quaternion matrix. This approach has been widely applied in areas such as face recognition [27, 28], image edge detection [29], and image denoising [30, 31]. Other applications of color image processing can be found in references [32,33,34].
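As a minimal illustration (a sketch, not the paper's implementation), a color image can be packed into a pure quaternion matrix by placing the R, G, B channels in the three imaginary parts; here the quaternion matrix is stored as a (4, m, n) real array:

```python
import numpy as np

def rgb_to_quaternion(img):
    """Encode an RGB image (m, n, 3) as a pure quaternion matrix.

    The quaternion matrix is stored as a (4, m, n) real array holding the
    real, i, j, k parts; the real part is zero for a pure quaternion.
    """
    m, n, _ = img.shape
    Q = np.zeros((4, m, n))
    Q[1], Q[2], Q[3] = img[..., 0], img[..., 1], img[..., 2]  # R->i, G->j, B->k
    return Q

def quaternion_to_rgb(Q):
    """Inverse mapping: drop the (zero) real part, stack i, j, k as RGB."""
    return np.stack([Q[1], Q[2], Q[3]], axis=-1)

img = np.random.default_rng(1).random((4, 4, 3))
Q = rgb_to_quaternion(img)
assert np.allclose(quaternion_to_rgb(Q), img)  # lossless round trip
assert np.all(Q[0] == 0)                       # pure quaternion: zero real part
```

This encoding keeps the three channels coupled inside one algebraic object, which is the point of the quaternion framework.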
The second challenge is how to accurately describe the underlying low-rank structure
In the sparse representation of color images in quaternion space, many studies have relied on non-convex functions to approximate the rank of quaternion matrices, including the weighted Schatten-p norm and the Laplacian approximation. These functions highlight the advantages of quaternion matrices, which have been validated both experimentally and theoretically. However, these methods require a full QSVD of the quaternion matrix, which incurs high computational costs. To address this challenge, researchers have extended low-rank matrix factorization to the quaternion algebra; for instance, the target quaternion matrix is decomposed into two factor quaternion matrices for low-rank quaternion matrix completion [35]. Such factorization-based methods only require optimizing two smaller quaternion matrices, which significantly reduces the computational cost.
To address the aforementioned challenges, two quaternion sparse representation models are proposed: Quaternion Logarithmic Norm Factorization Sparse Representation (QLNFSR) and Truncated Quaternion Logarithmic Norm Sparse Representation (TQLNSR). Both models are designed to approximate the rank in quaternion algebra more accurately and efficiently, thereby better exploiting the structure of color images. This paper uses the quaternion logarithmic norm as a non-convex surrogate for the rank, which describes low-rank matrices more reliably than traditional approximations such as the quaternion nuclear norm. As singular values grow, the logarithmic penalty increases ever more slowly, so large singular values are penalized leniently while smaller ones are penalized relatively more heavily. Figure 1 gives an intuitive illustration of rank approximation by the logarithmic function in the scalar case. When \(\textbf{X}=x\in \mathbb {R}\), \(\mathrm{{rank}}(x)=0\) if \(x=0\) and \(\mathrm{{rank}}(x)=1\) otherwise. Additionally, for x bounded by a positive constant M, denoted as \(|x|\le M\), \(\frac{\Vert x \Vert _{*}}{M}=\frac{\mid x \mid }{M}\) is the convex envelope of \(\mathrm{{rank}}(x)\) on the interval \(\{x\mid \,|x|\le M\}\) [36]. The slope of \(\mathrm{{rank}}(x)\) at the origin is infinite, while the convex envelope has a constant slope on \(|x|\le M\). In contrast, the slope of the logarithmic function at the origin is \(1/\delta \), where \(\delta \) is a small positive constant, so the logarithmic norm approximates \(\mathrm{{rank}}(x)\) more closely than the convex envelope [37].
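The scalar comparison behind Fig. 1 can be sketched numerically; `M` and `delta` below are assumed example values, and the normalization of the logarithmic surrogate is one common choice rather than the exact form used in [37]:

```python
import numpy as np

M, delta = 5.0, 0.1   # bound |x| <= M and small offset delta (assumed values)

def rank_scalar(x):           # rank(x) = 0 iff x == 0, else 1
    return 0.0 if x == 0 else 1.0

def convex_envelope(x):       # |x| / M, the convex envelope of rank on |x| <= M
    return abs(x) / M

def log_surrogate(x):         # log(|x|/delta + 1), normalised; slope 1/delta at 0
    return np.log(abs(x) / delta + 1) / np.log(M / delta + 1)

# Near the origin the log surrogate rises much faster than the convex
# envelope, hence it tracks the 0/1 rank function more closely.
x = 0.5
print(convex_envelope(x), log_surrogate(x), rank_scalar(x))
```

For `x = 0.5` the convex envelope is only 0.1 while the logarithmic surrogate is already close to 0.5, illustrating why the log function is the tighter approximation of the 0/1 rank indicator.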
Subsequently, the quaternion logarithmic norm is applied to two smaller quaternion matrices, which are the factor quaternion matrices of the target quaternion matrix based on the quaternion log-norm factorization algorithm. Therefore, the expensive QSVD only needs to act on the smaller factor quaternion matrices, thereby improving algorithm efficiency. In the truncated quaternion logarithmic norm approximation algorithm, the quaternion logarithmic norm is first truncated, and then the shrinkage operator of the quaternion logarithmic norm is directly applied to optimize the target quaternion matrix. Thus, the main contributions of this paper can be summarized as follows.
-
Since the rank of a quaternion matrix is unaffected by the magnitudes of its largest singular values, the truncated quaternion logarithmic norm first truncates the largest singular values and then applies the quaternion logarithmic norm to the remaining ones.
-
In order to solve the QLNFSR and TQLNSR models, we adopt the alternating direction method of multipliers (ADMM) and establish two main algorithms, the QLNFSR algorithm (Algorithm 1) and the TQLNSR algorithm (Algorithm 2). Experimental results confirm the effectiveness of the two algorithms for color image sparse representation.
This paper is organized as follows. In Section 2, we review some notations, definitions and lemmas with regard to the quaternion matrix. In Section 3, we review sparse representation methods based on low-rank matrices and introduce two strategies for low-rank sparse representation based on quaternion. In Section 4, we provide convergence analysis of the algorithms. In Section 5, the proposed algorithms have been applied to color image reconstruction. The feasibility and effectiveness of the algorithms are verified. Finally, we give some conclusions in Section 6.
Notation
In this article, \(\mathbb {R}\) and \(\mathbb {Q}\) denote the real space and quaternion space, respectively. A scalar and a matrix are written as a and A, respectively. \(\varvec{a}\) and \(\varvec{A}\) represent a quaternion number and a quaternion matrix, respectively. The symbols \((\cdot )^{*}\), \((\cdot )^{-1}\), \((\cdot )^{T}\) and \((\cdot )^{H}\) denote the conjugation, inverse, transpose and conjugate transpose, respectively. The symbols \(|\cdot |\), \(\Vert \cdot \Vert _{F}\) and \(\Vert \cdot \Vert _{*}\) are the absolute value or modulus, the Frobenius norm and the nuclear norm, respectively. The symbols \(\langle \cdot ,\cdot \rangle \), \(\textrm{tr}\{\cdot \}\) and \(\textrm{rank}(\cdot )\) denote the inner product operation, the trace and rank operators, respectively. The real part of quaternion (scalar, vector, and matrix) denotes \(\mathfrak {R}(\cdot )\). The symbol I represents the identity matrix with appropriate size.
2 Preliminaries
In this section, we recall some preliminary results that will be used in the following discussion. Firstly, we introduce the definition of quaternion.
Definition 2.1
[38] A quaternion \(\varvec{q} \in \mathbb {Q}\) is expressed as \(\varvec{q}=q_0+q_1\varvec{i}+q_2\varvec{j}+q_3\varvec{k}\),
where \(q_0, q_1, q_2, q_3\in \mathbb {R}\), and the three imaginary units \(\varvec{i}, \varvec{j}, \varvec{k}\) satisfy \(\varvec{i}^{2}=\varvec{j}^{2}=\varvec{k}^{2}=\varvec{i}\varvec{j}\varvec{k}=-1\).
One of the most important properties of quaternions is that multiplication is noncommutative under these rules; that is, \(\varvec{pq} \ne \varvec{qp}\) in general for \(\varvec{p}, \varvec{q} \in \mathbb {Q}\). For example, for \(\varvec{p}=\varvec{i}\) and \(\varvec{q}=\varvec{j}\) we have \(\varvec{pq}=\varvec{k}\) but \(\varvec{qp}=-\varvec{k}\).
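The noncommutativity can be checked directly with the Hamilton product; the following sketch represents a quaternion as a 4-vector \((q_0, q_1, q_2, q_3)\):

```python
import numpy as np

def hamilton(p, q):
    """Hamilton product of quaternions given as (w, x, y, z) arrays."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ])

i = np.array([0.0, 1.0, 0.0, 0.0])
j = np.array([0.0, 0.0, 1.0, 0.0])
k = np.array([0.0, 0.0, 0.0, 1.0])

assert np.allclose(hamilton(i, j), k)              # ij = k
assert np.allclose(hamilton(j, i), -k)             # ji = -k, so ij != ji
assert np.allclose(hamilton(i, i), [-1, 0, 0, 0])  # i^2 = -1
```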
Definition 2.2
[38] A quaternion matrix \(\varvec{A} \in \mathbb {Q}^{m\times n}\) is expressed as \(\varvec{A}=A_0+A_1\varvec{i}+A_2\varvec{j}+A_3\varvec{k}\),
where \(A_0, A_1, A_2, A_3\in \mathbb {R}^{m\times n}\). The conjugate transpose of \(\varvec{A}\) is defined as \(\varvec{A}^{H}=A_0^{T}-A_1^{T}\varvec{i}-A_2^{T}\varvec{j}-A_3^{T}\varvec{k}\).
We get the following definitions about the norm of the quaternion and quaternion matrix.
Definition 2.3
[38] Let \(\varvec{a} \in \mathbb {Q}\) and \(\varvec{A} \in \mathbb {Q}^{m\times n}\). The norm of a quaternion \(\varvec{a}=a_0+a_1\varvec{i}+a_2\varvec{j}+a_3\varvec{k}\) is defined as \(|\varvec{a}|=\sqrt{a_0^{2}+a_1^{2}+a_2^{2}+a_3^{2}}\), and the Frobenius norm of the quaternion matrix \(\varvec{A}=A_0+A_1\varvec{i}+A_2\varvec{j}+A_3\varvec{k}=(\varvec{a}_{ij})\) is defined as \(\Vert \varvec{A}\Vert _{F}=\sqrt{\sum _{i=1}^{m}\sum _{j=1}^{n}|\varvec{a}_{ij}|^{2}}\).
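Under these component conventions, the quaternion modulus and the Frobenius norm of a quaternion matrix (stored here as a (4, m, n) real array, an illustrative layout rather than the paper's toolbox representation) can be computed as:

```python
import numpy as np

def quat_modulus(a):
    """|a| = sqrt(a0^2 + a1^2 + a2^2 + a3^2) for a quaternion (a0, a1, a2, a3)."""
    return np.sqrt(sum(c**2 for c in a))

def quat_fro_norm(A):
    """Frobenius norm of a quaternion matrix stored as (4, m, n):
    sqrt of the sum of squared moduli of all entries."""
    return np.sqrt(np.sum(A**2))

a = (1.0, 1.0, 1.0, 1.0)
print(quat_modulus(a))                     # sqrt(4) = 2.0
A = np.zeros((4, 2, 2))
A[0] = np.eye(2)                           # A = I as a (real) quaternion matrix
print(quat_fro_norm(A))                    # sqrt(2)
```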
Below we give the definition of the generalized inverse of a quaternion matrix.
Definition 2.4
[39] For given \(\varvec{A}\in \mathbb {Q}^{m \times n}\), the generalized inverse of the quaternion matrix \(\varvec{A}\) is defined as \(\varvec{X}\), which satisfies the following conditions: \(\varvec{A}\varvec{X}\varvec{A}=\varvec{A}\), \(\varvec{X}\varvec{A}\varvec{X}=\varvec{X}\), \((\varvec{A}\varvec{X})^{H}=\varvec{A}\varvec{X}\), \((\varvec{X}\varvec{A})^{H}=\varvec{X}\varvec{A}\). \((2.1)\)
We denote \(\varvec{X}\) by \(\varvec{A}^{\dagger }\).
Specially, if \(\varvec{A}\) is invertible, it is clear that \(\varvec{X}=\varvec{A}^{-1}\) trivially satisfies (2.1).
Definition 2.5
(QNN, [40]) Given \(\varvec{X}\in \mathbb {Q}^{m \times n}\), the nuclear norm of the quaternion matrix is defined as \(\Vert \varvec{X}\Vert _{*}=\sum _{i=1}^{\min (m,n)}\sigma _{i}(\varvec{X})\),
where \(\sigma _{i}(\varvec{X})\) denotes the i-th singular value of \(\varvec{X}\).
Lemma 2.1
(Binary factorization framework, [35]) Let the quaternion matrix \(\varvec{A}\in \mathbb {Q}^{m \times n}\) with \(\textrm{rank}(\varvec{A})=r\le d\). Then the binary factorization framework is devised as \(\varvec{A}=\varvec{U}\varvec{V}^{H}\),
where \(\varvec{U}\in \mathbb {Q}^{m \times d}\) and \(\varvec{V}\in \mathbb {Q}^{n \times d}\) such that \(\textrm{rank}(\varvec{U})=\textrm{rank}(\varvec{V})=r\).
Lemma 2.2
(Quaternion Singular Value Decomposition (QSVD), [19]) Let \(\varvec{A} \in \mathbb {Q}^{m \times n}\), then there exist two unitary quaternion matrices \(\varvec{U} \in \mathbb {Q}^{m \times m}\) and \(\varvec{V} \in \mathbb {Q}^{n \times n}\) such that
where \(\varvec{\Sigma }=\textrm{diag}(\sigma _{1}, \sigma _{2},\cdots ,\sigma _{l})\), \(\sigma _{1}\ge \sigma _{2}\ge \cdots \ge \sigma _{l}\ge 0\), and the diagonal elements of \(\varvec{\Sigma }\) are all the nonnegative singular values of the matrix \(\varvec{A}\) and \(l=\textrm{min}(m,n)\).
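One standard way to obtain quaternion singular values numerically (a sketch, not necessarily the routine used in the paper's toolbox) is via the complex adjoint matrix \(\chi (\varvec{A})\in \mathbb {C}^{2m\times 2n}\), whose singular values are those of \(\varvec{A}\), each repeated twice:

```python
import numpy as np

def complex_adjoint(A0, A1, A2, A3):
    """Complex adjoint chi(A) of A = A0 + A1 i + A2 j + A3 k."""
    Ac = A0 + 1j * A1
    Bc = A2 + 1j * A3
    return np.block([[Ac, Bc], [-Bc.conj(), Ac.conj()]])

def quaternion_singular_values(A0, A1, A2, A3):
    """Singular values of the quaternion matrix via its complex adjoint.

    The singular values of chi(A) occur in adjacent equal pairs (sorted
    descending); keeping one value per pair gives the quaternion spectrum.
    """
    s = np.linalg.svd(complex_adjoint(A0, A1, A2, A3), compute_uv=False)
    return s[::2]

rng = np.random.default_rng(2)
parts = [rng.standard_normal((3, 3)) for _ in range(4)]
s = quaternion_singular_values(*parts)
print(s)   # nonnegative, non-increasing
```

This costs an SVD of a 2m x 2n complex matrix, which is exactly the expense the factorization-based algorithms in this paper try to avoid on the full-size matrix.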
Definition 2.6
(QLN, [41]) Let \(\varvec{X}\in \mathbb {Q}^{m \times n}\). The logarithmic norm of the quaternion matrix with \(0\le p\le 1\) and \(\epsilon > 0\) is defined as
where \(\sigma _{i}(\varvec{X})\) denotes the i-th singular value of \(\varvec{X}\).
Lemma 2.3
[41] Let the quaternion matrix \(\varvec{X}\in \mathbb {Q}^{m \times n}\) with \(\textrm{rank}(\varvec{X})=r\le d \le \textrm{min}\{m,n\}\). There exist \(\varvec{U}\in \mathbb {Q}^{m \times d}\) and \(\varvec{V}\in \mathbb {Q}^{n \times d}\) such that \(\varvec{X}=\varvec{U}\varvec{V}^{H}\). Then we have:
Lemma 2.4
(Quaternion Logarithmic Singular Value Thresholding (QLSVT), [41]) Let the quaternion matrix \(\varvec{A}\in \mathbb {Q}^{m \times n}\) and \(\lambda > 0\). If QSVD of \(\varvec{A}\) is \(\varvec{A}=\varvec{U}_{\tiny {\varvec{A}}}\varvec{\Sigma }_{\tiny {\varvec{A}}}\varvec{V}^{H}_{\tiny {\varvec{A}}}\), then the closed solution of the problem
is provided by \(\varvec{X}=\varvec{U}_{\tiny {\varvec{A}}} \varvec{\Delta }_{\lambda ,\epsilon ,{\tiny {\varvec{A}}}}\varvec{V}^{H}_{\tiny {\varvec{A}}}\). Here, the soft thresholding operator \(\varvec{\Delta }_{\lambda ,\epsilon ,\tiny {\varvec{A}}}\) is defined as:
with
where \(\delta =(x-\epsilon )^{2}-4(\lambda -x\epsilon )\) and the function \(h: \mathbb {R}^{+}\rightarrow \mathbb {R}^{+}\) is defined as \(h(a):=\frac{1}{2}(a-x)^{2}+\lambda \textrm{log}(a+\epsilon )\).
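The scalar shrinkage rule underlying QLSVT can be sketched as follows: the stationary points of h solve a quadratic with discriminant \(\delta \), and the operator returns either 0 or the larger root, whichever gives the smaller objective. This closed form is reconstructed from the stated h and \(\delta \), so treat it as an assumption rather than the paper's exact operator:

```python
import numpy as np

def log_shrink(x, lam, eps):
    """Scalar logarithmic shrinkage:
    argmin_{a >= 0} 0.5*(a - x)^2 + lam*log(a + eps)."""
    h = lambda a: 0.5 * (a - x) ** 2 + lam * np.log(a + eps)
    delta = (x - eps) ** 2 - 4.0 * (lam - x * eps)
    if delta <= 0:
        return 0.0                                 # no real stationary point: threshold to zero
    a_plus = 0.5 * ((x - eps) + np.sqrt(delta))    # larger root of h'(a) = 0
    if a_plus <= 0:
        return 0.0
    return a_plus if h(a_plus) <= h(0.0) else 0.0  # pick the global minimiser

print(log_shrink(0.1, 1.0, 0.1))    # small input is thresholded to 0.0
print(log_shrink(10.0, 1.0, 0.1))   # large input is only slightly shrunk
```

In QLSVT this scalar rule is applied to each singular value of \(\varvec{A}\), with the unitary factors of the QSVD left unchanged.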
3 Quaternion matrix sparse representation model
Due to the pronounced non-local self-similarity evident in the structure of visual data, often observed as low-rank features, the goal of matrix sparse representation models is to tackle image recovery problems via the following formulation:
where \(\mathrm{{rank}}({X})\) is the rank function, B is the known data matrix, A is the constraint matrix, and X is the matrix to be found. Problem (3.1) constitutes a combinatorial optimization challenge, typically addressable by optimizing convex surrogates for the rank function [42].
Problem (3.1) is the classic matrix sparse representation model, which fundamentally targets grayscale images and other two-dimensional data. When processing color images, the model in (3.1) must handle the RGB channels separately, whereas the quaternion matrix sparse representation model couples the RGB channels. It can therefore be expressed as:
where \(\mathrm{{rank}}(\varvec{X})\) is the quaternion matrix rank function, \(\varvec{B}\) is the known quaternion matrix, \(\varvec{A}\) is the constraint quaternion matrix, and \(\varvec{X}\) is the quaternion matrix to be found.
The main sparse representation model in quaternion algebra primarily focuses on low-rank minimization. Similar to the classical matrix sparse representation model, the rank function in model (3.2) is challenging to optimize. Therefore, according to Definition 2.5, the low-rank minimization model can be expressed as:
The rank of a quaternion matrix, like that of a real matrix, is a real number. As depicted in Fig. 1, QLN provides a finer approximation than QNN. Furthermore, drawing on the bi-factor surrogate theorem for the matrix logarithmic norm in [37] together with Lemma 2.1, the bi-factor surrogate theorem for QLN can be formulated. Therefore, combining Definition 2.6 and Lemma 2.3, this paper proposes two quaternion sparse representation models: Quaternion Logarithmic Norm Factorization Sparse Representation (QLNFSR) and Truncated Quaternion Logarithmic Norm Sparse Representation (TQLNSR), as follows.
3.1 Quaternion logarithmic norm factorization sparse representation
According to Definition 2.6 and Lemma 2.3, on the basis of model (3.3), the following sparse representation model based on quaternion framework is proposed:
where \(\varvec{B}\) is the known quaternion matrix, \(\varvec{A}\) is the constraint quaternion matrix, and \(\varvec{X}\) is the quaternion matrix to be found. Our aim is to minimize the disparity between \(\varvec{B}\) and \(\varvec{AX}\), while simultaneously reducing the rank of \(\varvec{X}\) to its lowest possible value. Combining the aforementioned objectives into a single equation allows us to represent (3.4) as:
It’s worth noting that (3.5) decomposes the interconnected terms, enabling them to be tackled separately. Subsequently, the challenges posed by (3.5) can be addressed using the ADMM framework.
Initially, we address the task described in (3.5) by minimizing the augmented Lagrangian function given by:
where \(\varvec{F}_{1}\), \(\varvec{F}_{2}\) and \(\varvec{F}_{3}\) are Lagrange multipliers, \(\alpha >0\) is the penalty parameter.
Updating \(\varvec{M}\) and \(\varvec{N}\)
In the \((k+1)\)-th iteration, while keeping the other variables at their most recent values, \(\varvec{M}\) and \(\varvec{N}\) are determined as the optimal solutions of the subsequent problems:
Let
and
Applying the relevant principles regarding quaternion matrix derivatives as outlined in [43], the gradient of \(\mathcal {Q}(\varvec{M})\) can be calculated as
By setting (3.8) equal to zero, the solution can be derived as
Utilizing a comparable approach, we can derive the optimal solution for \(\varvec{N}^{k +1}\) as follows:
Updating \(\varvec{Y}\) and \(\varvec{Z}\)
In the \((k+1)\)-th iteration, while maintaining the remaining variables at their most recent values, \(\varvec{Y}^{k+1}\) and \(\varvec{Z}^{k+1}\) represent the optimal solutions of the subsequent problems:
According to Lemma 2.4, we can utilize the QLSVT technique to update \(\varvec{Y}^{k +1}\) and \(\varvec{Z}^{k+1}\) in reference to (3.11), that are
where \(\varvec{S}_{1}=\varvec{M}^{k+1}+\frac{1}{\alpha ^{k}}\varvec{F}^{k}_{1}\) and \(\varvec{S}_{2}=\varvec{N}^{k+1}+\frac{1}{\alpha ^{k}}\varvec{F}^{k}_{2}\). Let \(\varvec{S}_{1}=\varvec{U}_{\tiny {\varvec{S}_1}}\varvec{\Sigma }_{\tiny {\varvec{S}_1}}\varvec{V}^{H}_{\tiny {\varvec{S}_1}}\) and \(\varvec{S}_{2}=\varvec{U}_{\tiny {\varvec{S}_2}}\varvec{\Sigma }_{\tiny {\varvec{S}_2}}\varvec{V}^{H}_{\tiny {\varvec{S}_2}}\) be the QSVD of quaternion matrices \(\varvec{S}_{1}\) and \(\varvec{S}_{2}\), respectively.
Updating \(\varvec{X}\)
In the \((k+1)\)-th iteration, fixing the other variables at their latest values, \(\varvec{X}\) is the optimal solutions of the following problem:
Let
The gradient of \(\mathcal {H}(\varvec{X})\) can be calculated as
Setting (3.14) to zero, we can obtain a unique solution
Updating \(\varvec{F}_{1}\), \(\varvec{F}_{2}\), \(\varvec{F}_{3}\) and \(\alpha \)
The update formats are as follows:
Algorithm 1 outlines the complete process, detailing each step sequentially.
3.2 Truncated quaternion logarithmic norm sparse representation
The truncated nuclear norm achieves a better approximation of the rank function than the nuclear norm. Motivated by this property, this paper combines the truncation technique with QLN; the definition of the truncated logarithmic norm of a quaternion matrix is introduced below.
Definition 3.1
(TQLN, [10]) Given \(\varvec{X}\in \mathbb {Q}^{m \times n}\), the truncated logarithmic norm of the quaternion matrix with \(0\le p\le 1\) and \(\epsilon > 0\) is defined as the sum of the logarithmic function of the \(\textrm{min}(m,n)-r\) smallest singular values:
where \(\sigma _{i}(\varvec{X})\) denotes the i-th singular value of \(\varvec{X}\).
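Given the singular values, TQLN keeps only the \(\textrm{min}(m,n)-r\) smallest ones inside the logarithmic penalty. A sketch, taking \(p=1\) and an assumed constant `eps`:

```python
import numpy as np

def truncated_log_norm(sigma, r, eps=1e-2):
    """Sum of log(sigma_i + eps) over all but the r largest singular values."""
    s = np.sort(sigma)[::-1]          # non-increasing order
    return np.log(s[r:] + eps).sum()

sigma = np.array([10.0, 5.0, 0.5, 0.01])
print(truncated_log_norm(sigma, r=2))   # only the two smallest values are penalised
```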
Given that the largest singular values do not affect the rank, they are disregarded in TQLN. The focus then shifts to optimizing the smallest \(\textrm{min}(m,n)-r\) singular values to achieve a more precise low-rank estimation. According to the TQLN principle, the completion process based on the low-rank minimization model (3.2) can be expressed as follows:
Lemma 3.1
[41] Let \(\varvec{X} \in \mathbb {Q}^{m \times n}\), and let \(\varvec{U} \in \mathbb {Q}^{r \times m}\) and \(\varvec{V} \in \mathbb {Q}^{r \times n}\) satisfy \(\varvec{U}\varvec{U}^{H}=I_{r}\) and \(\varvec{V}\varvec{V}^{H}=I_{r}\), where r is any integer with \(r\le \textrm{min}(m,n)\). Then
Based on Definition 3.1 and Lemma 3.1, we introduce a sparse representation model utilizing the quaternion-based framework:
The procedure is outlined in Algorithm 2.
In Algorithm 2, \(\varvec{C}^{k}\) and \(\varvec{D}^{k}\) are first obtained by QSVD of \(\varvec{X}^{k}\). Next, we will focus on \(\textbf{Step}~5\) of Algorithm 2, which can be expressed as the following formula:
It’s worth noting that the problem (3.20) can be addressed using the ADMM framework. We begin by minimizing the augmented Lagrangian function provided below:
where \(\alpha \) and \(\varvec{F}\) are a positive penalty parameter and a Lagrange multiplier, respectively.
Updating \(\varvec{K}\)
In the \((k+1)\)-th iteration, fixing the other variables at their latest values, \(\varvec{K}\) is the optimal solution of the following problem:
Let
Applying the relevant principles regarding quaternion matrix derivatives as outlined in [43], the gradient of \(\mathcal {W}(\varvec{K})\) can be calculated as
Setting (3.23) to zero, we can obtain a unique solution
Updating \(\varvec{X}\)
In the \((k+1)\)-th iteration, while keeping the other variables constant at their most recent values, \(\varvec{X}^{k+1}\) represents the optimal solution of the subsequent problem:
By Lemma 2.4, the QLSVT can be applied to (3.25) for updating \(\varvec{X}^{k +1}\) when \(p=1\), that is
where \(\varvec{S}_{3}=\varvec{K}^{k+1}+\frac{1}{\alpha ^{k}}\varvec{F}^{k}\). Let \(\varvec{S}_{3}=\varvec{U}_{\tiny {\varvec{S}_3}}\varvec{\Sigma }_{\tiny {\varvec{S}_3}}\varvec{V}^{H}_{\tiny {\varvec{S}_3}}\) be the QSVD of quaternion matrices \(\varvec{S}_{3}\).
Updating \(\varvec{F}\) and \(\alpha \)
The update formats are as follows:
Algorithm 3 outlines the complete process, detailing each step sequentially.
The termination condition for Algorithms 1, 2 and 3 is defined as the following relative error:
where \(\textrm{tol}>0\) is the stopping tolerance. In the experiments, we set \(\textrm{tol}=10^{-4}\).
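A sketch of this stopping test follows; since the exact normalization of the relative error is not reproduced above, the denominator used here is an assumption:

```python
import numpy as np

def relative_error(X_new, X_old):
    """Relative change ||X^{k+1} - X^k||_F / max(||X^k||_F, 1),
    a common form of the stopping test."""
    return np.linalg.norm(X_new - X_old) / max(np.linalg.norm(X_old), 1.0)

tol = 1e-4
X_old = np.ones((3, 3))
X_new = X_old + 1e-6
print(relative_error(X_new, X_old) < tol)   # iterations would stop here
```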
4 Convergence analysis
No definite convergence property of ADMM has been established for non-convex problems (or for convex problems involving more than two blocks of variables), even in the real number field. Therefore, for the challenging problems (3.5) and (3.20), we empirically demonstrate their convergence behavior. In addition, we provide a weak convergence property for problem (3.5) (and similarly for problem (3.20)) under mild conditions, as outlined in the following theorems.
Theorem 4.1
Let \(\Theta _{1}:=(\varvec{X},\varvec{Y},\varvec{Z},\varvec{M},\varvec{N},\varvec{F}_{1},\varvec{F}_{2},\varvec{F}_{3})\) and \(\{\Theta _{1}^{k}\}^{\infty }_{k=1}\) be generated by Algorithm 1. Assume that \(\{\Theta _{1}^{k}\}^{\infty }_{k=1}\) is bounded, and \(\{\alpha ^{k}\}^{\infty }_{k=1}\) is non-decreasing and bounded. Then,
(1) \(\{\varvec{X}^{k}\},\{\varvec{Y}^{k}\},\{\varvec{Z}^{k}\},\{\varvec{M}^{k}\}\) and \(\{\varvec{N}^{k}\}\) are Cauchy sequences;
(2) any accumulation point of \(\{\Theta _{1}^{k}\}^{\infty }_{k=1}\) satisfies the Karush-Kuhn-Tucker (KKT) conditions for the problem (3.5).
Proof
(1) According to (3.16), we have
Due to the assumptions that the quaternion matrix sequences \(\{\varvec{F}_{1}^{k}\},\{\varvec{F}_{2}^{k}\}\) and \(\{\varvec{F}_{3}^{k}\}\) are bounded, we have
which imply that
Hence, \(\{(\varvec{X}^{k},\varvec{Y}^{k},\varvec{Z}^{k},\varvec{M}^{k},\varvec{N}^{k})\}\) indeed approaches a feasible solution.
Next, we show that \(\{\varvec{M}^{k}\}\) and \(\{\varvec{N}^{k}\}\) are Cauchy sequences. Note that
Then, from (3.12) and (3.13), we have
and
Based on (4.1) and (4.2), we can respectively obtain
and
Recall that \(\alpha ^{k}=\textrm{min}(\beta \alpha ^{k-1},\alpha _{\max })\). Since \(\{\alpha ^{k}\}^{\infty }_{k=1}\) is non-decreasing and bounded, we have
where the constant \(\eta _{1}\) is defined as
And it has
where the constants \(\eta _{2}\) and \(\eta _{3}\) are defined as
and
From (4.5) and (4.6), we know that \(\textrm{lim} _{k \rightarrow \infty }\Vert \varvec{M}^{k +1}-\varvec{M}^{k}\Vert _{F}=0\) and \(\textrm{lim} _{k \rightarrow \infty }\Vert \varvec{N}^{k +1}-\varvec{N}^{k}\Vert _{F}=0\). Hence, \(\{\varvec{M}^{k}\}\) and \(\{\varvec{N}^{k}\}\) are Cauchy sequences. Similarly, one can verify that \(\{\varvec{X}^{k}\}\) is a Cauchy sequence.
Next, we establish that the sequences \(\{\varvec{Y}^{k}\}\), \(\{\varvec{Z}^{k}\}\) and \(\{\varvec{X}^{k}\}\) are Cauchy sequences. Let \(\varvec{U}_{y}^{k}\varvec{\Sigma }_{y}^{k}(\varvec{V}_{y}^{H})^{k}\) be the QSVD of the matrix \(\varvec{M}^{k}+\frac{1}{\alpha ^{k}}\varvec{F}^{k}_{1}\) in the \((k+1)\)-th iteration. Then, utilizing the QLSVT, we can get:
The soft thresholding operator \(\varvec{\Delta }_{\lambda ,\epsilon ,\tiny {\varvec{A}}}\) is defined as:
with
where \(\delta =(x-\epsilon )^{2}-4(\lambda -x\epsilon )\) and the function \(h: \mathbb {R}^{+}\rightarrow \mathbb {R}^{+}\) is defined as \(h(a):=\frac{1}{2}(a-x)^{2}+\lambda \textrm{log}(a+\epsilon )\). Let \(x_{0}\) denote an arbitrary singular value \((x_{0}\ge 0)\) and let \(g(\lambda ,x_{0})=x_{0}-l_{\lambda ,\epsilon }(x_{0})\); then we can obtain \(\textrm{max}\big (g(\lambda ,x_{0})\big )=2\sqrt{\lambda }\) as follows.
Case 1: \(\delta \le 0\). From the definition of \(\delta \), we have \(0\le x_{0}\le 2\sqrt{\lambda }-\epsilon \). Moreover, \({l}_{\lambda ,\epsilon }(x_{0})=0\) in this case, so we have \(\textrm{max}\big (g(\lambda ,x_{0})\big )=2\sqrt{\lambda }\).
Case 2: \(\delta > 0\). In this case, \(x_{0}> 2\sqrt{\lambda }-\epsilon \) and we have
Then it holds
It follows that \(g(\lambda ,x_{0})<g(\lambda ,2\sqrt{\lambda }-\epsilon )=\sqrt{\lambda }\). For \(\alpha ^{k}>0\), from (3.16), we have
Similarly, one can also verify that \(\{\varvec{Z}^{k}\}\) and \(\{\varvec{X}^{k}\}\) are Cauchy sequences.
(2) Let \((\varvec{Y}_{*},\varvec{Z}_{*},\varvec{M}_{*},\varvec{N}_{*},\varvec{X}_{*})\) be a stationary point of (3.5). Then, it satisfies the following KKT conditions
Subsequently, we confirm that every limit point of \((\varvec{Y}^{k},\varvec{Z}^{k},\varvec{M}^{k},\varvec{N}^{k},\varvec{X}^{k})\) satisfies the aforementioned KKT conditions.
From (3.11) and (3.13), we can respectively obtain
Since \(\{\varvec{Y}^{k}\}, \{\varvec{Z}^{k}\}, \{\varvec{M}^{k}\}\), \(\{\varvec{N}^{k}\}\) and \(\{\varvec{X}^{k}\}\) are Cauchy sequences, we can let \(\varvec{Y}^{\infty },\varvec{Z}^{\infty },\varvec{M}^{\infty },\varvec{N}^{\infty }\) and \(\varvec{X}^{\infty }\) be their accumulation points, respectively. Then, together with the results in Algorithm 1, we have that \(\varvec{Y}^{\infty }=\varvec{M}^{\infty }\), \(\varvec{Z}^{\infty }=\varvec{N}^{\infty }\), \(\varvec{X}^{\infty }=\varvec{Y}^{\infty }(\varvec{Z}^{\infty })^{H}\). Thus, when \(k\rightarrow \infty \), (4.10) becomes
Consequently, any accumulation point \(\{\varvec{Y}^{\infty },\varvec{Z}^{\infty },\varvec{M}^{\infty },\varvec{N}^{\infty },\varvec{X}^{\infty }\}\) of the sequence \(\{(\varvec{Y}^{k},\varvec{Z}^{k},\) \(\varvec{M}^{k},\varvec{N}^{k},\varvec{X}^{k})\}\) generated by Algorithm 1 indeed satisfies the KKT conditions for the problem (3.5). This proof is complete. \(\square \)
Below we give the weak convergence property of the problem (3.20), but under mild conditions, as described in the following theorem.
Theorem 4.2
Let \(\Theta _{2}:=(\varvec{K},\varvec{X},\varvec{F})\) and \(\{\Theta _{2}^{k}\}^{\infty }_{k=1}\) be generated by Algorithm 3. Assume that \(\{\Theta _{2}^{k}\}^{\infty }_{k=1}\) is bounded, and \(\{\alpha ^{k}\}^{\infty }_{k=1}\) is non-decreasing and bounded. Then,
(1) \(\{\varvec{K}^{k}\}\) and \(\{\varvec{X}^{k}\}\) are Cauchy sequences;
(2) any accumulation point of \(\{\Theta _{2}^{k}\}^{\infty }_{k=1}\) satisfies the Karush-Kuhn-Tucker (KKT) conditions for the problem (3.20).
Proof
(1) According to (3.27), we have
Due to the assumptions that \(\{\varvec{F}^{k}\}\) is bounded and \(\{\alpha ^{k}\}^{\infty }_{k=1}\) is non-decreasing and bounded, we have
which implies that
Hence, \(\{(\varvec{K}^{k},\varvec{X}^{k})\}\) indeed approaches a feasible solution.
Next, we show that \(\{\varvec{K}^{k}\}\) and \(\{\varvec{X}^{k}\}\) are Cauchy sequences. Let \(\varvec{U}_{x}^{k}\varvec{\Sigma }_{x}^{k}(\varvec{V}_{x}^{H})^{k}\) be the QSVD of \(\varvec{K}^{k}+\frac{1}{\alpha ^{k}}\varvec{F}^{k}\) in the \((k+1)\)-th iteration.
For \(\alpha ^{k}>0\), from (3.27) and the proof of Theorem 4.1, we know that
Similarly, one can also verify that \(\{\varvec{K}^{k}\}\) is a Cauchy sequence.
(2) Let \((\varvec{K}_{*},\varvec{X}_{*})\) be a stationary point of (3.20). Then, it satisfies the following KKT conditions
Subsequently, we confirm that every limit point of \((\varvec{K}^{k},\varvec{X}^{k})\) satisfies the aforementioned KKT conditions.
From (3.22) and (3.25), we can respectively obtain
Since \(\{\varvec{K}^{k}\}\) and \(\{\varvec{X}^{k}\}\) are Cauchy sequences, let \(\varvec{K}^{\infty }\) and \(\varvec{X}^{\infty }\) be their accumulation points, respectively. Then, together with the results in Algorithm 3, we have that \(\varvec{K}^{\infty }=\varvec{X}^{\infty }\). Thus, when \(k\rightarrow \infty \), (4.13) becomes
Consequently, any accumulation point \(\{\varvec{K}^{\infty },\varvec{X}^{\infty }\}\) of the sequence \(\{(\varvec{K}^{k},\varvec{X}^{k})\}\) generated by Algorithm 3 indeed satisfies the KKT conditions for the problem (3.20). \(\square \)
5 Numerical experiments
In this section, based on the discussions in Sections 3 and 4, we present some numerical examples of color image sparse representation to verify the feasibility and effectiveness of Algorithms 1-3. All algorithms were implemented in MATLAB R2020a on a personal computer with an Intel(R) Core(TM) i7-10700 CPU @ 2.90GHz and 8 GB of memory.
Let \(\varvec{A}=\textrm{rand}\left( n,n\right) +\textrm{rand}\left( n,n\right) \varvec{i}+\textrm{rand}\left( n,n\right) \varvec{j}+\textrm{rand}\left( n,n\right) \varvec{k} \in \mathbb {Q}^{n\times n}\) with \(n=20\). We employ Algorithms 1-3 to individually compute the reconstructed quaternion matrix and random matrix under the models (3.5) and (3.19). The three-dimensional correlation between the target value and the parameter values \(\alpha \) and \(\rho \) is illustrated in Figs. 2 and 3.
From Figs. 2 and 3, it’s evident that the matrices reconstructed by our proposed Algorithms 1-3 consistently yield lower target values within the model compared to the random matrices. Therefore, the feasibility of our proposed algorithms is demonstrated. We represent a color image as \(\varvec{A}=R\varvec{i}+G\varvec{j}+B\varvec{k} \in \mathbb {Q}^{m\times n}\), where R, G and B respectively represent the real matrix corresponding to the red, green and blue channels in the color image. The original color images selected in the experiments are \(64\times 64 \times 3\) pixels. Obviously, every color image matrix is a pure imaginary quaternion matrix. Using this representation method, we can make better use of the relationship between the three channels of the color image and process a color image as a whole. We compare the proposed algorithm with truncated QSVD algorithm [44] and quaternion nuclear norm algorithm [45].
Parameters and initialization setting
We set \(\varvec{B}=\varvec{A}\), \(\lambda =0.05\sqrt{\textrm{max}(m,n)}\), \(\rho =0.05\), \(d=10\), \(\alpha _{\max }=10^{7}\) and \(\beta =1.03\). Let \(\alpha ^{0}=0.01\), \(\varvec{X}_{0}=\textbf{randQ}(m,n)\), \([\varvec{U}^{x},\sim ,\varvec{V}^{x}]=\textbf{svdQ}(\varvec{X}_{0})\), \(\varvec{M}^{0}=\varvec{Y}^{0}=\varvec{U}^{x}(1:d,:)\), \(\varvec{N}^{0}=\varvec{Z}^{0}=\varvec{V}^{x}(1:d,:)\) and \(\varvec{F}_1^{0}=\varvec{F}_2^{0}=\varvec{F}_3^{0}=\varvec{0}\). In TQLNSR, as the exact number of truncated singular values is unknown beforehand, experimenting with \(r=1\) can yield useful insights; adjusting r appropriately can lead to improved outcomes. The random quaternion matrix function \(\textbf{randQ}\) and the quaternion singular value decomposition function \(\textbf{svdQ}\) are from the Structure-preserving Quaternion Toolbox [46].
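The structure-preserving routines randQ and svdQ come from the toolbox [46] and are not reproduced here. As an illustrative alternative, the quaternion singular values can be obtained via the standard complex adjoint representation: writing \(\varvec{A}=C_1+C_2\varvec{j}\) with complex \(C_1,C_2\), the adjoint \(\chi (\varvec{A})=\begin{pmatrix} C_1 & C_2\\ -\overline{C_2} & \overline{C_1}\end{pmatrix}\) carries each quaternion singular value of \(\varvec{A}\) with multiplicity two. A sketch under that assumption:

```python
import numpy as np

def quat_svd_singular_values(A0, A1, A2, A3):
    """Singular values of A = A0 + A1*i + A2*j + A3*k via the complex
    adjoint: with C1 = A0 + A1*i and C2 = A2 + A3*i, the 2m-by-2n matrix
    chi(A) = [[C1, C2], [-conj(C2), conj(C1)]] has every quaternion
    singular value of A repeated twice."""
    C1 = A0 + 1j * A1
    C2 = A2 + 1j * A3
    chi = np.block([[C1, C2], [-np.conj(C2), np.conj(C1)]])
    s = np.linalg.svd(chi, compute_uv=False)  # descending, in pairs
    return s[::2]                             # one value from each pair

rng = np.random.default_rng(0)
parts = [rng.random((5, 5)) for _ in range(4)]
s = quat_svd_singular_values(*parts)
print(s)
```

This route is simple but doubles the matrix size; the structure-preserving algorithms in [46] avoid that overhead, which is why they are used in the experiments.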
Quantitative assessment
To assess the effectiveness of the proposed methods, we utilize three commonly employed quantitative quality metrics, namely the peak signal-to-noise ratio (PSNR), the mean structural similarity index (MSSIM) and the sparsity (the proportion of zero elements), in addition to evaluating visual quality. The calculation formulas are as follows.
(1) PSNR
$$\begin{aligned} \textrm{PSNR}=10\log _{10}\frac{L^{2}}{\textrm{MSE}}, \qquad \textrm{MSE}=\frac{1}{3mn}\sum _{h=1}^{3}\sum _{w=1}^{m}\sum _{u=1}^{n}\left( X_{h}(w,u)-Y_{h}(w,u)\right) ^{2}, \end{aligned}$$
where L is the maximum value of the data type of the color image and MSE is the mean square error; \(\varvec{X}=X_{1}\varvec{i}+X_{2}\varvec{j}+X_{3}\varvec{k}\in \mathbb {Q}^{m\times n}\), \(\varvec{Y}=Y_{1}\varvec{i}+Y_{2}\varvec{j}+Y_{3}\varvec{k} \in \mathbb {Q}^{m\times n}\) and (w, u) denote the original image, the reconstructed image and the pixel coordinates, respectively. m and n are the numbers of rows and columns of the image matrix, respectively.
Generally speaking, a PSNR in the range of 30 dB to 40 dB indicates good image quality; that is, the distortion is perceptible but acceptable. When the PSNR exceeds 40 dB, the image quality is excellent and the reconstructed image is very close to the original image.
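A minimal sketch of the PSNR computation for pure quaternion images, stored as \(m\times n\times 3\) arrays of the i, j, k parts (function name is ours):

```python
import numpy as np

def quat_psnr(X, Y, L=1.0):
    """PSNR between two pure quaternion images X and Y given as
    m-by-n-by-3 arrays of channel parts; L is the peak value of the
    image data type (1.0 for doubles in [0, 1], 255 for uint8)."""
    mse = np.mean((X - Y) ** 2)   # averages over all 3*m*n entries
    if mse == 0:
        return np.inf             # identical images
    return 10 * np.log10(L ** 2 / mse)

rng = np.random.default_rng(0)
X = rng.random((64, 64, 3))
Y = np.clip(X + 0.01 * rng.standard_normal(X.shape), 0, 1)
print(quat_psnr(X, Y))            # noise of std 0.01 gives roughly 40 dB
```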
(2) MSSIM
$$\begin{aligned} \textrm{MSSIM}(\varvec{X},\varvec{Y})=\frac{1}{3}\sum _{h=1}^{3}\frac{\left( 2\mu ({X_{h}})\mu ({Y_{h}})+c_{1}\right) \left( 2\sigma ({X_{h},Y_{h}})+c_{2}\right) }{\left( \mu ({X_{h}})^{2}+\mu ({Y_{h}})^{2}+c_{1}\right) \left( \sigma ({X_{h}})^{2}+\sigma ({Y_{h}})^{2}+c_{2}\right) }, \end{aligned}$$
where \(\mu ({X_{h}})/ \mu ({Y_{h}})\) and \(\sigma ({X_{h}})/\sigma ({Y_{h}})\) denote the mean value and standard deviation of \(X_{h}/Y_{h}\) with \(h=1,2,3\), and \(\sigma ({X_{h},Y_{h}})\) denotes the covariance of \(X_{h}\) and \(Y_{h}\). The constants \(c_1\) and \(c_2\) avoid zero denominators and maintain stability. The value of MSSIM ranges from 0 to 1; the closer it is to 1, the more similar the two images are.
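A simplified sketch of MSSIM using global image statistics (the standard MSSIM averages SSIM over local sliding windows; a single global window is used here for brevity, so the numbers will differ from a windowed implementation):

```python
import numpy as np

def mssim(X, Y, L=1.0):
    """Mean SSIM over the three channels of pure quaternion images,
    computed from global means, variances and covariances; c1, c2
    stabilize the ratios when denominators are near zero."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    vals = []
    for h in range(3):
        x, y = X[:, :, h], Y[:, :, h]
        mx, my = x.mean(), y.mean()
        vx, vy = x.var(), y.var()
        cov = ((x - mx) * (y - my)).mean()
        vals.append(((2 * mx * my + c1) * (2 * cov + c2)) /
                    ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))
    return float(np.mean(vals))

X = np.random.default_rng(0).random((64, 64, 3))
print(mssim(X, X))   # identical images score 1
```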
(3) Sparsity
$$\begin{aligned} \textrm{Sparsity}=1-\frac{T_{z}}{mn}, \end{aligned}$$
where m and n denote the numbers of rows and columns of the quaternion matrix, respectively, and \(T_z\) is the number of non-zero elements of the quaternion matrix. The sparsity measure ranges between 0 and 1: a value closer to 0 indicates a denser matrix, while a value closer to 1 indicates a sparser matrix.
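A minimal sketch of the sparsity measure, counting a quaternion entry as zero only when all four of its parts vanish (function name is ours):

```python
import numpy as np

def quat_sparsity(A, tol=0.0):
    """Sparsity of a quaternion matrix A = (A0, A1, A2, A3): the
    fraction of the m*n quaternion entries whose modulus is zero
    (within tol)."""
    modulus = np.sqrt(sum(part ** 2 for part in A))
    m, n = modulus.shape
    Tz = np.count_nonzero(modulus > tol)   # number of non-zero entries
    return 1 - Tz / (m * n)

A = [np.zeros((4, 4)) for _ in range(4)]
A[1][0, 0] = 1.0                           # one non-zero quaternion entry
print(quat_sparsity(A))                    # -> 0.9375, i.e. 1 - 1/16
```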
The reconstruction results in terms of PSNR, MSSIM, sparsity and CPU time (s) for the four test methods on the eight test images are reported in Table 1. A PSNR of inf indicates that the reconstructed image coincides with the original. Additionally, Fig. 4 illustrates the visual contrast between the four test methods on the eight test color images. As can be seen from Fig. 4 and Table 1, Algorithms 1-3 proposed in this paper are well suited to reconstruction after the sparse representation of color images.
6 Conclusions
In conclusion, this paper introduces a novel approach to image restoration based on the quaternion matrix framework and the logarithmic norm. By representing color images as pure quaternion matrices, the proposed method preserves image structure while approximating the rank efficiently. Leveraging factorization and truncation techniques based on the logarithmic norm ensures effective image recovery. The alternate minimization framework facilitates optimization of these techniques, and rigorous mathematical analysis validates convergence. Numerical examples demonstrate the efficacy of the proposed algorithms in practice, highlighting their potential for applications in image processing and computer vision.
Data Availability
No datasets were generated or analysed during the current study.
References
Gao, L.L., Song, J.K., Liu, X.Y., Shao, J.M., Liu, J.J., Shao, J.: Learning in high-dimensional multimedia data: the state of the art. Multimed Syst. 23, 303–313 (2017)
Kompella, V.R., Stollenga, M., Luciw, M., Schmidhuber, J.: Continual curiosity-driven skill acquisition from high-dimensional video inputs for humanoid robots. Artif. Intell. 247, 313–335 (2017)
Zhang, C., Liu, Y.A., Wu, F., Fan, W.H., Tang, J.L., Liu, H.S.: Multi-dimensional joint prediction model for IoT sensor data search. IEEE Access. 7, 90863–90873 (2019)
Tillquis, R.C., Lladser, M.E.: Low-dimensional representation of genomic sequences. J. Math. Biol. 79, 1–29 (2019)
Ray, P., Reddy, S.S., Banerjee, T.: Various dimension reduction techniques for high dimensional data analysis: a review. Artif Intell Rev. 54, 3473–3515 (2021)
Zhang, L., Lin, J., Karim, R.: An angle-based subspace anomaly detection approach to high-dimensional data: with an application to industrial fault detection. Reliab. Eng. Syst. Safe. 142, 482–497 (2015)
Reddy, G.T., Reddy, M.P.K., Lakshmanna, K., Kaluri, R.: Analysis of dimensionality reduction techniques on big data. IEEE Access. 8, 54776–54788 (2020)
Ayesha, S., Hanif, M.K., Talib, R.: Overview and comparative study of dimensionality reduction techniques for high dimensional data. Inf. Fusion. 59, 44–58 (2020)
Yu, Z.Y., Zheng, X.P., Huang, F.W., Guo, W.Z., Lin, S., Yu, Z.W.: A framework based on sparse representation model for time series prediction in smart city. Front. Comput. Sci. 15, 1–13 (2021)
Zhang, Z., Xu, Y., Yang, J., Li, X.L., Zhang, D.: A survey of sparse representation: algorithms and applications. IEEE Access. 3, 490–530 (2015)
Bai, T., Li, Y.F.: Robust visual tracking with structured sparse representation appearance model. Pattern Recognit. 45, 2390–2404 (2012)
Zhang, L., Zhang, L., Du, B.: Deep learning for remote sensing data: a technical tutorial on the state of the art. IEEE Trans. Geosci. Remote Sens. 4, 22–40 (2016)
Jia, X.X., Feng, X.C., Wang, W.W.: Rank constrained nuclear norm minimization with application to image denoising. Signal Process. 129, 1–11 (2016)
Liu, Y.Y., Zhao, X.L., Zheng, Y.B., Ma, T.H., Zhang, H.: Hyperspectral image restoration by tensor fibered rank constrained optimization and plug-and-play regularization. IEEE Trans. Geosci. Remote Sens. 60, 1–17 (2021)
Hui, K.F., Shen, X.J., Abhadiomhen, S.E., Zhan, Y.Z.: Robust low-rank representation via residual projection for image classification. Knowl Based Syst. 241, 108230 (2022)
Kondo, Y., Kubo, Y., Takamune, N., Kitamura, D., Saruwatari, H.: Deficient-basis-complementary rank-constrained spatial covariance matrix estimation based on multivariate generalized Gaussian distribution for blind speech extraction. EURASIP J. Adv. Signal Process. 1, 88–112 (2022)
Min, G., Zhang, X. W., Yang, J. B., Han, W., Zou, X.: A perceptually motivated approach via sparse and low-rank model for speech enhancement. 2016 IEEE International Conference on Multimedia and Expo (ICME), pp. 1-6 (2016)
Abood, E.W., Hussien, Z.A., Kawi, H.A., Abduljabbar, Z.A., Nyangaresi, V.O., Kalafy, S.A.: Provably secure and efficient audio compression based on compressive sensing. Int. J. Electr. Comput. Eng. (IJECE). 13, 335–346 (2023)
Asari, H., Pearlmutter, B.A., Zador, A.M.: Sparse representations for the cocktail party problem. J. Neurosci. 26, 7477–7490 (2006)
Han, S.G., Wang, N., Guo, Y.X., Tang, F.R., Xu, L., Ju, Y., Shi, L.: Application of sparse representation in bioinformatics. Front Genet. 12, 810875 (2021)
Pique-Regi, R., Monso-Varona, J., Ortega, A., Seeger, R.C., Triche, T.J., Asgharzadeh, S.: Sparse representation and Bayesian detection of genome copy number alterations from microarray data. Bioinformatics. 24, 309–318 (2008)
Hu, Y., Zhang, D.B., Ye, J.P., Li, X.L., He, X.F.: Fast and accurate matrix completion via truncated nuclear norm regularization. IEEE Trans. Pattern Anal. Mach. Intell. 35, 2117–2130 (2012)
Xie, Y., Gu, S.H., Liu, Y., Zuo, W.M., Zhang, W.S., Zhang, L.: Weighted Schatten p-norm minimization for image denoising and background subtraction. IEEE Trans. Image Process. 25, 4842–4857 (2016)
Gu, S.H., Xie, Q., Meng, D.Y., Zuo, W.M., Feng, X.C., Zhang, L.: Weighted nuclear norm minimization and its applications to low level vision. Int. J. Comput Vision. 121, 183–208 (2017)
Kang, Z., Peng, C., Cheng, J., Cheng, Q.: Logdet rank minimization with application to subspace clustering. Comput. Intel. Neurosc. 1, 1–10 (2015)
Shang, F., Cheng, J., Liu, Y., Luo, Z.Q., Lin, Z.: Bilinear factor matrix norm minimization for robust PCA: algorithms and applications. IEEE Trans. Pattern Anal. Mach. Intell. 40, 2066–2080 (2017)
Ke, Y.F., Ma, C.F., Jia, Z.G., Xie, Y.J., Liao, R.W.: Quasi non-negative quaternion matrix factorization with application to color face recognition. J. Sci. Comput. 95, 38 (2023)
Jia, Z.G., Ma, R.R., Zhao, M.X.: A new structure-preserving method for recognition of color face images. Computer Science and Artificial Intelligence: Proceedings of the International Conference on Computer Science and Artificial Intelligence (CSAI2016), pp. 427-432 (2018)
Liu, D.J., Pu, G.L., Wu, X.Y.: Quaternion-based improved cuckoo algorithm for colour UAV image edge detection. IET Image Process. 16, 926–935 (2022)
Jia, Z.G., Ng, M.K., Wang, W.: Color image restoration by saturation-value total variation. SIAM J. Imaging Sci. 12, 972–1000 (2019)
Xu, X., Zhang, Z., Crabbe, M.J.C.: Quaternion quasi-Chebyshev non-local means for color image denoising. Chinese J. Electron. 32, 1–18 (2023)
Miao, J.F., Kou, K.I.: Color image recovery using low-rank quaternion matrix completion algorithm. IEEE Trans. Image Process. 31, 190–201 (2021)
Miao, J.F., Kou, K.I., Cheng, D., Liu, W.K.: Quaternion higher-order singular value decomposition and its applications in color image processing. Inf. Fusion. 92, 139–153 (2023)
Miao, J.F., Kou, K.I., Yang, L.Q., Han, J.: Quaternion matrix completion using untrained quaternion convolutional neural network for color image inpainting. Signal Process. 221, 109504 (2024)
Miao, J.F., Kou, K.I.: Quaternion-based bilinear factor matrix norm minimization for color image inpainting. IEEE Trans. Signal Process. 68, 5617–5631 (2020)
Fazel, M., Hindi, H., Boyd, S. P.: Log-det heuristic for matrix rank minimization with applications to Hankel and Euclidean distance matrices. Proc. of the 2003 American Control Conf. IEEE. 3, 2156-2162 (2003)
Chen, L., Jiang, X., Liu, X.Z., Zhou, Z.X.: Logarithmic norm regularized low-rank factorization for matrix and tensor completion. IEEE Trans. Image Process. 30, 3434–3449 (2021)
Zhang, F.Z.: Quaternions and matrices of quaternions. Linear Algebra Appl. 251, 21–57 (1997)
Wei, M.S., Li, Y., Zhang, F.X., Zhao, J.L.: Quaternion Matrix Computations. Nova Science Publishers (2018)
Chen, Y.Y., Xiao, X.L., Zhou, Y.C.: Low-rank quaternion approximation for color image processing. IEEE Trans. Image Process. 29, 1426–1439 (2019)
Yang, L.Q., Miao, J.F., Kou, K.I.: Quaternion-based color image completion via logarithmic approximation. Inf. Sci. 588, 82–105 (2022)
Tang, K.W., Liu, R.S., Su, Z.X., Zhang, J.: Structure-constrained low-rank representation. IEEE Trans. Neural Networks Learn. Syst. 25, 2167–2179 (2014)
Xu, D., Mandic, D.P.: The theory of quaternion matrix derivatives. IEEE Trans. Signal Process. 63, 1543–1556 (2015)
Jia, Z.G., Ng, M.K., Song, G.J.: Lanczos method for large-scale quaternion singular value decomposition. Numer Algorithms. 82, 699–717 (2019)
Yu, Y.B., Zhang, Y.L., Yuan, S.F.: Quaternion-based weighted nuclear norm minimization for color image denoising. Neurocomputing. 332, 283–297 (2019)
Jia, Z.G.: Structure-preserving quaternion toolbox. http://maths.jsnu.edu.cn/_t1395/5134/main.htm
Acknowledgements
The authors are grateful to the Editor-in-Chief, Associate Editor, and Reviewers for their valuable comments and insightful suggestions that helped to improve this research significantly.
Funding
This research is supported by the National Natural Science Foundation of China (Nos. 62105064 and 12371378) and the Natural Science Foundation of Fujian Province, China (Nos. 2023J011127 and 2023J01955).
Contributions
All authors contributed to the study conception and design. All authors performed material preparation, data collection, and analysis. The authors read and approved the final manuscript.
Ethics declarations
Conflict of Interest
The authors declare that they have no conflict of interest.
Ethical Approval
This manuscript does not contain any studies with human participants or animals performed by any of the authors.
Competing Interests
The authors declare no competing interests.
Cite this article
Cai, XM., Ke, YF., Ma, CF. et al. Logarithmic norm minimization of quaternion matrix decomposition for color image sparse representation. Numer Algor (2024). https://doi.org/10.1007/s11075-024-01887-9