Abstract
This paper focuses on sufficient conditions for block sparse recovery via \(l_{2}/l_{1}\)-minimization. We show that if the measurement matrix satisfies the block restricted isometry property with \(\delta_{2s|\mathcal{I}}< 0.6246\), then every block s-sparse signal can be exactly recovered via the \(l_{2}/l_{1}\)-minimization approach in the noiseless case and stably recovered in the noisy measurement case. The result improves the bound on the block restricted isometry constant \(\delta_{2s|\mathcal{I}}\) of Lin and Li (Acta Math. Sin. Engl. Ser. 29(7):1401-1412, 2013).
1 Introduction
Compressed sensing [2–4] is a scheme which shows that some signals can be reconstructed from far fewer measurements than the classical Nyquist-Shannon sampling method requires. This effective sampling method has a number of potential applications in signal processing, as well as other areas of science and technology. Its essential model is
where \(\|x\|_{0}\) denotes the number of non-zero entries of the vector x; an s-sparse vector \(x \in\mathbb{R}^{N}\) is one with \(\|x\|_{0}\leq s \ll N\). However, the \(l_{0}\)-minimization (1) is a nonconvex, NP-hard optimization problem [5] and thus computationally infeasible. To overcome this problem, the \(l_{1}\)-minimization was proposed [4, 6–9],
where \(\|x\|_{1}=\sum_{i=1}^{N}|x_{i}|\). Candès [10] proved that the solutions to (2) coincide with those of (1) provided that the measurement matrix satisfies the restricted isometry property (RIP) [9, 11] with a suitable restricted isometry constant (RIC) \(\delta_{s} \in(0,1)\); here \(\delta_{s}\) is defined as the smallest constant satisfying
for any s-sparse vector \(x \in\mathbb{R}^{N}\).
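For reference, in the standard compressed sensing setting (a measurement matrix \(A\in\mathbb{R}^{n\times N}\) with \(n\ll N\) and measurements \(y=Ax\in\mathbb{R}^{n}\)), the \(l_{0}\)-program (1), its convex relaxation (2), and the RIP inequality (3) take the usual forms
\[\min_{x\in\mathbb{R}^{N}}\|x\|_{0}\quad\text{subject to}\quad Ax=y,\qquad \min_{x\in\mathbb{R}^{N}}\|x\|_{1}\quad\text{subject to}\quad Ax=y,\]
\[(1-\delta_{s})\|x\|_{2}^{2}\leq\|Ax\|_{2}^{2}\leq(1+\delta_{s})\|x\|_{2}^{2}.\]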
However, standard compressed sensing only considers the sparsity of the recovered signal and does not take into account any further structure. In many practical applications, for example, DNA microarrays [12], face recognition [13], color imaging [14], image annotation [15], multi-response linear regression [16], etc., the non-zero entries of a sparse signal can be aligned or classified into blocks, meaning that they appear in clusters rather than being spread arbitrarily throughout the vector. Such signals are called block sparse signals and have attracted considerable interest; see [17–23] for more information.
Suppose that \(x\in\mathbb{R}^{N}\) is split into m blocks, \(x[1],x[2],\ldots,x[m]\), which are of length \(d_{1}, d_{2},\ldots,d_{m}\), respectively, that is,
\[x=(\underbrace{x_{1},\ldots,x_{d_{1}}}_{x[1]},\underbrace{x_{d_{1}+1},\ldots,x_{d_{1}+d_{2}}}_{x[2]},\ldots,\underbrace{x_{N-d_{m}+1},\ldots,x_{N}}_{x[m]})^{T},\]
and \(N=\sum_{i=1}^{m}d_{i}\). A vector \(x\in\mathbb{R}^{N}\) is called block s-sparse over \(\mathcal{I}=\{d_{1},d_{2},\ldots,d_{m}\}\) if \(x[i]\) is non-zero for at most s indices i [18]. Obviously, if \(d_{i}=1\) for each i, block sparsity reduces to the conventional definition of a sparse vector. Let
\[\|x\|_{2,0}=\sum_{i=1}^{m}I\bigl(\|x[i]\|_{2}\bigr),\]
where \(I(x)\) is an indicator function that equals 1 if \(x>0\) and 0 otherwise. So a block s-sparse vector x can be defined by \(\|x\|_{2,0}\leq s\), and \(\|x\|_{0}=\sum_{i=1}^{m}\|x[i]\|_{0}\). Also, let \(\Sigma_{s}\) denote the set of all block s-sparse vectors: \(\Sigma_{s}=\{x\in\mathbb{R}^{N}:\|x\|_{2,0}\leq s \}\).
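As a small illustration (not taken from the paper; the block sizes and entries are chosen arbitrarily), let \(m=3\), \(d_{1}=d_{2}=d_{3}=2\), so \(N=6\), and
\[x=(\underbrace{0,0}_{x[1]},\underbrace{1,-1}_{x[2]},\underbrace{0,0}_{x[3]})^{T}.\]
Then \(\|x\|_{0}=2\) while \(\|x\|_{2,0}=I(0)+I(\sqrt{2})+I(0)=1\), so x is block 1-sparse over \(\mathcal{I}=\{2,2,2\}\) although it is not 1-sparse in the usual sense.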
To recover a block sparse signal, similar to the standard \(l_{0}\)-minimization, one seeks the sparsest block sparse vector via the following \(l_{2}/l_{0}\)-minimization [13, 17, 18]:
But the \(l_{2}/l_{0}\)-minimization problem is also NP-hard. It is natural to use the \(l_{2}/l_{1}\)-minimization to replace the \(l_{2}/l_{0}\)-minimization [13, 17, 18, 24].
where \(\|x\|_{2,1}=\sum_{i=1}^{m}\|x[i]\|_{2}\).
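Written out in the standard form of [13, 17, 18] (assumed here), the \(l_{2}/l_{0}\)- and \(l_{2}/l_{1}\)-programs read
\[\min_{x\in\mathbb{R}^{N}}\|x\|_{2,0}\quad\text{subject to}\quad Ax=y,\qquad\qquad \min_{x\in\mathbb{R}^{N}}\|x\|_{2,1}\quad\text{subject to}\quad Ax=y,\]
the latter being the convex program (6) used throughout the paper.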
To characterize the performance of this method, Eldar and Mishali [18] proposed the block restricted isometry property (block RIP).
Definition 1
Block RIP
Given a matrix \(A \in\mathbb{R}^{n\times N}\), if there exists a positive constant \(0<\delta_{s|\mathcal{I}}<1\) such that
\[(1-\delta_{s|\mathcal{I}})\|x\|_{2}^{2}\leq\|Ax\|_{2}^{2}\leq(1+\delta_{s|\mathcal{I}})\|x\|_{2}^{2}\]
for every block s-sparse \(x\in\mathbb{R}^{N}\) over \(\mathcal{I}=\{d_{1},d_{2},\ldots,d_{m}\}\), then the matrix A satisfies the s-order block RIP over \(\mathcal{I}\), and the smallest constant \(\delta_{s|\mathcal{I}}\) satisfying the above inequality (8) is called the block RIC of A.
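Since \(\delta_{s|\mathcal{I}}\) is determined by the extreme eigenvalues of \(A_{T}^{T}A_{T}\) over all block supports T of size s, it can be computed by brute force for small problem sizes. The following sketch is purely illustrative and not part of the paper; equal block lengths and the Gaussian test matrix are assumptions made for simplicity.

```python
from itertools import combinations
import numpy as np

def block_ric(A, d, s):
    """Exact s-order block RIC of A, assuming m equal-length blocks of size d.

    For each block support T of s blocks, the tightest constant on that support
    equals max(|lambda_min - 1|, |lambda_max - 1|) for the eigenvalues of
    A_T^T A_T; the block RIC is the maximum of these values over all supports.
    """
    n, N = A.shape
    m = N // d
    delta = 0.0
    for T in combinations(range(m), s):
        cols = np.concatenate([np.arange(i * d, (i + 1) * d) for i in T])
        eig = np.linalg.eigvalsh(A[:, cols].T @ A[:, cols])  # ascending order
        delta = max(delta, abs(eig[0] - 1.0), abs(eig[-1] - 1.0))
    return delta

# Illustrative check: a Gaussian matrix scaled so that E||Ax||_2^2 = ||x||_2^2.
rng = np.random.default_rng(0)
n, m, d, s = 64, 16, 4, 2
A = rng.standard_normal((n, m * d)) / np.sqrt(n)
print("block RIC of order", s, ":", block_ric(A, d, s))
```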
Obviously, the block RIP is an extension of the standard RIP, but it is a less stringent requirement than the standard RIP [18, 25]. Eldar and Mishali [18] proved that the \(l_{2}/l_{1}\)-minimization can exactly recover any block s-sparse signal when the measurement matrix A satisfies the block RIP with \(\delta_{2s|\mathcal{I}}<0.414\). This bound on the block RIC can be improved: for example, Lin and Li [1] relaxed it to \(\delta_{2s|\mathcal{I}}<0.4931\) and established another sufficient condition, \(\delta_{s|\mathcal{I}}<0.307\), for exact recovery. So far, to the best of our knowledge, no paper has further improved the bound on the block RIC. As mentioned in [1, 26, 27], as with the RIC, there are several benefits to improving the bound on \(\delta_{2s|\mathcal{I}}\). First, it allows more measurement matrices to be used in compressed sensing. Second, for the same matrix A, it allows recovery of block sparse signals with more non-zero entries. Furthermore, it yields better error estimates in the general problem of recovering compressible signals from noisy measurements. Therefore, this paper addresses the improvement of the block RIC. We consider the following minimization problem for the inaccurate measurement \(y=Ax+e\) with \(\|e\|_{2} \leq\epsilon\):
\[\min_{z\in\mathbb{R}^{N}}\|z\|_{2,1}\quad\text{subject to}\quad\|Az-y\|_{2}\leq\epsilon.\]
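For readers who want to experiment with (9), here is a minimal numerical sketch using the cvxpy package; it is not part of the paper, and the dimensions, block sizes, and Gaussian measurement matrix are arbitrary illustrative choices.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
m, d, s = 10, 4, 2          # 10 blocks of length 4, block 2-sparse signal
N, n = m * d, 25            # ambient dimension and number of measurements

# Gaussian measurement matrix and a block s-sparse ground truth.
A = rng.standard_normal((n, N)) / np.sqrt(n)
x_true = np.zeros(N)
for i in rng.choice(m, size=s, replace=False):
    x_true[i * d:(i + 1) * d] = rng.standard_normal(d)

# Noise scaled to have l2-norm exactly epsilon, so x_true is feasible for (9).
eps = 1e-3
e = rng.standard_normal(n)
e *= eps / np.linalg.norm(e)
y = A @ x_true + e

# Program (9): minimize sum_i ||z[i]||_2 subject to ||Az - y||_2 <= eps.
z = cp.Variable(N)
mixed_norm = sum(cp.norm(z[i * d:(i + 1) * d], 2) for i in range(m))
prob = cp.Problem(cp.Minimize(mixed_norm), [cp.norm(A @ z - y, 2) <= eps])
prob.solve()

print(prob.status, "recovery error:", np.linalg.norm(z.value - x_true))
```

Setting \(\epsilon=0\) and \(e=0\) in the same sketch illustrates the exact recovery statement of Corollary 1 below.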
Our main result is stated in the following theorem.
Theorem 1
Suppose that the 2s block RIC of the matrix \(A\in\mathbb{R}^{n\times N}\) satisfies
If \(x^{\ast}\) is a solution to (9), then there exist positive constants \(C_{1}\), \(D_{1}\) and \(C_{2}\), \(D_{2}\) such that
where the constants \(C_{1}\), \(D_{1}\) and \(C_{2}\), \(D_{2}\) depend only on \(\delta _{2s|\mathcal{I}}\), written as
and \(\sigma_{s}(x)_{2,1}\) denotes the best block s-term approximation error of \(x \in\mathbb{R}^{N}\) in the \(l_{2}/l_{1}\) norm, i.e., \(\sigma_{s}(x)_{2,1}=\min_{z\in\Sigma_{s}}\|x-z\|_{2,1}\).
Corollary 1
Under the same assumptions as in Theorem 1, if \(e=0\) and x is block s-sparse, then x can be exactly recovered via the \(l_{2}/l_{1}\)-minimization (6).
The remainder of the paper is organized as follows. In Section 2, we introduce the \(l_{2,1}\) robust block NSP, which characterizes the stability and robustness of the \(l_{2}/l_{1}\)-minimization with noisy measurements (9). In Section 3, we show that the condition (10) implies the \(l_{2,1}\) robust block NSP, which completes the proof of our main result. Section 4 contains our conclusions. The last section is an appendix containing an important lemma.
2 Block null space property
The null space property (NSP) is an important concept in approximation theory [28, 29]: it provides a necessary and sufficient condition for the existence and uniqueness of the solution to the \(l_{1}\)-minimization (2), and it has therefore drawn extensive attention in the study of measurement matrices in compressed sensing [30]. It is natural to extend the classic NSP to the block sparse case. For this purpose, we introduce some notation. Suppose that \(x\in\mathbb{R}^{N}\) is an m-block signal whose structure is like (4). We take \(S\subset\{1,2,\ldots,m\}\) and write \(S^{C}\) for the complement of S with respect to \(\{1,2,\ldots,m\}\), i.e., \(S^{C}=\{1,2,\ldots,m\}\setminus S\). Let \(x_{S}\) denote the vector equal to x on the block index set S and zero elsewhere, so that \(x=x_{S} + x_{S^{C}}\). To investigate the solution to the model (9), we introduce the \(l_{2,1}\) robust block NSP; for other forms of the block NSP, we refer the reader to [23, 31].
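For instance (an illustration, not from the paper), if \(m=3\) and \(S=\{2\}\), then
\[x_{S}=(\mathbf{0}_{d_{1}},\,x[2],\,\mathbf{0}_{d_{3}})^{T},\qquad x_{S^{C}}=(x[1],\,\mathbf{0}_{d_{2}},\,x[3])^{T},\]
so that indeed \(x=x_{S}+x_{S^{C}}\).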
Definition 2
\(l_{2,1}\) robust block NSP
Given a matrix \(A \in\mathbb{R}^{n\times N}\), if there exist constants \(0< \tau< 1\) and \(\gamma>0\) such that, for any set \(S\subset\{1,2,\ldots,m\}\) with \(\operatorname{card}(S)\leq s\) and all \(v \in\mathbb{R}^{N}\),
then the matrix A is said to satisfy the \(l_{2,1}\) robust block NSP of order s with τ and γ.
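By analogy with the robust null space property in the scalar setting [30], and consistently with the way the factor \(\sqrt{s}\) is used in the proof of Theorem 2 below, the defining inequality can be assumed to take the form
\[\|v_{S}\|_{2}\leq\frac{\tau}{\sqrt{s}}\|v_{S^{C}}\|_{2,1}+\gamma\|Av\|_{2};\]
this reconstruction is stated here only as an assumption, the precise normalization being the one fixed in the original display.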
Our main result relies heavily on this definition. A natural question is what the relationship is between this robust block NSP and the block RIP. Indeed, in the next section we shall see that the block RIP with condition (10) leads to the \(l_{2,1}\) robust block NSP; that is, the \(l_{2,1}\) robust block NSP is, to some extent, weaker than the block RIP. A first consequence of this definition is the following theorem.
Theorem 2
Suppose that the matrix \(A \in\mathbb{R}^{n \times N}\) satisfies the \(l_{2,1}\) robust block NSP of order s with constants \(0< \tau< 1\) and \(\gamma>0\). Then, for any set \(S\subset\{1,2,\ldots,m\}\) with \(\operatorname{card}(S)\leq s\) and all vectors \(x,z \in\mathbb{R}^{N}\),
Proof
For \(x,z \in\mathbb{R}^{N}\), setting \(v=x-z\), we have
which yield
Clearly, for an m-block vector \(x\in\mathbb{R}^{N}\) of the form (4), the \(l_{2}\)-norm \(\|x\|_{2}\) can be rewritten as
\[\|x\|_{2}=\Biggl(\sum_{i=1}^{m}\|x[i]\|_{2}^{2}\Biggr)^{1/2}=\|x\|_{2,2}.\]
Thus, we have \(\|v_{S}\|_{2,2}=\|v_{S}\|_{2}\) and \(\|v_{S}\|_{2,1}\leq\sqrt {s}\|v_{S}\|_{2,2}\). So the \(l_{2,1}\) robust block NSP implies
Combining (20) with (22), we can get
Using (22) once again, we derive
which is the desired inequality. □
The \(l_{2,1}\) robust block NSP characterizes the stability and robustness of the \(l_{2}/l_{1}\)-minimization with noisy measurements (9), as the following result shows.
Theorem 3
Suppose that the matrix \(A\in\mathbb{R}^{n\times N}\) satisfies the \(l_{2,1}\) robust block NSP of order s with constants \(0< \tau<1\) and \(\gamma>0\). If \(x^{\ast}\) is a solution to the \(l_{2}/l_{1}\)-minimization with \(y=Ax+e\) and \(\|e\|_{2}\leq\epsilon\), then there exist positive constants \(C_{3}\), \(D_{3}\) and \(C_{4}\), \(D_{4}\) such that
where
Proof
In Theorem 2, let S denote an index set of the s largest (in \(l_{2}\)-norm) blocks out of the m blocks of x; then (23) is a direct corollary of Theorem 2 once we notice that \(\|x_{S^{C}}\|_{2,1}=\sigma_{s}(x)_{2,1}\) and \(\|A(x-x^{\ast})\|_{2} \leq 2\epsilon\). Inequality (24) is a result of Theorem 7 for \(q=1\) in [23]. □
3 Proof of the main result
From Theorem 3, we see that the inequalities (23) and (24) coincide with (11) and (12), respectively, up to constants. Hence, to prove our main result, it suffices to show that the condition (10) implies the \(l_{2,1}\) robust block NSP.
Theorem 4
Suppose that the 2s block RIC of the matrix \(A\in\mathbb{R}^{n\times N}\) obeys (10). Then the matrix A satisfies the \(l_{2,1}\) robust block NSP of order s with constants \(0< \tau<1\) and \(\gamma>0\), where
Proof
The proof relies on a technique introduced in [30]. Suppose that the matrix A has the block RIP with \(\delta_{2s|\mathcal{I}}\). Let v be divided into m blocks whose structure is like (38). Let \(S=S_{0}\) be an index set of the s largest (in \(l_{2}\)-norm) blocks of v. We begin by partitioning \(S^{C}\) into subsets of size s: \(S_{1}\) indexes the s largest (in \(l_{2}\)-norm) blocks of v in \(S^{C}\), \(S_{2}\) indexes the next s largest, and so on. Since the vector \(v_{S}\) is block s-sparse, according to the block RIP, for \(|t|\leq\delta_{2s|\mathcal{I}}\), we can write
We are going to establish that, for any \(j\geq1\),
To do so, we normalize the vectors \(v_{S}\) and \(v_{S_{j}}\) by setting \(u:=v_{S}/\|v_{S}\|_{2}\) and \(w:=v_{S_{j}}/\|v_{S_{j}}\|_{2}\). Then, for \(\alpha, \beta>0\), we write
By the block RIP, on the one hand, we have
Making the choice \(\alpha=\frac{(\delta_{2s|\mathcal{I}}+t)}{\sqrt {\delta_{2s|\mathcal{I}}^{2}-t^{2}}}\), \(\beta=\frac{(\delta_{2s|\mathcal{I}}-t)}{\sqrt{\delta_{2s|\mathcal {I}}^{2}-t^{2}}}\), we derive
On the other hand, we also have
Making the choice \(\alpha=\frac{(\delta_{2s|\mathcal{I}}-t)}{\sqrt {\delta_{2s|\mathcal{I}}^{2}-t^{2}}}\), \(\beta=\frac{(\delta_{2s|\mathcal{I}}+t)}{\sqrt{\delta_{2s|\mathcal {I}}^{2}-t^{2}}}\), we get
Combining (31) with (32) yields the desired inequality (29). Next, noticing that \(Av_{S}=A(v-\sum_{j\geq1}v_{S_{j}})=Av-\sum_{j\geq1}Av_{S_{j}}\), we have
According to Lemma A.1 and the setting of \(S_{j}\), we have
Substituting (34) into (33) and noticing (28), we also have
that is,
Let
then it is not difficult to check that, on the closed interval \([-\delta_{2s|\mathcal{I}}, \delta_{2s|\mathcal{I}}]\), \(f(t)\) attains its maximum at \(t=-\delta_{2s|\mathcal{I}}^{2}\), so for \(|t|\leq\delta_{2s|\mathcal{I}}\) we have
Therefore,
that is,
Here, we require
which implies \(\delta_{2s|\mathcal{I}}^{2}<\frac{16}{41}\), that is, \(\delta_{2s|\mathcal{I}}<\frac{4}{\sqrt{41}}\approx0.6246\). □
Remark 1
Substituting (27) into (25) and (26), we can obtain the constants in Theorem 1.
Remark 2
Our result improves that of [1]; that is, the bound on the block RIC \(\delta_{2s|\mathcal{I}}\) is improved from 0.4931 to 0.6246.
4 Conclusions
In this paper, we gave a new bound on the block RIC, \(\delta_{2s|\mathcal{I}}<0.6246\). Under this bound, every block s-sparse signal can be exactly recovered via the \(l_{2}/l_{1}\)-minimization approach in the noiseless case and stably recovered in the noisy measurement case. The result improves the bound on the block RIC \(\delta_{2s|\mathcal{I}}\) in [1].
References
Lin, J, Li, S: Block sparse recovery via mixed \(l_{2}/l_{1}\) minimization. Acta Math. Sin. Engl. Ser. 29(7), 1401-1412 (2013)
Donoho, D: Compressed sensing. IEEE Trans. Inf. Theory 52(4), 1289-1306 (2006)
Candès, E, Romberg, J, Tao, T: Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 52(2), 489-509 (2006)
Candès, E, Tao, T: Near-optimal signal recovery from random projections: universal encoding strategies. IEEE Trans. Inf. Theory 52(12), 5406-5425 (2006)
Natarajan, BK: Sparse approximate solutions to linear systems. SIAM J. Comput. 24, 227-234 (1995)
Donoho, D, Huo, X: Uncertainty principles and ideal atomic decompositions. IEEE Trans. Inf. Theory 47(4), 2845-2862 (2001)
Elad, M, Bruckstein, A: A generalized uncertainty principle and sparse representation in pairs of bases. IEEE Trans. Inf. Theory 48(9), 2558-2567 (2002)
Gribonval, R, Nielsen, M: Sparse representations in unions of bases. IEEE Trans. Inf. Theory 49(5), 3320-3325 (2003)
Candès, E, Tao, T: Decoding by linear programming. IEEE Trans. Inf. Theory 51(12), 4203-4215 (2005)
Candès, E: The restricted isometry property and its implications for compressed sensing. C. R. Math. Acad. Sci. Paris, Sér. I 346, 589-592 (2008)
Baraniuk, R, Davenport, M, DeVore, R, Wakin, M: A simple proof of the restricted isometry property for random matrices. Constr. Approx. 28, 253-263 (2008)
Parvaresh, F, Vikalo, H, Misra, S, Hassibi, B: Recovering sparse signals using sparse measurement matrices in compressed DNA microarrays. IEEE J. Sel. Top. Signal Process. 2(3), 275-285 (2008)
Elhamifar, E, Vidal, R: Block-sparse recovery via convex optimization. IEEE Trans. Signal Process. 60(8), 4094-4107 (2012)
Majumdar, A, Ward, R: Compressed sensing of color images. Signal Process. 90, 3122-3127 (2010)
Huang, J, Huang, X, Metaxas, D: Learning with dynamic group sparsity. In: IEEE 12th International Conference on Computer Vision, pp. 64-71 (2009)
Simila, T, Tikka, J: Input selection and shrinkage in multiresponse linear regression. Comput. Stat. Data Anal. 52, 406-422 (2007)
Eldar, Y, Kuppinger, P, Bolcskei, H: Block-sparse signals: uncertainty relations and efficient recovery. IEEE Trans. Signal Process. 58(6), 3042-3054 (2010)
Eldar, Y, Mishali, M: Robust recovery of signals from a structured union of subspaces. IEEE Trans. Inf. Theory 55(11), 5302-5316 (2009)
Fu, Y, Li, H, Zhang, Q, Zou, J: Block-sparse recovery via redundant block OMP. Signal Process. 97, 162-171 (2014)
Afdideh, F, Phlypo, R, Jutten, C: Recovery guarantees for mixed norm \(l_{p1,p2}\) block sparse representations. In: 24th European Signal Processing Conference (EUSIPCO), pp. 378-382 (2016)
Wen, J, Zhou, Z, Liu, Z, Lai, M-J, Tang, X: Sharp sufficient conditions for stable recovery of block sparse signals by block orthogonal matching pursuit (2016). arXiv:1605.02894v1
Karanam, S, Li, Y, Radke, RJ: Person re-identification with block sparse recovery. Image Vis. Comput. (2017). doi:10.1016/j.imavis.2016.11.015
Gao, Y, Peng, J, Yue, S: Stability and robustness of the \(l_{2}/l_{q}\)-minimization for block sparse recovery. Signal Process. 137, 287-297 (2017)
Stojnic, M, Parvaresh, F, Hassibi, B: On the reconstruction of block-sparse signals with an optimal number of measurements. IEEE Trans. Signal Process. 57(8), 3075-3085 (2009)
Baraniuk, R, Cevher, V, Duarte, M, Hegde, C: Model-based compressive sensing. IEEE Trans. Inf. Theory 56(4), 1982-2001 (2010)
Mo, Q, Li, S: New bounds on the restricted isometry constant \(\delta_{2k}\). Appl. Comput. Harmon. Anal. 31, 460-468 (2011)
Lin, J, Li, S, Shen, Y: New bounds for restricted isometry constants with coherent tight frames. IEEE Trans. Signal Process. 61(3), 611-621 (2013)
Cohen, A, Dahmen, W, DeVore, R: Compressed sensing and best k-term approximation. J. Am. Math. Soc. 22(1), 211-231 (2009)
Pinkus, A: On \(L_{1}\)-Approximation. Cambridge University Press, Cambridge (1989)
Foucart, S, Rauhut, H: A Mathematical Introduction to Compressive Sensing. Springer, New York (2013)
Gao, Y, Peng, J, Yue, S, Zhao, Y: On the null space property of \(l_{q}\)-minimization for \(0< q\leq 1\) in compressed sensing. J. Funct. Spaces 2015, Article ID 579853 (2015). doi:10.1155/2015/579853
Acknowledgements
This work was supported by the Science Research Project of Ningxia Higher Education Institutions of China (NGY2015152), National Natural Science Foundation of China (NSFC) (11561055), and Science Research Project of Beifang University of Nationalities (2016SXKY07).
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
The two authors contributed equally to this work, and they read and approved the final manuscript.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix
Lemma A.1
Suppose that \(v\in\mathbb{R}^{N}\) is split into m blocks, \(v[1],v[2],\ldots,v[m]\), which are of length \(d_{1}, d_{2},\ldots,d_{m}\), respectively, that is,
Suppose that the m blocks of v are rearranged in nonincreasing order, so that
Then
Proof
See Lemma 6.14 in [30] for the details. □
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.