Abstract
The restricted isometry property (RIP) is a well-known matrix condition that provides state-of-the-art reconstruction guarantees for compressed sensing. While random matrices are known to satisfy this property with high probability, deterministic constructions have found less success. In this paper, we consider various techniques for demonstrating RIP deterministically, some popular and some novel, and we evaluate their performance. In evaluating some techniques, we apply random matrix theory and inadvertently find a simple alternative proof that certain random matrices are RIP. Later, we propose a particular class of matrices as candidates for being RIP, namely, equiangular tight frames (ETFs). Using the known correspondence between real ETFs and strongly regular graphs, we investigate certain combinatorial implications of a real ETF being RIP. Specifically, we give probabilistic intuition for a new bound on the clique number of Paley graphs of prime order, and we conjecture that the corresponding ETFs are RIP in a manner similar to random matrices.
1 Introduction
Let x be an unknown N-dimensional vector with the property that at most K of its entries are nonzero, that is, x is K-sparse. The goal of compressed sensing is to construct relatively few non-adaptive linear measurements along with a stable and efficient reconstruction algorithm that exploits this sparsity structure. Expressing each measurement as a row of an M×N matrix Φ, we have the following noisy system:
\(y=\varPhi x+z. \qquad(1)\)
In the spirit of compressed sensing, we only want a few measurements: M≪N. Also, in order for there to exist an inversion process for (1), Φ must map K-sparse vectors injectively, or equivalently, every subcollection of 2K columns of Φ must be linearly independent. Unfortunately, the natural reconstruction method in this general case, i.e., finding the sparsest approximation of y from the dictionary of columns of Φ, is known to be NP-hard [22]. Moreover, the independence requirement does not impose any sort of dissimilarity between columns of Φ, meaning distinct identity basis elements could lead to similar measurements, thereby introducing instability into the reconstruction.
To get around the NP-hardness of sparse approximation, we need more structure in the matrix Φ. Instead of considering linear independence of all subcollections of 2K columns, it has become common to impose a much stronger requirement: that every submatrix of 2K columns of Φ be well-conditioned. To be explicit, we have the following definition:
Definition 1
The matrix Φ has the (K,δ)-restricted isometry property (RIP) if
\((1-\delta)\|x\|^{2}\leq\|\varPhi x\|^{2}\leq(1+\delta)\|x\|^{2}\)
for every K-sparse vector x. The smallest δ for which Φ is (K,δ)-RIP is the restricted isometry constant (RIC) \(\delta_{K}\).
In words, matrices which satisfy RIP act as a near-isometry on sufficiently sparse vectors. Note that a (2K,δ)-RIP matrix with δ<1 necessarily has that all subcollections of 2K columns are linearly independent. Also, the well-conditioning requirement of RIP forces dissimilarity in the columns of Φ to provide stability in reconstruction. Most importantly, the additional structure of RIP allows for the possibility of getting around the NP-hardness of sparse approximation. Indeed, a significant result in compressed sensing is that RIP sensing matrices enable efficient reconstruction:
Theorem 2
(Theorem 1.3 in [9])
Suppose an M×N matrix Φ has the (2K,δ)-restricted isometry property for some \(\delta<\sqrt{2}-1\), and assume ∥z∥≤ε. Then for every K-sparse vector \(x\in\mathbb{R}^{N}\), the following reconstruction from (1):
\(\tilde{x}:=\arg\min\|\hat{x}\|_{1}\quad\text{subject to}\quad\|\varPhi\hat{x}-y\|\leq\varepsilon\)
satisfies \(\|\tilde{x}-x\|\leq C\varepsilon\), where C only depends on δ.
The fact that RIP sensing matrices convert an NP-hard reconstruction problem into an \(\ell_{1}\)-minimization problem has prompted many in the community to construct RIP matrices. Among these constructions, the most successful have been random matrices, such as matrices with independent Gaussian or Bernoulli entries [5], or matrices whose rows were randomly selected from the discrete Fourier transform matrix [26]. With high probability, these random constructions support sparsity levels K on the order of \(\smash{\frac{M}{\log^{\alpha}N}}\) for some α≥1. Intuitively, this level of sparsity is near-optimal because K cannot exceed \(\smash{\frac{M}{2}}\) by the linear independence condition. Unfortunately, it is difficult to check whether a particular instance of a random matrix is (K,δ)-RIP, as this involves the calculation of singular values for all \(\smash{\binom{N}{K}}\) submatrices of K columns of the matrix. In particular, it was recently shown that certifying that a matrix satisfies RIP is NP-hard [4]. For this reason, and for the sake of reliable sensing standards, many have become interested in finding deterministic RIP matrix constructions.
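For concreteness, the \(\ell_{1}\) reconstruction above can be recast as a linear program. The following sketch (our own illustration, not from the paper; the name `basis_pursuit` is ours) handles the noiseless case ε=0 with SciPy, splitting x into positive and negative parts:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Phi, y):
    """Noiseless l1-minimization: min ||x||_1 subject to Phi x = y.
    Recast as a linear program by writing x = u - v with u, v >= 0,
    so that ||x||_1 = sum(u) + sum(v) at the optimum."""
    M, N = Phi.shape
    c = np.ones(2 * N)                 # objective: sum(u) + sum(v)
    A_eq = np.hstack([Phi, -Phi])      # constraint: Phi (u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * N))
    return res.x[:N] - res.x[N:]
```

On a well-conditioned Gaussian matrix with M comfortably larger than the sparsity level, this typically recovers a sparse vector exactly, in line with Theorem 2.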
In the next section, we review the well-understood techniques that are commonly used to analyze the restricted isometry of deterministic constructions: the Gershgorin circle theorem, and the spark of a matrix. Unfortunately, neither technique demonstrates RIP for sparsity levels as large as what random constructions are known to support; rather, with these techniques, a deterministic M×N matrix Φ can only be shown to have RIP for sparsity levels on the order of \(\sqrt{M}\). This limitation has become known as the “square-root bottleneck,” and it poses an important problem in matrix design [31].
To date, the only deterministic construction that manages to go beyond this bottleneck is given by Bourgain et al. [8]; in Sect. 3, we discuss what they call flat RIP, which is the technique they use to demonstrate RIP. It is important to stress the significance of their contribution: Before [8], it was unclear how deterministic analysis might break the bottleneck, and as such, their result is a major theoretical achievement. On the other hand, their improvement over the square-root bottleneck is notably slight compared to what random matrices provide. However, by our Theorem 14, their technique can actually be used to demonstrate RIP for sparsity levels much larger than \(\sqrt{M}\), meaning one could very well demonstrate random-like performance given the proper construction. Our result applies their technique to random matrices, and it inadvertently serves as a simple alternative proof that certain random matrices are RIP. In Sect. 4, we introduce an alternate technique, which by our Theorem 17, can also demonstrate RIP for large sparsity levels.
After considering the efficacy of these techniques to demonstrate RIP, it remains to find a deterministic construction that is amenable to analysis. To this end, we discuss various properties of a particularly nice matrix which comes from frame theory, called an equiangular tight frame (ETF). Specifically, real ETFs can be characterized in terms of their Gram matrices using strongly regular graphs [34]. By applying the techniques of Sects. 3 and 4 to real ETFs, we derive equivalent combinatorial statements in graph theory. By focusing on the ETFs which correspond to Paley graphs of prime order, we are able to make important statements about their clique numbers and provide some intuition for an open problem in number theory. We conclude by conjecturing that the Paley ETFs are RIP in a manner similar to random matrices.
2 Well-Understood Techniques
2.1 Applying Gershgorin’s Circle Theorem
Take an M×N matrix Φ. For a given K, we wish to find some δ for which Φ is (K,δ)-RIP. To this end, it is useful to consider the following expression for the restricted isometry constant:
\(\delta_{K}=\max_{\mathcal{K}\subseteq\{1,\ldots,N\},\ |\mathcal{K}|=K}\|\varPhi_{\mathcal{K}}^{*}\varPhi_{\mathcal{K}}-\mathrm{I}_{K}\|_{2}. \qquad(2)\)
Here, \(\varPhi_{\mathcal{K}}\) denotes the submatrix consisting of columns of Φ indexed by \(\mathcal{K}\). Note that we are not tasked with actually computing \(\delta_{K}\); rather, we recognize that Φ is (K,δ)-RIP for every \(\delta\geq\delta_{K}\), and so we seek an upper bound on \(\delta_{K}\). The following classical result offers a particularly easy-to-calculate bound on eigenvalues:
Theorem 3
(Gershgorin circle theorem [18])
For each eigenvalue λ of a K×K matrix A, there is an index i∈{1,…,K} such that
\(|\lambda-A[i,i]|\leq\sum_{j\neq i}|A[i,j]|.\)
To use this theorem, take some Φ with unit-norm columns. Note that \(\varPhi_{\mathcal{K}}^{*}\varPhi_{\mathcal{K}}\) is the Gram matrix of the columns indexed by \(\mathcal{K}\), and as such, the diagonal entries are 1, and the off-diagonal entries are inner products between distinct columns of Φ. Let μ denote the worst-case coherence of \(\varPhi=[\varphi_{1}\cdots\varphi_{N}]\):
\(\mu:=\max_{i\neq j}|\langle\varphi_{i},\varphi_{j}\rangle|.\)
Then the size of each off-diagonal entry of \(\varPhi_{\mathcal{K}}^{*}\varPhi_{\mathcal{K}}\) is at most μ, regardless of our choice for \(\mathcal{K}\). Therefore, for every eigenvalue λ of \(\varPhi_{\mathcal{K}}^{*}\varPhi_{\mathcal{K}}-\mathrm{I}_{K}\), the Gershgorin circle theorem gives
\(|\lambda|\leq(K-1)\mu. \qquad(3)\)
Since (3) holds for every eigenvalue λ of \(\varPhi_{\mathcal{K}}^{*}\varPhi_{\mathcal{K}}-\mathrm{I}_{K}\) and every choice of \(\mathcal{K}\subseteq\{1,\ldots,N\}\), we conclude from (2) that \(\delta_{K}\leq(K-1)\mu\), i.e., Φ is (K,(K−1)μ)-RIP. This process of using the Gershgorin circle theorem to demonstrate RIP for deterministic constructions has become standard in the community [3, 15, 17].
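The resulting bound \(\delta_{K}\leq(K-1)\mu\) is easy to compute. A minimal sketch for real matrices with unit-norm columns (the helper name is ours):

```python
import numpy as np

def gershgorin_ric_bound(Phi, K):
    """Gershgorin upper bound delta_K <= (K - 1) * mu for a real matrix
    Phi with unit-norm columns, where mu is the worst-case coherence."""
    G = Phi.T @ Phi                            # Gram matrix of the columns
    mu = np.abs(G - np.eye(G.shape[0])).max()  # largest off-diagonal modulus
    return (K - 1) * mu
```

For instance, appending the column \((e_{1}+e_{2})/\sqrt{2}\) to the 2×2 identity gives coherence \(1/\sqrt{2}\), so the K=2 bound is \(1/\sqrt{2}\).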
Recall that random RIP constructions support sparsity levels K on the order of \(\smash{\frac{M}{\log^{\alpha}N}}\) for some α≥1. To see how well the Gershgorin circle theorem demonstrates RIP, we need to express μ in terms of M and N. To this end, we consider the following result:
Theorem 4
(Welch bound [35])
Every M×N matrix with unit-norm columns has worst-case coherence
\(\mu\geq\sqrt{\frac{N-M}{M(N-1)}}.\)
To use this result, we consider matrices whose worst-case coherence achieves equality in the Welch bound. These are known as equiangular tight frames [30], which can be defined as follows:
Definition 5
A matrix is said to be an equiangular tight frame (ETF) if
(i) the columns have unit norm,
(ii) the rows are orthogonal with equal norm, and
(iii) the inner products between distinct columns are equal in modulus.
To date, there are three general constructions that build several families of ETFs [17, 34, 36]. Since ETFs achieve equality in the Welch bound, we can further analyze what it means for an M×N ETF Φ to be (K,(K−1)μ)-RIP. In particular, since Theorem 2 requires that Φ be (2K,δ)-RIP for \(\delta<\sqrt{2}-1\), it suffices to have \(\smash{\frac{2K}{\sqrt{M}}<\sqrt{2}-1}\), since this implies
\(\delta_{2K}\leq(2K-1)\mu<\frac{2K}{\sqrt{M}}<\sqrt{2}-1. \qquad(4)\)
That is, ETFs form sensing matrices that support sparsity levels K on the order of \(\sqrt{M}\). Most other deterministic constructions have identical bounds on sparsity levels [3, 15, 17, 32]. In fact, since ETFs minimize coherence, they are necessarily optimal constructions in terms of the Gershgorin demonstration of RIP, but the question remains whether they are actually RIP for larger sparsity levels; the Gershgorin demonstration fails to account for cancellations in the sub-Gram matrices \(\varPhi_{\mathcal{K}}^{*}\varPhi_{\mathcal{K}}\), and so this technique is too weak to indicate either possibility.
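Numerically, one can check how close a given unit-norm frame comes to the Welch bound (the helper names here are ours, added for illustration):

```python
import numpy as np

def welch_bound(M, N):
    """Lower bound on the worst-case coherence of any MxN unit-norm frame."""
    return np.sqrt((N - M) / (M * (N - 1)))

def coherence(Phi):
    """Worst-case coherence: largest off-diagonal Gram entry in modulus."""
    G = Phi.T @ Phi
    off_diag = G[~np.eye(G.shape[0], dtype=bool)]
    return np.abs(off_diag).max()
```

Random unit-norm frames sit strictly above the bound; ETFs, by definition, meet it (for M=6, N=16, the bound is exactly 1/3).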
2.2 Spark Considerations
Recall that, in order for an inversion process for (1) to exist, Φ must map K-sparse vectors injectively, or equivalently, every subcollection of 2K columns of Φ must be linearly independent. This linear independence condition can be nicely expressed in more general terms, as the following definition provides:
Definition 6
The spark of a matrix Φ is the size of the smallest linearly dependent subset of columns, i.e.,
\(\mathrm{Spark}(\varPhi)=\min\{\|x\|_{0}:\varPhi x=0,\ x\neq0\}.\)
This definition was introduced by Donoho and Elad [16] to help build a theory of sparse representation that later gave birth to modern compressed sensing. The concept of spark is also found in matroid theory, where it goes by the name girth [1]. The condition that every subcollection of 2K columns of Φ is linearly independent is equivalent to Spark(Φ)>2K. Relating spark to RIP, suppose Φ is (K,δ)-RIP with Spark(Φ)≤K. Then there exists a nonzero K-sparse vector x such that
\((1-\delta)\|x\|^{2}\leq\|\varPhi x\|^{2}=0,\)
and so δ≥1. The reason stems from our necessary linear independence condition: (K,δ)-RIP with δ<1 forces every K columns to be linearly independent, so a spark of at most K, which guarantees a dependent subset of K columns, rules out RIP.
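For small matrices, the spark can be computed by brute force over column subsets (exponential in N; the function name is ours):

```python
import numpy as np
from itertools import combinations

def spark(Phi, tol=1e-10):
    """Size of the smallest linearly dependent set of columns of Phi.
    Returns N + 1 if every subset of columns is independent (only
    possible when N <= M).  Brute force: only for small examples."""
    M, N = Phi.shape
    for k in range(1, N + 1):
        for cols in combinations(range(N), k):
            # k columns are dependent iff their submatrix has rank < k
            if np.linalg.matrix_rank(Phi[:, cols], tol=tol) < k:
                return k
    return N + 1
```

For a 2×3 matrix in general position, any three columns are dependent while any two are not, so the spark is 3; for the identity, no dependent subset exists.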
As an example of using spark to analyze RIP, we now consider a construction that dates back to Seidel [28], and was recently developed further in [17]. Here, a special type of block design is used to build an ETF. Let’s start with a definition:
Definition 7
A (t,k,v)-Steiner system is a v-element set V with a collection of k-element subsets of V, called blocks, with the property that any t-element subset of V is contained in exactly one block. The {0,1}-incidence matrix A of a Steiner system has entries A[i,j], where A[i,j]=1 if the ith block contains the jth element, and otherwise A[i,j]=0.
One example of a Steiner system is a set with all possible two-element blocks. This forms a (2,2,v)-Steiner system because every pair of elements is contained in exactly one block. The following theorem details how to construct ETFs using Steiner systems.
Theorem 8
(Theorem 1 in [17])
Every (2,k,v)-Steiner system can be used to build a \(\smash{\frac{v(v-1)}{k(k-1)}\times v(1+\frac{v-1}{k-1})}\) equiangular tight frame Φ according to the following procedure:
(i) Let A be the \(\frac{v(v-1)}{k(k-1)}\times v\) incidence matrix of a (2,k,v)-Steiner system.
(ii) Let H be a \((1+\frac{v-1}{k-1})\times(1+\frac{v-1}{k-1})\) (possibly complex) Hadamard matrix.
(iii) For each j=1,…,v, let \(\varPhi_{j}\) be a \(\frac{v(v-1)}{k(k-1)}\times(1+\frac{v-1}{k-1})\) matrix obtained from the jth column of A by replacing each of the one-valued entries with a distinct row of H, and every zero-valued entry with a row of zeros.
(iv) Concatenate and rescale the \(\varPhi_{j}\)’s to form \(\varPhi=(\frac{k-1}{v-1})^{\frac{1}{2}}[\varPhi_{1}\cdots\varPhi_{v}]\).
As an example, we build an ETF from a (2,2,4)-Steiner system. In this case, we make use of the corresponding incidence matrix A along with a 4×4 Hadamard matrix H:
\(A=\left[\begin{array}{cccc}+&+&&\\+&&+&\\+&&&+\\&+&+&\\&+&&+\\&&+&+\end{array}\right],\qquad H=\left[\begin{array}{cccc}+&+&+&+\\+&-&+&-\\+&+&-&-\\+&-&-&+\end{array}\right].\)
In both of these matrices, pluses represent 1’s, minuses represent −1’s, and blank spaces represent 0’s. For the matrix A, each row represents a block. Since each block contains two elements, each row of the matrix has two ones. Also, any two elements determine a unique common row, and so any two columns have a single one in common. To form the corresponding 6×16 ETF Φ, we replace the three ones in each column of A with the second, third, and fourth rows of H. Normalizing the columns gives the following 6×16 ETF:
It is easy to verify that Φ satisfies Definition 5. Several infinite families of (2,k,v)-Steiner systems are already known, and Theorem 8 says that each one can be used to build a different ETF. Recall from the previous subsection that Steiner ETFs, being ETFs, are optimal constructions in terms of the Gershgorin demonstration of RIP. We now use the notion of spark to further analyze Steiner ETFs. Specifically, note that the first four columns in (5) are linearly dependent. As such, Spark(Φ)≤4. In general, the spark of a Steiner ETF is \(\smash{\leq\frac{v-1}{k-1}\leq\sqrt{2M}}\) (see Theorem 3 of [17] and discussion thereafter), and so having K on the order of \(\sqrt{M}\) is necessary for a Steiner ETF to be (K,δ)-RIP for some δ<1. This answers the closing question of the previous subsection: in general, ETFs are not RIP for sparsity levels larger than the order of \(\sqrt{M}\). This contrasts with random constructions, which support sparsity levels as large as the order of \(\smash{\frac{M}{\log^{\alpha}N}}\) for some α≥1. That said, are there techniques to demonstrate that certain deterministic matrices are RIP for sparsity levels larger than the order of \(\sqrt{M}\)?
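The (2,2,4) example above is easy to reproduce in code. The following sketch builds the 6×16 Steiner ETF per Theorem 8 (the function name is ours), so that the properties of Definition 5 can be verified directly:

```python
import numpy as np
from itertools import combinations

def steiner_etf_224():
    """Build the 6x16 equiangular tight frame of Theorem 8 from the
    (2,2,4)-Steiner system (all 2-element subsets of a 4-element set)."""
    v, k = 4, 2
    blocks = list(combinations(range(v), k))            # the 6 blocks
    A = np.array([[int(e in b) for e in range(v)] for b in blocks])
    H = np.kron(np.array([[1, 1], [1, -1]]),            # 4x4 real Hadamard
                np.array([[1, 1], [1, -1]]))
    Phi_blocks = []
    for j in range(v):
        rows = np.nonzero(A[:, j])[0]                   # blocks containing j
        Pj = np.zeros((len(blocks), H.shape[1]))
        for t, r in enumerate(rows):
            Pj[r, :] = H[t + 1, :]                      # 2nd-4th rows of H
        Phi_blocks.append(Pj)
    return np.hstack(Phi_blocks) * np.sqrt((k - 1) / (v - 1))
```

One can check that the columns have unit norm, every off-diagonal Gram entry has modulus 1/3 (the Welch bound for M=6, N=16), and the rows satisfy \(\varPhi\varPhi^{*}=\frac{16}{6}\mathrm{I}_{6}\).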
3 Flat Restricted Orthogonality
In [8], Bourgain et al. provided a deterministic construction of M×N RIP matrices that support sparsity levels K on the order of M 1/2+ε for some small value of ε. To date, this is the only known deterministic RIP construction that breaks the so-called “square-root bottleneck.” In this section, we analyze their technique for demonstrating RIP, but first, we provide some historical context. We begin with a definition:
Definition 9
The matrix Φ has (K,θ)-restricted orthogonality (RO) if
\(|\langle\varPhi x,\varPhi y\rangle|\leq\theta\|x\|\|y\|\)
for every pair of K-sparse vectors x,y with disjoint support. The smallest θ for which Φ has (K,θ)-RO is the restricted orthogonality constant (ROC) \(\theta_{K}\).
In the past, restricted orthogonality was studied to produce reconstruction performance guarantees for both \(\ell_{1}\)-minimization and the Dantzig selector [10, 11]. Intuitively, restricted orthogonality is important to compressed sensing because any stable inversion process for (1) would require Φ to map vectors of disjoint support to particularly dissimilar measurements. For the present paper, we are interested in upper bounds on RICs; in this spirit, the following result illustrates some sort of equivalence between RICs and ROCs:
Lemma 10
(Lemma 1.2 in [10])
\(\theta_{K}\leq\delta_{2K}\leq\theta_{K}+\delta_{K}\).
To be fair, the above upper bound on \(\delta_{2K}\) does not immediately help in estimating \(\delta_{2K}\), as it requires one to estimate \(\delta_{K}\). Certainly, we may iteratively apply this bound to get
\(\delta_{2K}\leq\theta_{K}+\theta_{\lceil K/2\rceil}+\theta_{\lceil K/4\rceil}+\cdots+\theta_{1}+\delta_{1}. \qquad(6)\)
Note that \(\delta_{1}\) is particularly easy to calculate:
\(\delta_{1}=\max_{1\leq n\leq N}\bigl|\|\varphi_{n}\|^{2}-1\bigr|,\)
which is zero when the columns of Φ have unit norm. In pursuit of a better upper bound on δ 2K , we use techniques from [8] to remove the log factor from (6):
Lemma 11
\(\delta_{2K}\leq2\theta_{K}+\delta_{1}\).
Proof
Given a matrix \(\varPhi=[\varphi_{1}\cdots\varphi_{N}]\), we want to upper-bound the smallest δ for which \((1-\delta)\|x\|^{2}\leq\|\varPhi x\|^{2}\leq(1+\delta)\|x\|^{2}\), or equivalently:
\(\bigl|\|\varPhi x\|^{2}-\|x\|^{2}\bigr|\leq\delta\|x\|^{2} \qquad(7)\)
for every nonzero 2K-sparse vector x. We observe from (7) that we may take x to have unit norm without loss of generality. Letting \(\mathcal{K}\) denote a size-2K set that contains the support of x, and letting \(\{x_{k}\}_{k\in\mathcal{K}}\) denote the corresponding entries of x, the triangle inequality gives
Since \(\sum_{i\in\mathcal{K}}|x_{i}|^{2}=1\), the second term of (8) satisfies
and so it remains to bound the first term of (8). To this end, we note that for each \(i,j\in\mathcal{K}\) with j≠i, the term \(\langle x_{i}\varphi_{i},x_{j}\varphi_{j}\rangle\) appears in
as many times as there are size-K subsets of \(\mathcal{K}\) which contain i but not j, i.e., \(\binom{2K-2}{K-1}\) times. Thus, we use the triangle inequality and the definition of restricted orthogonality to get
At this point, x having unit norm implies \((\sum_{i\in\mathcal{I}}|x_{i}|^{2})^{1/2}(\sum_{j\in\mathcal{K}\setminus\mathcal{I}}|x_{j}|^{2})^{1/2}\leq\frac{1}{2}\), and so
Applying both this and (9) to (8) gives the result. □
Having discussed the relationship between restricted isometry and restricted orthogonality, we are now ready to introduce the property used in [8] to demonstrate RIP:
Definition 12
The matrix \(\varPhi=[\varphi_{1}\cdots\varphi_{N}]\) has \((K,\hat{\theta})\)-flat restricted orthogonality if
\(\biggl|\biggl\langle\sum_{i\in\mathcal{I}}\varphi_{i},\sum_{j\in\mathcal{J}}\varphi_{j}\biggr\rangle\biggr|\leq\hat{\theta}\bigl(|\mathcal{I}||\mathcal{J}|\bigr)^{1/2}\)
for every disjoint pair of subsets \(\mathcal{I},\mathcal{J}\subseteq\{1,\ldots,N\}\) with \(|\mathcal{I}|,|\mathcal{J}|\leq K\).
Note that Φ has \((K,\theta_{K})\)-flat restricted orthogonality (FRO) by taking x and y in Definition 9 to be the characteristic functions \(\chi_{\mathcal{I}}\) and \(\chi_{\mathcal{J}}\), respectively. Also to be clear, flat restricted orthogonality is called flat RIP in [8]; we feel the name change is appropriate considering the preceding literature. Moreover, the definition of flat RIP in [8] required Φ to have unit-norm columns, whereas we strengthen the corresponding results so as to make no such requirement. Interestingly, FRO bears some resemblance to the cut-norm of the Gram matrix \(\varPhi^{*}\varPhi\), defined as the maximum value of \(|\sum_{i\in\mathcal{I}}\sum_{j\in\mathcal{J}}\langle\varphi_{i},\varphi_{j}\rangle|\) over all subsets \(\mathcal{I},\mathcal{J}\subseteq\{1,\ldots,N\}\); the cut-norm has received some attention recently for the hardness of its approximation [2]. The following theorem illustrates the utility of flat restricted orthogonality as an estimate of the RIC:
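The FRO quantity can be probed numerically by sampling disjoint index sets and maximizing \(|\langle\sum_{i\in\mathcal{I}}\varphi_{i},\sum_{j\in\mathcal{J}}\varphi_{j}\rangle|/(|\mathcal{I}||\mathcal{J}|)^{1/2}\). Random sampling only yields a lower estimate of the best \(\hat{\theta}\); the sketch below (names ours) is meant for experimentation, not certification:

```python
import numpy as np

def fro_constant_sampled(Phi, K, trials=2000, seed=0):
    """Randomized lower estimate of the smallest theta-hat for which Phi
    has (K, theta-hat)-flat restricted orthogonality: sample disjoint
    index sets I, J with |I|, |J| <= K and maximize
    |<sum_I phi_i, sum_J phi_j>| / sqrt(|I| |J|)."""
    rng = np.random.default_rng(seed)
    N = Phi.shape[1]
    best = 0.0
    for _ in range(trials):
        sizes = rng.integers(1, K + 1, size=2)      # |I| and |J|
        idx = rng.permutation(N)[: sizes[0] + sizes[1]]
        I, J = idx[: sizes[0]], idx[sizes[0]:]      # disjoint by construction
        val = abs(Phi[:, I].sum(axis=1) @ Phi[:, J].sum(axis=1))
        best = max(best, val / np.sqrt(len(I) * len(J)))
    return best
```

For an orthonormal basis the estimate is exactly zero, since disjoint column sums are orthogonal.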
Theorem 13
A matrix with \((K,\hat{\theta})\)-flat restricted orthogonality has a restricted orthogonality constant \(\theta_{K}\leq C\hat{\theta}\log K\), and we may take C=75.
Indeed, when combined with Lemma 11, this result gives an upper bound on the RIC: \(\delta_{2K}\leq 2C\hat{\theta}\log K + \delta_{1}\). The noteworthy benefit of this upper bound is that the problem of estimating singular values of submatrices is reduced to a combinatorial problem of bounding the coherence of disjoint sums of columns. Furthermore, this reduction comes at the price of a mere log factor in the estimate. In [8], Bourgain et al. managed to satisfy this combinatorial coherence property using techniques from additive combinatorics. While we will not discuss their construction, we find the proof of Theorem 13 to be instructive; our proof is valid for all values of K (as opposed to sufficiently large K in the original [8]), and it has near-optimal constants where appropriate. The proof can be found in the Appendix.
To reiterate, Bourgain et al. [8] used flat restricted orthogonality to build the only known deterministic construction of M×N RIP matrices that support sparsity levels K on the order of M 1/2+ε for some small value of ε. We are particularly interested in the efficacy of FRO as a technique to demonstrate RIP in general. Certainly, [8] shows that FRO can produce at least an ε improvement over the Gershgorin technique discussed in the previous section, but it remains to be seen whether FRO can do better.
In the remainder of this section, we will use random matrices to show that flat restricted orthogonality is actually capable of demonstrating RIP for sparsity levels only logarithmic factors away from optimal (see [5]), and hence much larger than those demonstrated in [8]. We hope this realization will spur further research into deterministic constructions which satisfy FRO. To evaluate FRO, we investigate how well it performs with random matrices; in doing so, we give an alternative proof that certain random matrices satisfy RIP with high probability:
Theorem 14
Construct an M×N matrix Φ by drawing each of its entries independently from a Gaussian distribution with mean zero and variance \(\frac{1}{M}\), take C to be the constant from Theorem 13, and set α=0.01. Then Φ has \((K,\frac{(1-\alpha)\delta}{2C\log K})\)-flat restricted orthogonality and \(\delta_{1}\leq\alpha\delta\), and therefore Φ has the (2K,δ)-restricted isometry property, with high probability provided \(M\geq\frac{33C^{2}}{\delta^{2}}K\log^{2} K\log N\).
In proving this result, we will make use of the following Bernstein inequality:
Theorem 15
Let \(\{Z_{m}\}_{m=1}^{M}\) be independent random variables of mean zero with bounded moments, and suppose there exists L>0 such that
for every k≥2. Then
provided
Proof of Theorem 14
Considering Lemma 11, it suffices to show that Φ has restricted orthogonality and that δ 1 is sufficiently small. First, to demonstrate restricted orthogonality, it suffices to demonstrate FRO by Theorem 13, and so we will ensure that the following quantity is small:
Notice that \(X_{m}:=\sum_{i\in\mathcal{I}}\varphi_{i}[m]\) and \(Y_{m}:=\sum_{j\in\mathcal{J}}\varphi_{j}[m]\) are mutually independent over all m=1,…,M since \(\mathcal{I}\) and \(\mathcal{J}\) are disjoint. Also, \(X_{m}\) is Gaussian with mean zero and variance \(\frac{|\mathcal{I}|}{M}\), while \(Y_{m}\) similarly has mean zero and variance \(\frac{|\mathcal{J}|}{M}\). Viewed this way, (12) being small corresponds to the sum of independent random variables \(Z_{m}:=X_{m}Y_{m}\) having its probability measure concentrated at zero. To this end, Theorem 15 is naturally applicable, as the absolute central moments of a Gaussian random variable X with mean zero and variance \(\sigma^{2}\) are well known:
Since \(Z_{m}=X_{m}Y_{m}\) is a product of independent Gaussian random variables, this gives
Further since \(\mathbb{E}|Z_{m}|^{2}=\frac{|\mathcal{I}||\mathcal{J}|}{M^{2}}\), we may define \(L:=2\frac{(|\mathcal{I}||\mathcal{J}|)^{1/2}}{M}\) to get (10). Later, we will take \(\hat{\theta}<\delta<\sqrt{2}-1<\frac{1}{2}\). Considering
we therefore have (11), which in this case has the form
where the probability is doubled due to the symmetric distribution of \(\sum_{m=1}^{M} Z_{m}\). Since we need to account for all possible choices of \(\mathcal{I}\) and \(\mathcal{J}\), we will perform a union bound. The total number of choices is given by
and so the union bound gives
Thus, Gaussian matrices tend to have FRO, and hence restricted orthogonality by Theorem 13; this is made more precise below.
Again by Lemma 11, it remains to show that \(\delta_{1}\) is sufficiently small. To this end, we note that \(M\|\varphi_{n}\|^{2}\) has a chi-squared distribution with M degrees of freedom, and so we can use another (simpler) concentration-of-measure result; see Lemma 1 of [21]:
for any t>0. Specifically, we pick
and we perform a union bound over the N choices for \(\varphi_{n}\):
To summarize, Lemma 11, the union bound, Theorem 13, and (13) and (14) give
and so \(M\geq\frac{33C^{2}}{\delta^{2}}K\log^{2} K\log N\) gives that Φ has (2K,δ)-RIP with high probability. □
We note that a version of Theorem 14 also holds for matrices whose entries are independent Bernoulli random variables taking values \(\pm\frac{1}{\sqrt{M}}\) with equal probability. In this case, one can again apply Theorem 15 by comparing moments with those of the Gaussian distribution; also, a union bound with \(\delta_{1}\) will not be necessary since the columns have unit norm, meaning \(\delta_{1}=0\).
4 Restricted Isometry by the Power Method
In the previous section, we established the efficacy of flat restricted orthogonality as a technique to demonstrate RIP. While flat restricted orthogonality has proven useful in the past [8], future deterministic RIP constructions might not use this technique. Indeed, it would be helpful to have other techniques available that demonstrate RIP beyond the square-root bottleneck. In pursuit of such techniques, we recall that the smallest δ for which Φ is (K,δ)-RIP is given in terms of operator norms in (2). In addition, we notice that for any self-adjoint matrix A and any 1≤p≤∞,
\(\|A\|_{2}=\|\lambda(A)\|_{\infty}\leq\|\lambda(A)\|_{p},\)
where λ(A) denotes the spectrum of A with multiplicities. Let \(A=UDU^{*}\) be the eigenvalue decomposition of A. When p is even, we can express \(\|\lambda(A)\|_{p}\) in terms of an easy-to-calculate trace:
\(\|\lambda(A)\|_{p}=\bigl(\mathrm{Tr}[D^{p}]\bigr)^{1/p}=\bigl(\mathrm{Tr}[A^{p}]\bigr)^{1/p}.\)
Combining these ideas with the fact that \(\|\cdot\|_{p}\to\|\cdot\|_{\infty}\) pointwise as p→∞ leads to the following:
Theorem 16
Given an M×N matrix Φ, define
\(\delta_{K;q}:=\max_{\mathcal{K}\subseteq\{1,\ldots,N\},\ |\mathcal{K}|=K}\mathrm{Tr}\bigl[(\varPhi_{\mathcal{K}}^{*}\varPhi_{\mathcal{K}}-\mathrm{I}_{K})^{2q}\bigr]^{\frac{1}{2q}}.\)
Then Φ has the \((K,\delta_{K;q})\)-restricted isometry property for every q≥1. Moreover, the restricted isometry constant of Φ is approached by these estimates: \(\lim_{q\to\infty}\delta_{K;q}=\delta_{K}\).
Similar to flat restricted orthogonality, this power method has a combinatorial aspect that prompts one to check every sub-Gram matrix of size K; one could argue that the power method is slightly less combinatorial, as flat restricted orthogonality is a statement about all pairs of disjoint subsets of size at most K. Regardless, the work of Bourgain et al. [8] illustrates that combinatorial properties can be useful, and there may exist constructions to which the power method would be naturally applied. Moreover, we note that since \(\delta_{K;q}\) approaches \(\delta_{K}\), a sufficiently large choice of q should deliver better-than-ε improvement over the Gershgorin analysis. How large should q be? If we assume Φ has unit-norm columns, taking q=1 gives
\(\delta_{K}\leq\delta_{K;1}\leq\bigl(K(K-1)\bigr)^{1/2}\mu, \qquad(15)\)
where μ is the worst-case coherence of Φ. Equality is achieved above whenever Φ is an ETF, in which case (15) along with reasoning similar to (4) demonstrates that Φ is RIP with sparsity levels on the order of \(\sqrt{M}\), as the Gershgorin analysis established. It remains to be shown how \(\delta_{K;2}\) compares. To make this comparison, we apply the power method to random matrices:
Theorem 17
Construct an M×N matrix Φ by drawing each of its entries independently from a Gaussian distribution with mean zero and variance \(\frac{1}{M}\), and take \(\delta_{K;q}\) to be as defined in Theorem 16. Then \(\delta_{K;q}\leq\delta\), and therefore Φ has the (K,δ)-restricted isometry property, with high probability provided \(M\geq\frac{81}{\delta^{2}}K^{1+1/q}\log\frac{eN}{K}\).
While flat restricted orthogonality comes with a negligible penalty of \(\log^{2}K\) in the number of measurements, the power method has a penalty of \(K^{1/q}\). As such, the case q=1 uses on the order of \(K^{2}\) measurements, which matches our calculation in (15). Moreover, the power method with q=2 can demonstrate RIP with on the order of \(K^{3/2}\) measurements, i.e., \(K\sim M^{1/2+1/6}\), which is considerably better than an ε improvement over the Gershgorin technique.
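For small instances, the power-method estimate can be computed directly. The sketch below (function name ours) assumes \(\delta_{K;q}\) is the maximum of \(\mathrm{Tr}[(\varPhi_{\mathcal{K}}^{*}\varPhi_{\mathcal{K}}-\mathrm{I}_{K})^{2q}]^{1/2q}\) over all size-K subsets \(\mathcal{K}\), consistent with the trace bounds used in the proof of Theorem 17:

```python
import numpy as np
from itertools import combinations

def delta_Kq(Phi, K, q):
    """Brute-force power-method estimate of the restricted isometry
    constant: max over size-K column subsets of
    Tr[(Phi_K^T Phi_K - I)^(2q)]^(1/(2q))."""
    N = Phi.shape[1]
    best = 0.0
    for cols in combinations(range(N), K):
        A = Phi[:, cols].T @ Phi[:, cols] - np.eye(K)
        # even trace power = sum of eigenvalues^(2q), always nonnegative
        best = max(best, np.trace(np.linalg.matrix_power(A, 2 * q)) ** (1 / (2 * q)))
    return best
```

As q grows, the estimate decreases monotonically toward the true \(\delta_{K}\), at the cost of extra matrix multiplications per subset.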
Proof of Theorem 17
Take \(t:=\frac{\delta}{3K^{1/2q}}-(\frac{K}{M})^{1/2}\) and pick \(\mathcal{K}\subseteq\{1,\ldots,N\}\). Then Theorem II.13 of [14] states
Continuing, we use the fact that \(\lambda(\varPhi_{\mathcal{K}}^{*}\varPhi_{\mathcal{K}})=\sigma(\varPhi_{\mathcal{K}})^{2}\) to get
where the last inequality follows from the fact that \((\frac{K}{M})^{1/2}+t<1\). Since \(\varPhi_{\mathcal{K}}^{*}\varPhi_{\mathcal{K}}\) and I K are simultaneously diagonalizable, the spectrum of \(\varPhi_{\mathcal{K}}^{*}\varPhi_{\mathcal{K}}-\mathrm{I}_{K}\) is given by \(\lambda(\varPhi_{\mathcal{K}}^{*}\varPhi_{\mathcal{K}}-\mathrm{I}_{K})=\lambda(\varPhi_{\mathcal{K}}^{*}\varPhi_{\mathcal{K}})-1\). Combining this with (16) then gives
Considering \(\mathrm{Tr}[A^{2q}]^{\frac{1}{2q}}=\|\lambda(A)\|_{2q}\leq K^{\frac{1}{2q}}\|\lambda(A)\|_{\infty}\), we continue:
From here, we perform a union bound over all possible choices of \(\mathcal{K}\):
Rearranging \(M\geq\frac{81}{\delta^{2}}K^{1+1/q}\log\frac{eN}{K}\) gives \(K^{1/2}\leq\frac{\delta M^{1/2}}{9K^{1/2q}\log^{1/2}(eN/K)}\leq\frac{\delta M^{1/2}}{9K^{1/2q}}\), and so
5 Equiangular Tight Frames as RIP Candidates
In Sect. 2, we observed that equiangular tight frames (ETFs) are optimal RIP matrices under the Gershgorin analysis. In the present section, we reexamine ETFs as prospective RIP matrices. Specifically, we consider the possibility that certain classes of M×N ETFs support sparsity levels K larger than the order of \(\sqrt{M}\). Before analyzing RIP, let’s first observe some important features of ETFs. Recall that Definition 5 characterized ETFs in terms of their rows and columns. Interestingly, real ETFs have a natural alternative characterization.
Let Φ be a real M×N ETF, and consider the corresponding Gram matrix \(\varPhi^{*}\varPhi\). Observing Definition 5, we have from (i) that the diagonal entries of \(\varPhi^{*}\varPhi\) are 1’s. Also, (iii) indicates that the off-diagonal entries are equal in absolute value (to the Welch bound); since Φ has real entries, the phase of each off-diagonal entry of \(\varPhi^{*}\varPhi\) is either positive or negative. Letting μ denote the absolute value of the off-diagonal entries, we can decompose the Gram matrix as \(\varPhi^{*}\varPhi=\mathrm{I}_{N}+\mu S\), where S is a matrix of zeros on the diagonal and ±1’s on the off-diagonal. Here, S is referred to as a Seidel adjacency matrix, as S encodes the adjacency rule of a simple graph with i↔j whenever S[i,j]=−1; this correspondence originated in [33].
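This decomposition is simple to extract numerically; a minimal sketch under the stated convention (i↔j when S[i,j]=−1; helper name ours):

```python
import numpy as np

def seidel_adjacency(Phi, tol=1e-8):
    """Recover mu and the Seidel adjacency matrix S of a real ETF from
    Phi^T Phi = I + mu * S (all off-diagonal Gram entries of modulus mu)."""
    G = Phi.T @ Phi
    N = G.shape[0]
    off = G[~np.eye(N, dtype=bool)]
    mu = np.abs(off).mean()
    assert np.allclose(np.abs(off), mu, atol=tol), "frame is not equiangular"
    S = np.round((G - np.eye(N)) / mu).astype(int)
    return S, mu
```

For the 2×3 simplex ETF (three unit vectors at 120° angles), every off-diagonal inner product is −1/2, so S is the Seidel matrix of the complete graph on three vertices.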
There is an important equivalence class amongst ETFs: given an ETF Φ, one can negate any of the columns to form another ETF Φ′. Indeed, the ETF properties in Definition 5 are easily verified to hold for this new matrix. For obvious reasons, Φ and Φ′ are called flipping equivalent. This equivalence plays a key role in the following result, which characterizes real ETFs in terms of a particular class of strongly regular graphs:
Definition 18
We say a simple graph G is strongly regular of the form srg(v,k,λ,μ) if
(i) G has v vertices,
(ii) every vertex has k neighbors (i.e., G is k-regular),
(iii) every two adjacent vertices have λ common neighbors, and
(iv) every two non-adjacent vertices have μ common neighbors.
Theorem 19
(Corollary 5.6 in [34])
Every real M×N equiangular tight frame with N>M+1 is flipping equivalent to a frame whose Seidel adjacency matrix corresponds to the join of a vertex with a strongly regular graph of the form
Conversely, every such graph corresponds to flipping equivalence classes of equiangular tight frames in the same manner.
The previous two sections illustrated the main issue with the Gershgorin analysis: it ignores important cancellations in the sub-Gram matrices. We suspect that such cancellations would be more easily observed in a real ETF, since Theorem 19 neatly represents the Gram matrix’s off-diagonal oscillations in terms of adjacencies in a strongly regular graph. The following result gives a taste of how useful this graph representation can be:
Theorem 20
Take a real equiangular tight frame Φ with worst-case coherence μ, and let G denote the corresponding strongly regular graph in Theorem 19. Then the restricted isometry constant of Φ is given by \(\delta_{K}=(K-1)\mu\) for every \(K\leq\omega(G)+1\), where ω(G) denotes the size of the largest clique in G.
Proof
The Gershgorin analysis (3) gives the bound \(\delta_{K}\leq(K-1)\mu\), and so it suffices to prove \(\delta_{K}\geq(K-1)\mu\). Since K≤ω(G)+1, there exists a clique of size K in the join of G with a vertex. Let \(\mathcal{K}\) denote the vertices of this clique, and take \(S_{\mathcal{K}}\) to be the corresponding Seidel adjacency submatrix. In this case, \(S_{\mathcal{K}}=\mathrm{I}_{K}-\mathrm{J}_{K}\), where J K is the K×K matrix of all 1’s. Observing the decomposition \(\varPhi_{\mathcal{K}}^{*}\varPhi_{\mathcal{K}}=\mathrm{I}_{K}+\mu S_{\mathcal{K}}\), it follows from (2) that \(\delta_{K}\geq\|\varPhi_{\mathcal{K}}^{*}\varPhi_{\mathcal{K}}-\mathrm{I}_{K}\|_{2}=\mu\|\mathrm{I}_{K}-\mathrm{J}_{K}\|_{2}=(K-1)\mu,\)
which concludes the proof. □
This result indicates that the Gershgorin analysis is tight for all real ETFs, at least for sufficiently small values of K. In particular, in order for a real ETF to be RIP beyond the square-root bottleneck, its graph must have a small clique number. As an example, note that the first four columns of the Steiner ETF in (5) have negative inner products with each other, and thus the corresponding subgraph is a clique. In general, each block of an M×N Steiner ETF, whose size is guaranteed to be \(\mathrm{O}(\sqrt{M})\), is a lower-dimensional simplex and therefore has this property; this gives an alternative proof that the Gershgorin analysis of Steiner ETFs is tight for \(K=\mathrm{O}(\sqrt{M})\).
5.1 Equiangular Tight Frames with Flat Restricted Orthogonality
To find ETFs that are RIP beyond the square-root bottleneck, we must apply better techniques than Gershgorin. We first consider what it means for an ETF to have \((K,\hat{\theta})\)-flat restricted orthogonality. Take a real ETF Φ=[φ 1⋯φ N ] with worst-case coherence μ, and note that the corresponding Seidel adjacency matrix S can be expressed in terms of the usual {0,1}-adjacency matrix A of the same graph: S[i,j]=1−2A[i,j] whenever i≠j. Therefore, for every disjoint \(\mathcal{I},\mathcal{J}\subseteq\{1,\ldots,N\}\) with \(|\mathcal{I}|,|\mathcal{J}|\leq K\), we want \(\mu\bigl||\mathcal{I}||\mathcal{J}|-2E(\mathcal{I},\mathcal{J})\bigr|=\bigl|\bigl\langle\sum_{i\in\mathcal{I}}\varphi_{i},\sum_{j\in\mathcal{J}}\varphi_{j}\bigr\rangle\bigr|\leq\hat{\theta}\bigl(|\mathcal{I}||\mathcal{J}|\bigr)^{1/2},\) (19)
where \(E(\mathcal{I},\mathcal{J})\) denotes the number of edges between \(\mathcal{I}\) and \(\mathcal{J}\) in the graph. This condition bears a striking resemblance to the following well-known result in graph theory:
Lemma 21
(Expander mixing lemma [20])
Given a d-regular graph of n vertices, the second largest eigenvalue λ of its adjacency matrix (in absolute value) satisfies \(\bigl|E(\mathcal{I},\mathcal{J})-\frac{d}{n}|\mathcal{I}||\mathcal{J}|\bigr|\leq\lambda\sqrt{|\mathcal{I}||\mathcal{J}|}\) for every pair of vertex subsets \(\mathcal{I},\mathcal{J}\).
In words, the expander mixing lemma says that the number of edges between vertex subsets of a regular graph is roughly what you would expect in a random regular graph. For this lemma to be applicable to (19), we need the strongly regular graph of Theorem 19 to satisfy \(\frac{L}{N-1}=\frac{d}{n}\approx\frac{1}{2}\). Using the formula for L, it is not difficult to show that \(|\frac{L}{N-1}-\frac{1}{2}|=\mathrm{O}(M^{-1/2})\) provided N=O(M) and N≥2M. Furthermore, the second largest eigenvalue of the strongly regular graph will be \(\lambda\approx\frac{1}{2}N^{1/2}\), and so the expander mixing lemma says the optimal \(\hat{\theta}\) is \(\leq 2\mu\lambda\approx(\frac{N-M}{M})^{1/2}\) since \(\mu=(\frac{N-M}{M(N-1)})^{1/2}\). This is a rather weak estimate for \(\hat{\theta}\) because the expander mixing lemma does not account for the sizes of \(\mathcal{I}\) and \(\mathcal{J}\) being ≤K. Put in this light, a real ETF that has flat restricted orthogonality corresponds to a strongly regular graph that satisfies a particularly strong version of the expander mixing lemma.
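The mixing inequality is easy to spot-check numerically. The sketch below is our own illustration for the Paley graph on 13 vertices (the graph of Sect. 5.3), taking λ to be the largest nontrivial eigenvalue in absolute value and testing random disjoint vertex subsets.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 13
residues = {(k * k) % p for k in range(1, p)}
A = np.array([[1 if (j - i) % p in residues else 0 for j in range(p)]
              for i in range(p)])

d = int(A[0].sum())                     # the graph is d-regular with d = 6
eigs = np.sort(np.linalg.eigvalsh(A))
lam = max(abs(eigs[0]), abs(eigs[-2]))  # largest nontrivial eigenvalue in modulus

# spot-check the mixing inequality on random disjoint vertex subsets
for _ in range(200):
    perm = rng.permutation(p)
    I, J = perm[:4], perm[4:8]
    E = int(A[np.ix_(I, J)].sum())      # number of edges between I and J
    assert abs(E - d * len(I) * len(J) / p) <= lam * np.sqrt(len(I) * len(J))
print(round(lam, 3))  # equals (1 + sqrt(13))/2, roughly sqrt(p)/2 as in the text
```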
5.2 Equiangular Tight Frames and the Power Method
Next, we try applying the power method to ETFs. Given a real ETF Φ=[φ 1⋯φ N ], let H:=Φ ∗ Φ−I N denote the “hollow” Gram matrix. Also, take \(E_{\mathcal{K}}\) to be the N×K matrix built from the columns of I N that are indexed by \(\mathcal{K}\). Then \(\varPhi_{\mathcal{K}}^{*}\varPhi_{\mathcal{K}}-\mathrm{I}_{K}=E_{\mathcal{K}}^{*}HE_{\mathcal{K}}\), and so \(\operatorname{Tr}[(\varPhi_{\mathcal{K}}^{*}\varPhi_{\mathcal{K}}-\mathrm{I}_{K})^{2q}]=\operatorname{Tr}[(E_{\mathcal{K}}^{*}HE_{\mathcal{K}})^{2q}]\).
Since \(E_{\mathcal{K}}E_{\mathcal{K}}^{*}=\sum_{k\in\mathcal{K}}\delta_{k}\delta_{k}^{*}\), where δ k is the kth identity basis element, we continue: \(\operatorname{Tr}[(E_{\mathcal{K}}^{*}HE_{\mathcal{K}})^{2q}]=\sum_{k_{0},\ldots,k_{2q-1}\in\mathcal{K}}\ \prod_{\ell\in\mathbb{Z}_{2q}}H[k_{\ell},k_{\ell+1}],\) (20)
where the last step used the cyclic property of the trace. From here, note that H has a zero diagonal, meaning several of the terms in (20) are zero, namely, those for which k ℓ+1=k ℓ for some \(\ell\in\mathbb{Z}_{2q}\). To simplify (20), take \(\mathcal{K}^{(2q)}\) to be the set of 2q-tuples satisfying k ℓ+1≠k ℓ for every \(\ell\in\mathbb{Z}_{2q}\): \(\operatorname{Tr}[(E_{\mathcal{K}}^{*}HE_{\mathcal{K}})^{2q}]=\mu^{2q}\sum_{\{k_{\ell}\}\in\mathcal{K}^{(2q)}}\ \prod_{\ell\in\mathbb{Z}_{2q}}S[k_{\ell},k_{\ell+1}],\) (21)
where μ is the worst-case coherence of Φ, and S is the corresponding Seidel adjacency matrix. Note that the left-hand side is necessarily nonnegative, while it is not immediate why the right-hand side should be. This indicates that more simplification can be done, but for the sake of clarity, we will perform this simplification in the special case where q=2; the general case is very similar. When q=2, we are concerned with 4-tuples \((k_{0},k_{1},k_{2},k_{3})\in\mathcal{K}^{(4)}\). Let’s partition these 4-tuples according to the values taken by k 0 and k q =k 2. Note, for fixed k 0 and k 2, that k 1 can be any value other than k 0 or k 2, as can k 3. This leads to the following simplification: \(\sum_{\{k_{\ell}\}\in\mathcal{K}^{(4)}}\prod_{\ell\in\mathbb{Z}_{4}}S[k_{\ell},k_{\ell+1}]=\sum_{k_{0}\in\mathcal{K}}\Bigl(\sum_{k_{1}\in\mathcal{K}\setminus\{k_{0}\}}S[k_{0},k_{1}]^{2}\Bigr)^{2}+\sum_{\substack{k_{0},k_{2}\in\mathcal{K}\\ k_{0}\neq k_{2}}}\Bigl(\sum_{k_{1}\in\mathcal{K}\setminus\{k_{0},k_{2}\}}S[k_{0},k_{1}]S[k_{1},k_{2}]\Bigr)^{2}.\)
The first term above is K(K−1)², while the other term is not as easy to analyze, as we expect a certain degree of cancellation. Substituting this simplification into (21) gives
If there were no cancellations in the second term, then it would equal K(K−1)(K−2)², thereby dominating the expression. However, if the oscillations occurred as a ±1 Bernoulli random variable, we could expect this term to be on the order of K³, matching the order of the first term. In this hypothetical case, since μ≤M −1/2, the parameter \(\delta_{K;2}^{4}\) defined in Theorem 16 scales as \(\frac{K^{3}}{M^{2}}\), and so \(M\sim K^{3/2}\); this corresponds to the behavior exhibited in Theorem 17. To summarize, much like flat restricted orthogonality, applying the power method to ETFs leads to interesting combinatorial questions regarding subgraphs, even when q=2.
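The Bernoulli heuristic above can be tested directly: draw a random symmetric ±1 matrix with zero diagonal as a stand-in for S, and compare the two terms of the q=2 expansion. This is our own illustrative experiment (K=40 and the random model are our choices, not the paper's).

```python
import numpy as np

rng = np.random.default_rng(0)
K = 40
# random Seidel-type matrix: symmetric, zero diagonal, +-1 off the diagonal
S = np.triu(rng.choice([-1, 1], size=(K, K)), 1)
S = S + S.T

first = K * (K - 1) ** 2   # contribution of the tuples with k_0 = k_2
second = 0                 # the term where cancellation should occur
for k0 in range(K):
    for k2 in range(K):
        if k0 != k2:
            inner = sum(S[k0, k1] * S[k1, k2]
                        for k1 in range(K) if k1 not in (k0, k2))
            second += inner ** 2

print(first, second)
# second stays far below its no-cancellation value K(K-1)(K-2)^2,
# and empirically lands on the order of K^3, matching the first term
assert second < K * (K - 1) * (K - 2) ** 2
```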
5.3 The Paley Equiangular Tight Frame as an RIP Candidate
Pick some prime p ≡ 1 mod 4, and build an M×p matrix H by selecting the \(M:=\frac{p+1}{2}\) rows of the p×p discrete Fourier transform matrix which are indexed by Q, the quadratic residues modulo p (including zero). To be clear, the entries of H are scaled to have unit modulus. Next, take D to be an M×M diagonal matrix whose zeroth diagonal entry is p −1/2, and whose remaining M−1 entries are \((\frac{2}{p})^{1/2}\). Now build the matrix Φ by concatenating DH with the zeroth identity basis element; for example, when p=5, we have Q={0,1,4} and the 3×6 matrix \(\varPhi=\bigl[\begin{smallmatrix}5^{-1/2}&5^{-1/2}&5^{-1/2}&5^{-1/2}&5^{-1/2}&1\\(\frac{2}{5})^{1/2}&(\frac{2}{5})^{1/2}\omega&(\frac{2}{5})^{1/2}\omega^{2}&(\frac{2}{5})^{1/2}\omega^{3}&(\frac{2}{5})^{1/2}\omega^{4}&0\\(\frac{2}{5})^{1/2}&(\frac{2}{5})^{1/2}\omega^{4}&(\frac{2}{5})^{1/2}\omega^{3}&(\frac{2}{5})^{1/2}\omega^{2}&(\frac{2}{5})^{1/2}\omega&0\end{smallmatrix}\bigr]\), where \(\omega:=\mathrm{e}^{2\pi\mathrm{i}/5}\).
We claim that in general, this process produces an M×2M equiangular tight frame, which we call the Paley ETF [25]. Presuming for the moment that this claim is true, we have the following result which lends hope for the Paley ETF as an RIP matrix:
Lemma 22
An M×2M Paley equiangular tight frame has restricted isometry constant δ K <1 for all K≤M.
Proof
First, we note that Theorem 6 of [1] used Chebotarëv’s theorem [29] to prove that the spark of the M×2M Paley ETF Φ is M+1, that is, every size-M subcollection of columns of Φ forms a spanning set. Thus, for every \(\mathcal{K}\subseteq\{1,\ldots,2M\}\) of size ≤M, the smallest singular value of \(\varPhi_{\mathcal{K}}\) is positive. It remains to show that the square of the largest singular value is strictly less than 2. Let x be a unit vector for which \(\|\varPhi_{\mathcal{K}}^{*}x\|=\|\varPhi_{\mathcal{K}}^{*}\|_{2}\). Then since the spark of Φ is M+1, the columns of \(\varPhi_{\mathcal{K}^{\mathrm{c}}}\) span, and so \(\|\varPhi_{\mathcal{K}}^{*}\|_{2}^{2}=\|\varPhi_{\mathcal{K}}^{*}x\|^{2}<\|\varPhi_{\mathcal{K}}^{*}x\|^{2}+\|\varPhi_{\mathcal{K}^{\mathrm{c}}}^{*}x\|^{2}=\|\varPhi^{*}x\|^{2}=2,\)
where the final step follows from Definition 5(i)–(ii), which imply ΦΦ ∗=2I M . □
Now that we have an interest in the Paley ETF Φ, we wish to verify that it is, in fact, an ETF. It suffices to show that the columns of Φ have unit norm, and that the inner products between distinct columns equal the Welch bound in absolute value. Certainly, the zeroth identity basis element is unit-norm, while the squared norm of each of the other columns is given by \(\frac{1}{p}+(M-1)\frac{2}{p}=\frac{2M-1}{p}=1\). Also, the inner product between the zeroth identity basis element and any other column equals the zeroth entry of that column: \(p^{-1/2}= (\frac{N-M}{M(N-1)})^{1/2}\). It remains to calculate the inner product between distinct columns which are not identity basis elements. To this end, note that since a 2=b 2 if and only if a=±b, the sequence \(\{k^{2}\}_{k=1}^{p-1}\subseteq\mathbb{Z}_{p}\) doubly covers Q∖{0}, and so \(\langle\varphi_{n},\varphi_{n'}\rangle=\frac{1}{p}+\frac{2}{p}\sum_{k\in Q\setminus\{0\}}\mathrm{e}^{2\pi\mathrm{i}k(n'-n)/p}=\frac{1}{p}\sum_{k=0}^{p-1}\mathrm{e}^{2\pi\mathrm{i}k^{2}(n'-n)/p}.\)
This well-known expression is called a quadratic Gauss sum, and since p ≡ 1 mod 4, its value is determined by the Legendre symbol in the following way: \(\langle\varphi_{n},\varphi_{n'}\rangle=\frac{1}{\sqrt{p}}(\frac{n'-n}{p})\) for every \(n,n'\in\mathbb{Z}_{p}\) with n≠n′, where \((\frac{n}{p})\) equals +1 if n is a nonzero quadratic residue modulo p, −1 if n is not a quadratic residue modulo p, and 0 if n ≡ 0 mod p.
Having established that Φ is an ETF, we notice that the inner products between distinct columns of Φ are real. This implies that the columns of Φ can be unitarily rotated to form a real ETF Ψ; indeed, one may take Ψ to be the M×2M matrix formed by taking the nonzero rows of \(L^{\mathrm{T}}\) in the Cholesky factorization \(\varPhi^{*}\varPhi=LL^{\mathrm{T}}\). As such, we consider the Paley ETF to be real. From here, Theorem 19 prompts us to find the corresponding strongly regular graph. First, we can flip the identity basis element so that its inner products with the other columns of Φ are all negative. As such, the corresponding vertex in the graph will be adjacent to each of the other vertices; naturally, this will be the vertex to which the strongly regular graph is joined. For the remaining vertices, n↔n′ precisely when \((\frac{n'-n}{p})=-1\), that is, when n′−n is not a quadratic residue. The corresponding subgraph is therefore the complement of the Paley graph, which, since Paley graphs are self-complementary [27], is again the Paley graph. In general, Paley graphs of order p necessarily have p ≡ 1 mod 4, and so this correspondence is particularly natural.
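These calculations are easy to sanity-check numerically. The sketch below (our own, with one fixed DFT sign convention, which the paper does not specify) builds the M×2M Paley ETF for p=13 and verifies the ETF properties: unit-norm columns, off-diagonal Gram entries of modulus \(1/\sqrt{p}\), and tightness \(\varPhi\varPhi^{*}=2\mathrm{I}_{M}\).

```python
import numpy as np

p = 13
M = (p + 1) // 2
Q = sorted({(k * k) % p for k in range(p)})   # quadratic residues mod p, with zero
F = np.exp(-2j * np.pi * np.outer(Q, np.arange(p)) / p)  # M rows of the p x p DFT
D = np.diag([p ** -0.5] + [(2 / p) ** 0.5] * (M - 1))
Phi = np.hstack([D @ F, np.eye(M)[:, :1]])    # append the zeroth identity element

G = Phi.conj().T @ Phi
off = np.abs(G - np.diag(np.diag(G)))
assert np.allclose(np.diag(G).real, 1)               # unit-norm columns
assert np.allclose(off[off > 1e-9], p ** -0.5)       # equiangular at the Welch bound
assert np.allclose(Phi @ Phi.conj().T, 2 * np.eye(M))  # tight: Phi Phi* = 2I
print(Phi.shape)  # (7, 14)
```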
One interesting thing about the Paley ETF’s restricted isometry is that it lends insight into important properties of the Paley graph. The following is the best known upper bound for the clique number of the Paley graph of prime order (see Theorem 13.14 of [7] and discussion thereafter), and we give a new proof of this bound using restricted isometry:
Theorem 23
Let G denote the Paley graph of prime order p. Then the size of the largest clique is \(\omega(G)<\sqrt{p}\).
Proof
We start by showing ω(G)+1≤M. Suppose otherwise: that there exists a clique \(\mathcal{K}\) of size M+1 in the join of a vertex with G. Then the corresponding sub-Gram matrix of the Paley ETF has the form \(\varPhi_{\mathcal{K}}^{*}\varPhi_{\mathcal{K}}=(1+\mu)\mathrm{I}_{M+1}-\mu\mathrm{J}_{M+1}\), where μ=p −1/2 is the worst-case coherence and J M+1 is the (M+1)×(M+1) matrix of 1’s. Since the largest eigenvalue of J M+1 is M+1, the smallest eigenvalue of \(\varPhi_{\mathcal{K}}^{*}\varPhi_{\mathcal{K}}\) is \(1+p^{-1/2}-(M+1)p^{-1/2}=1-\frac{1}{2}(p+1)p^{-1/2}\), which is negative when p≥5, contradicting the fact that \(\varPhi_{\mathcal{K}}^{*}\varPhi_{\mathcal{K}}\) is positive semidefinite.
Since ω(G)+1≤M, we can apply Lemma 22 and Theorem 20 to get \(\frac{\omega(G)}{\sqrt{p}}=\omega(G)\mu=\delta_{\omega(G)+1}<1,\) (22)
and rearranging gives the result. □
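For small primes, the bound of Theorem 23 can be checked exhaustively. The sketch below (our own) computes the clique number of the Paley graph on 13 vertices by brute force and compares it against \(\sqrt{p}\); the brute-force search is only feasible for small p.

```python
import numpy as np
from itertools import combinations

p = 13
residues = {(k * k) % p for k in range(1, p)}  # nonzero quadratic residues mod 13
adjacent = lambda i, j: (j - i) % p in residues  # symmetric since p = 1 mod 4

def clique_number(p):
    # brute force over vertex subsets of increasing size
    best = 1
    for k in range(2, p + 1):
        if not any(all(adjacent(i, j) for i, j in combinations(c, 2))
                   for c in combinations(range(p), k)):
            return best
        best = k
    return best

w = clique_number(p)
print(w, np.sqrt(p))   # 3 < sqrt(13), consistent with Theorem 23
assert w < np.sqrt(p)
```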
It is common to apply probabilistic and heuristic reasoning to gain intuition in number theory. For example, consecutive entries of the Legendre symbol are known to mimic certain properties of a ±1 Bernoulli random variable [23]. Moreover, Paley graphs enjoy a certain quasi-random property that was studied in [12]. On the other hand, Graham and Ringrose [19] showed that, while random graphs of size p have an expected clique number of \((1+o(1))2\log p/\log 2\), Paley graphs of prime order deviate from this random behavior, having a clique number \(\geq c\log p\log\log\log p\) infinitely often. The best known universal lower bound, \((1/2+o(1))\log p/\log 2\), is given in [13], which indicates that the random graph analysis is at least tight in some sense. Regardless, these lower bounds are far from the upper bound \(\sqrt{p}\) of Theorem 23, and it would be nice if probabilistic arguments could be leveraged to improve this bound, or at least provide some intuition.
Note that our proof (22) hinged on the fact that δ ω(G)+1<1, courtesy of Lemma 22. Hence, any improvement to our estimate for δ ω(G)+1 would directly improve the best known upper bound on the Paley graph’s clique number. To approach such an improvement, note that for large p, the Fourier portion DH of the Paley ETF is not significantly different from the normalized partial Fourier matrix \((\frac{2}{p+1})^{1/2}H\); indeed, \(\|H_{\mathcal{K}}^{*}D^{2}H_{\mathcal{K}}-\frac{2}{p+1}H_{\mathcal{K}}^{*}H_{\mathcal{K}}\|_{2}\leq\frac{2}{p}\) for every \(\mathcal{K}\subseteq\mathbb{Z}_{p}\) of size \(\leq\frac{p+1}{2}\), and so the difference vanishes. If we view the quadratic residues modulo p (the row indices of H) as random, then a random partial Fourier matrix serves as a proxy for the Fourier portion of the Paley ETF. With this in mind, we appeal to the following:
Theorem 24
(Theorem 3.2 in [24])
Draw rows from the N×N discrete Fourier transform matrix uniformly at random with replacement to construct an M×N matrix, and then normalize the columns to form Φ. Then Φ has restricted isometry constant δ K ≤δ with probability 1−ε provided \(\frac{M}{\log M}\geq \frac{C}{\delta^{2}}K\log^{2} K\log N\log\varepsilon^{-1}\), where C is a universal constant.
In our case, both M and N scale as p, and so picking δ to achieve equality above gives \(\delta^{2}=C'\,\frac{K\log^{2}K\log^{2}p\log\varepsilon^{-1}}{p}\) for some constant C′.
Continuing as in (22), denote ω=ω(G) and take K=ω to get \(\frac{\omega-1}{\sqrt{p}}=\delta_{\omega}\leq\Bigl(C'\,\frac{\omega\log^{2}\omega\log^{2}p\log\varepsilon^{-1}}{p}\Bigr)^{1/2}\) with probability 1−ε,
and then rearranging gives \(\omega/\log^{2}\omega\leq C''\log^{2}p\log\varepsilon^{-1}\) with probability 1−ε. Interestingly, having \(\omega/\log^{2}\omega=\mathrm{O}(\log^{3}p)\) with high probability (again, under the model that the quadratic residues are random) agrees with the results of Graham and Ringrose [19]. This gives some intuition for what we can expect the size of the Paley graph’s clique number to be, while at the same time demonstrating the power of Paley ETFs as RIP candidates. We conclude with the following, which can be reformulated in terms of both flat restricted orthogonality and the power method:
Conjecture 25
The Paley equiangular tight frame has the (K,δ)-restricted isometry property with some \(\delta<\sqrt{2}-1\) whenever \(K\leq\frac{Cp}{\log^{\alpha}p}\), for some universal constants C and α.
References
Alexeev, B., Cahill, J., Mixon, D.G.: Full spark frames. J. Fourier Anal. Appl. 18(6), 1167–1194 (2012)
Alon, N., Naor, A.: Approximating the cut-norm via Grothendieck’s inequality. SIAM J. Comput. 35, 787–803 (2006)
Applebaum, L., Howard, S.D., Searle, S., Calderbank, R.: Chirp sensing codes: deterministic compressed sensing measurements for fast recovery. Appl. Comput. Harmon. Anal. 26, 283–290 (2009)
Bandeira, A.S., Dobriban, E., Mixon, D.G., Sawin, W.F.: Certifying the restricted isometry property is hard. IEEE Trans. Inf. Theory 59, 3448–3450 (2013)
Baraniuk, R., Davenport, M., DeVore, R., Wakin, M.: A simple proof of the restricted isometry property for random matrices. Constr. Approx. 28, 253–263 (2008)
Bernstein, S.N.: Theory of Probability, 4th edn. Gostechizdat, Moscow-Leningrad (1946)
Bollobás, B.: Random Graphs, 2nd edn. Cambridge Univ. Press, Cambridge (2001)
Bourgain, J., Dilworth, S., Ford, K., Konyagin, S., Kutzarova, D.: Explicit constructions of RIP matrices and related problems. Duke Math. J. 159, 145–185 (2011)
Candès, E.J.: The restricted isometry property and its implications for compressed sensing. C. R. Acad. Sci. Paris, Ser. I 346, 589–592 (2008)
Candès, E.J., Tao, T.: Decoding by linear programming. IEEE Trans. Inf. Theory 51, 4203–4215 (2005)
Candès, E.J., Tao, T.: The Dantzig selector: statistical estimation when p is much larger than n. Ann. Stat. 35, 2313–2351 (2007)
Chung, F.R.K., Graham, R.L., Wilson, R.M.: Quasi-random graphs. Combinatorica 9, 345–362 (1989)
Cohen, S.D.: Clique numbers of Paley graphs. Quaest. Math. 11, 225–231 (1988)
Davidson, K.R., Szarek, S.J.: Local operator theory, random matrices and Banach spaces. In: Johnson, W.B., Lindenstrauss, J. (eds.) Handbook in Banach Spaces, vol. I, pp. 317–366. Elsevier, Amsterdam (2001)
DeVore, R.A.: Deterministic constructions of compressed sensing matrices. J. Complex. 23, 918–925 (2007)
Donoho, D.L., Elad, M.: Optimally sparse representation in general (nonorthogonal) dictionaries via ℓ 1 minimization. Proc. Natl. Acad. Sci. USA 100, 2197–2202 (2003)
Fickus, M., Mixon, D.G., Tremain, J.C.: Steiner equiangular tight frames. Linear Algebra Appl. 436, 1014–1027 (2012)
Gerschgorin, S.: Über die Abgrenzung der Eigenwerte einer Matrix. Izv. Akad. Nauk. USSR Otd. Fiz.-Mat. 7, 749–754 (1931)
Graham, S.W., Ringrose, C.J.: Lower bounds for least quadratic non-residues. Prog. Math. 85, 269–309 (1990)
Hoory, S., Linial, N., Wigderson, A.: Expander graphs and their applications. Bull. Am. Math. Soc. 43, 439–561 (2006)
Laurent, B., Massart, P.: Adaptive estimation of a quadratic functional by model selection. Ann. Stat. 28, 1302–1338 (2000)
Natarajan, B.K.: Sparse approximate solutions to linear systems. SIAM J. Comput. 24, 227–234 (1995)
Peralta, R.: On the distribution of quadratic residues and nonresidues modulo a prime number. Math. Comput. 58, 433–440 (1992)
Rauhut, H.: Stability results for random sampling of sparse trigonometric polynomials. IEEE Trans. Inf. Theory 54, 5661–5670 (2008)
Renes, J.M.: Equiangular tight frames from Paley tournaments. Linear Algebra Appl. 426, 497–501 (2007)
Rudelson, M., Vershynin, R.: On sparse reconstruction from Fourier and Gaussian measurements. Commun. Pure Appl. Math. 61, 1025–1045 (2008)
Sachs, H.: Über selbstkomplementäre Graphen. Publ. Math. (Debr.) 9, 270–288 (1962)
Seidel, J.J.: A survey of two-graphs. In: Proc. Intern. Coll. Teorie Combinatorie, pp. 481–511 (1973)
Stevenhagen, P., Lenstra, H.W.: Chebotarëv and his density theorem. Math. Intell. 18, 26–37 (1996)
Strohmer, T., Heath, R.W.: Grassmannian frames with applications to coding and communication. Appl. Comput. Harmon. Anal. 14, 257–275 (2003)
Tao, T.: Open question: deterministic UUP matrices. http://terrytao.wordpress.com/2007/07/02/open-question-deterministic-uup-matrices
Temlyakov, V.: Greedy Approximations. Cambridge University Press, Cambridge (2011)
van Lint, J.H., Seidel, J.J.: Equilateral point sets in elliptic geometry. Nederl. Akad. Wetensch. Proc. Ser. A 69, 335–348 (1966). Indag. Math. 28
Waldron, S.: On the construction of equiangular frames from graphs. Linear Algebra Appl. 431, 2228–2242 (2009)
Welch, L.R.: Lower bounds on the maximum cross correlation of signals. IEEE Trans. Inf. Theory 20, 397–399 (1974)
Xia, P., Zhou, S., Giannakis, G.B.: Achieving the Welch bound with difference sets. IEEE Trans. Inf. Theory 51, 1900–1907 (2005)
Yurinskii, V.V.: Exponential inequalities for sums of random vectors. J. Multivar. Anal. 6, 473–499 (1976)
Communicated by Vladimir Temlyakov.
The authors thank Prof. Peter Sarnak and Joel Moreira for insightful discussions and helpful suggestions. ASB was supported by NSF Grant No. DMS-0914892, MF was supported by NSF Grant No. DMS-1042701 and AFOSR Grant Nos. F1ATA01103J001 and F1ATA00183G003, and DGM was supported by the A.B. Krongard Fellowship. The views expressed in this article are those of the authors and do not reflect the official policy or position of the United States Air Force, Department of Defense, or the US Government.
Appendix
In this section, we prove Theorem 13, which states that a matrix with \((K,\hat{\theta})\)-flat restricted orthogonality has \(\theta_{K}\leq C\hat{\theta}\log K\), that is, it has restricted orthogonality. The proof below is adapted from the proof of Lemma 3 in [8]. Our proof has the benefit of being valid for all values of K (as opposed to sufficiently large K in the original [8]), and it has near-optimal constants where appropriate. Moreover in this version, the columns of the matrix are not required to have unit norm.
Proof of Theorem 13
Given arbitrary disjoint subsets \(\mathcal{I},\mathcal{J}\subseteq\{1,\ldots,N\}\) with \(|\mathcal{I}|,|\mathcal{J}|\leq K\), we will bound the following quantity three times, each time with different constraints on \(\{x_{i}\}_{i\in\mathcal{I}}\) and \(\{y_{j}\}_{j\in\mathcal{J}}\): \(\Bigl|\Bigl\langle\sum_{i\in\mathcal{I}}x_{i}\varphi_{i},\sum_{j\in\mathcal{J}}y_{j}\varphi_{j}\Bigr\rangle\Bigr|.\) (23)
To be clear, our third bound will have no constraints on \(\{x_{i}\}_{i\in\mathcal{I}}\) and \(\{y_{j}\}_{j\in\mathcal{J}}\), thereby demonstrating restricted orthogonality. Note that by assumption, (23) is \(\leq\hat{\theta}(|\mathcal{I}||\mathcal{J}|)^{1/2}\) whenever the x i ’s and y j ’s are in {0,1}. We first show that this bound is preserved when we relax the x i ’s and y j ’s to lie in the interval [0,1].
Pick a disjoint pair of subsets \(\mathcal{I}',\mathcal{J}'\subseteq\{1,\ldots,N\}\) with \(|\mathcal{I}'|,|\mathcal{J}'|\leq K\). Starting with some \(k\in\mathcal{I}'\), note that flat restricted orthogonality gives that \(\bigl|\bigl\langle\varphi_{k}+\sum_{i\in\mathcal{I}\setminus\{k\}}\varphi_{i},\sum_{j\in\mathcal{J}}\varphi_{j}\bigr\rangle\bigr|\leq\hat{\theta}(|\mathcal{I}||\mathcal{J}|)^{1/2}\) and \(\bigl|\bigl\langle\sum_{i\in\mathcal{I}\setminus\{k\}}\varphi_{i},\sum_{j\in\mathcal{J}}\varphi_{j}\bigr\rangle\bigr|\leq\hat{\theta}(|\mathcal{I}||\mathcal{J}|)^{1/2}\) for every disjoint \(\mathcal{I},\mathcal{J}\subseteq\{1,\ldots,N\}\) with \(|\mathcal{I}|,|\mathcal{J}|\leq K\) and \(k\in\mathcal{I}\). Thus, we may take any x k ∈[0,1] to form a convex combination of these two expressions, and then the triangle inequality gives \(\Bigl|\Bigl\langle x_{k}\varphi_{k}+\sum_{i\in\mathcal{I}\setminus\{k\}}\varphi_{i},\sum_{j\in\mathcal{J}}\varphi_{j}\Bigr\rangle\Bigr|\leq\hat{\theta}(|\mathcal{I}||\mathcal{J}|)^{1/2}.\) (24)
Since (24) holds for every disjoint \(\mathcal{I},\mathcal{J}\subseteq\{1,\ldots,N\}\) with \(|\mathcal{I}|,|\mathcal{J}|\leq K\) and \(k\in\mathcal{I}\), we can do the same thing with an additional index \(i\in\mathcal{I}'\) or \(j\in\mathcal{J}'\), and replace the corresponding unit coefficient with some x i or y j in [0,1]. Continuing in this way proves the claim that (23) is \(\leq\hat{\theta}(|\mathcal{I}||\mathcal{J}|)^{1/2}\) whenever the x i ’s and y j ’s lie in the interval [0,1].
For the second bound, we assume the x i ’s and y j ’s are nonnegative with unit norm: \(\sum_{i\in\mathcal{I}}x_{i}^{2}=\sum_{j\in\mathcal{J}}y_{j}^{2}=1\). To bound (23) in this case, we partition \(\mathcal{I}\) and \(\mathcal{J}\) according to the size of the corresponding coefficients: \(\mathcal{I}_{k}:=\{i\in\mathcal{I}:2^{-(k+1)}<x_{i}\leq2^{-k}\}\) and \(\mathcal{J}_{k}:=\{j\in\mathcal{J}:2^{-(k+1)}<y_{j}\leq2^{-k}\}\) for each k≥0.
Note the unit-norm constraints ensure that \(\mathcal{I}=\bigcup_{k=0}^{\infty}\mathcal{I}_{k}\) and \(\mathcal{J}=\bigcup_{k=0}^{\infty}\mathcal{J}_{k}\). The triangle inequality thus gives
By the definitions of \(\mathcal{I}_{k_{1}}\) and \(\mathcal{J}_{k_{2}}\), the coefficients of φ i and φ j in (25) all lie in [0,1]. As such, we continue by applying our first bound:
We now observe from the definition of \(\mathcal{I}_{k}\) that
Thus for any positive integer t, the Cauchy-Schwarz inequality gives
and similarly for the \(\mathcal{J}_{k}\)’s. For a fixed K, we note that (27) is minimized when \(K^{1/2}2^{-t}=\frac{t^{-1/2}}{2\log 2}\), and so we pick t to be the smallest positive integer such that \(K^{1/2}2^{-t}\leq\frac{t^{-1/2}}{2\log 2}\). With this, we continue (26):
From here, we claim that \(t\leq\lceil\frac{\log K}{\log 2}\rceil\). Considering the definition of t, this is easily verified for K=2,3,…,7 by showing \(K^{1/2}2^{-s}\leq\frac{s^{-1/2}}{2\log 2}\) for \(s=\lceil\frac{\log K}{\log 2}\rceil\). For K≥8, one can use calculus to verify the second inequality of the following:
meaning \(t\leq\lceil\frac{\log K}{\log 2}\rceil\). Substituting \(t\leq\frac{\log K}{\log 2}+1\) and t≥1 into (28) then gives
with C 0≈5.77, C 1≈11.85. As such, (23) is \(\leq C'\hat{\theta}\log K\) with \(C'=C_{0}+\frac{C_{1}}{\log 2}\) in this case.
We are now ready for the final bound on (23) in which we apply no constraints on the x i ’s and y j ’s. To do this, we consider the positive and negative real and imaginary parts of these coefficients:
and similarly for the y j ’s. With this decomposition, we apply the triangle inequality to get
Finally, we normalize the coefficients by \((\sum_{i\in\mathcal{I}}x_{i,k_{1}}^{2})^{1/2}\) and \((\sum_{j\in\mathcal{J}}y_{j,k_{2}}^{2})^{1/2}\) so we can apply our second bound:
where C=4C′≈74.17 by the Cauchy-Schwarz inequality, and so we are done. □
Bandeira, A.S., Fickus, M., Mixon, D.G. et al. The Road to Deterministic Matrices with the Restricted Isometry Property. J Fourier Anal Appl 19, 1123–1149 (2013). https://doi.org/10.1007/s00041-013-9293-2