Abstract
This paper considers the problem of estimation of a low-rank matrix when most of its entries are not observed and some of the observed entries are corrupted. The observations are noisy realizations of a sum of a low-rank matrix, which we wish to estimate, and a second matrix having a complementary sparse structure such as elementwise sparsity or columnwise sparsity. We analyze a class of estimators obtained as solutions of a constrained convex optimization problem combining the nuclear norm penalty and a convex relaxation penalty for the sparse constraint. Our assumptions allow for simultaneous presence of random and deterministic patterns in the sampling scheme. We establish rates of convergence for the low-rank component from partial and corrupted observations in the presence of noise and we show that these rates are minimax optimal up to logarithmic factors.
1 Introduction
In recent years, there has been considerable interest in statistical inference for high-dimensional matrices. One particular problem is matrix completion, where one observes only a small number \(N \ll m_1m_2\) of the entries of a high-dimensional \(m_1\times m_2\) matrix \(L_0\) of rank r and aims at inferring the missing entries. In general, recovery of a matrix from a small number of observed entries is impossible, but, if the unknown matrix has low rank, then accurate and even exact recovery is possible. In the noiseless setting, [7, 14, 22] established the following remarkable result: assuming that the matrix \(L_0\) satisfies some low coherence condition, this matrix can be recovered exactly by constrained nuclear norm minimization, with high probability, from only \(N \gtrsim r \max \{m_1,m_2\} \log ^2(m_1+m_2)\) entries observed uniformly at random. A more common situation in applications corresponds to the noisy setting, in which the few available entries are corrupted by noise. Noisy matrix completion has been the focus of several recent studies (see, e.g., [5, 12, 17, 18, 20, 21, 23]).
The matrix completion problem is motivated by a variety of applications. An important question in applications is whether or not matrix completion procedures are robust to corruptions. Suppose that we observe noisy entries of \(A_0 = L_0 + S_0\), where \(L_0\) is an unknown low-rank matrix and \(S_0\) corresponds to some gross/malicious corruptions. We wish to recover \(L_0\), but we observe only a few entries of \(A_0\) and, among those, a fraction happens to be corrupted by \(S_0\). Of course, we do not know which entries are corrupted. It has been shown empirically that uncontrolled and potentially adversarial gross errors affecting only a small portion of observations can be particularly harmful. For example, Xu et al. [16] showed that a very popular matrix completion procedure using nuclear norm minimization can fail dramatically even if \(S_0\) contains only a single nonzero column. This issue is particularly relevant in applications to recommendation systems, where malicious users try to manipulate the outcome of matrix completion algorithms by introducing spurious perturbations \(S_0\). Hence, there is a need for new matrix completion techniques that are robust to the presence of corruptions \(S_0\).
With this motivation, we consider the following setting of robust matrix completion. Let \(A_0\in {\mathbb {R}}^{m_{1}\times m_{2}}\) be an unknown matrix that can be represented as a sum \(A_0=L_0+S_0\) where \(L_0\) is a low-rank matrix and \(S_0\) is a matrix with some low complexity structure such as entrywise sparsity or columnwise sparsity. We consider the observations \((X_i,Y_i), i=1,\dots ,N,\) satisfying the trace regression model
where \({\mathrm {tr}}(\mathrm{M})\) denotes the trace of matrix \(\mathrm{M}\). Here, the noise variables \(\xi _{i}\) are independent and centered, and \(X_{i}\) are \(m_1\times m_2\) matrices taking values in the set
where \(e_l(m)\), \(l=1,\dots ,m,\) are the canonical basis vectors in \({\mathbb {R}}^{m}\). Thus, we observe some entries of matrix \(A_0\) with random noise. Based on the observations \((X_i,Y_i)\), we wish to obtain accurate estimates of the components \(L_0\) and \(S_0\) in the high-dimensional setting \(N\ll m_1m_2\). Throughout the paper, we assume that \((X_1,\dots ,X_N)\) is independent of \((\xi _1,\dots ,\xi _N)\).
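As an illustration, the trace regression sampling scheme can be sketched in a few lines of NumPy. The dimensions, rank, noise level, and uniform sampling below are hypothetical choices for the sketch, and for simplicity it takes \(S_0=\mathbf{0}\); with \(X_i=e_j(m_1)e_k^{T}(m_2)\), the trace \({\mathrm {tr}}(X_i^{\top }A_0)\) simply picks out the (j, k)th entry.

```python
import numpy as np

rng = np.random.default_rng(0)
m1, m2, r, N, sigma = 40, 50, 3, 600, 0.1  # hypothetical sizes

# Rank-r component L0 = U V^T
L0 = rng.standard_normal((m1, r)) @ rng.standard_normal((r, m2))

# X_i = e_j(m1) e_k(m2)^T sampled uniformly at random,
# so tr(X_i^T A0) = (A0)_{jk}
rows = rng.integers(0, m1, size=N)
cols = rng.integers(0, m2, size=N)

# Trace regression observations Y_i = tr(X_i^T L0) + xi_i (here S0 = 0)
Y = L0[rows, cols] + sigma * rng.standard_normal(N)
```

Only the pairs \((X_i, Y_i)\) would be available to the statistician; \(L_0\) itself is unknown.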
We assume that the set of indices i of our N observations is the union of two disjoint components \(\Omega \) and \({\tilde{\Omega }}\). The first component \(\Omega \) corresponds to the “non-corrupted” noisy entries of \(L_0\), i.e., to the observations, for which the entry of \(S_0\) is zero. The second set \({\tilde{\Omega }}\) corresponds to the observations, for which the entry of \(S_0\) is nonzero. Given an observation, we do not know whether it belongs to the corrupted or non-corrupted part of the observations and we have \(\vert \Omega \vert +\vert {\tilde{\Omega }}\vert =N\), where \(\vert \Omega \vert \) and \(\vert {\tilde{\Omega }}\vert \) are non-random numbers of non-corrupted and corrupted observations, respectively.
A particular case of this setting is the matrix decomposition problem where \(N=m_1m_2\), i.e., we observe all entries of \(A_0\). Several recent works consider the matrix decomposition problem, mostly in the noiseless setting, \(\xi _i \equiv 0\). Chandrasekaran et al. [8] analyzed the case when the matrix \(S_0\) is sparse, with a small number of non-zero entries. They proved that exact recovery of \((L_0,S_0)\) is possible with high probability under additional identifiability conditions. This model was further studied by Hsu et al. [15] who gave milder conditions for the exact recovery of \((L_0,S_0)\). Also in the noiseless setting, Candes et al. [6] studied the same model but with positions of corruptions chosen uniformly at random. Xu et al. [16] studied a model, in which the matrix \(S_0\) is columnwise sparse with a sufficiently small number of non-zero columns. Their method guarantees approximate recovery for the non-corrupted columns of the low-rank component \(L_0\). Agarwal et al. [1] considered a general model, in which the observations are noisy realizations of a linear transformation of \(A_0\). Their setup includes the matrix decomposition problem and some other statistical models of interest but does not cover the matrix completion problem. Agarwal et al. [1] stated a general result on approximate recovery of the pair \((L_0,S_0)\) imposing a “spikiness condition” on the low-rank component \(L_0\). Their analysis includes as particular cases both the entrywise corruptions and the columnwise corruptions.
The robust matrix completion setting, when \(N<m_1m_2\), was first considered by Candes et al. [6] in the noiseless case for entrywise sparse \(S_0\). Candes et al. [6] assumed that the support of \(S_0\) is selected uniformly at random and that N is equal to \(0.1m_1m_2\) or to some other fixed fraction of \(m_1m_2\). Chen et al. [9] also considered the noiseless case, but with columnwise sparse \(S_0\). They proved that the same procedure as in [8] can recover the non-corrupted columns of \(L_0\) and identify the set of indices of the corrupted columns. This was done under the following assumptions: the locations of the non-corrupted columns are chosen uniformly at random; \(L_0\) satisfies some sparse/low-rank incoherence condition; the total number of corrupted columns is small and a sufficient number of non-corrupted entries is observed. More recently, Chen et al. [10] and Li [27] considered noiseless robust matrix completion with entrywise sparse \(S_0\). They proved exact recovery of the low-rank component under an incoherence condition on \(L_0\) and some additional assumptions on the number of corrupted observations.
To the best of our knowledge, the present paper is the first study of robust matrix completion with noise. Our analysis is general and covers, in particular, the cases of columnwise sparse corruptions and entrywise sparse corruptions. It is important to note that we do not require strong assumptions on the unknown matrices, such as the incoherence condition, or additional restrictions on the number of corrupted observations as in the noiseless case. This is due to the fact that we do not aim at exact recovery of the unknown matrix. We emphasize that we need to know neither the rank of \(L_0\) nor the sparsity level of \(S_0\). We do not need to observe all entries of \(A_0\) either. We only need to know an upper bound on the maximum of the absolute values of the entries of \(L_0\) and \(S_0\). Such information is often available in applications; for example, in recommendation systems, this bound is just the maximum rating. Another important point is that our method allows us to consider quite general and unknown sampling distributions. All the previous works on noiseless robust matrix completion assume the uniform sampling distribution. However, in practice, the observed entries are not guaranteed to follow the uniform scheme, and the sampling distribution is not exactly known.
We establish oracle inequalities for the cases of entrywise sparse and columnwise sparse \(S_0\). For example, in the case of columnwise corruptions, we prove the following bound on the normalized Frobenius error of our estimator \(({\hat{L}}, {\hat{S}})\) of \((L_0, S_0)\): with high probability
where the symbol \(\lesssim \) means that the inequality holds up to a multiplicative absolute constant and a factor, which is logarithmic in \(m_1\) and \(m_2\). Here, r denotes the rank of \(L_0\), and s is the number of corrupted columns. Note that, when the number of corrupted columns s and the proportion of corrupted observations \(\vert {\tilde{\Omega }}\vert / \vert \Omega \vert \) are small, this bound implies that \(O(r\,\max (m_1,m_2))\) observations are enough for successful matrix completion that is robust to corruptions. We also show that, both under columnwise corruptions and entrywise corruptions, the obtained rates of convergence are minimax optimal up to logarithmic factors.
This paper is organized as follows. Section 2.1 contains the notation and definitions. We introduce our estimator in Sect. 2.2 and we state the assumptions on the sampling scheme in Sect. 2.3. Section 3 presents a general upper bound for the estimation error. In Sects. 4 and 5, we specialize this bound to the settings with columnwise corruptions and entrywise corruptions, respectively. In Sect. 6, we prove that our estimator is minimax rate optimal up to a logarithmic factor. The Appendix contains the proofs.
2 Preliminaries
2.1 Notation and definitions
2.1.1 General notation
For any set I, \(\vert I\vert \) denotes its cardinality and \({\bar{I}}\) its complement. We write \(a\vee b=\max (a,b)\) and \(a\wedge b=\min (a,b)\).
For a matrix A, \(A^{i}\) is its ith column and \(A_{ij}\) is its (i, j)th entry. Let \(I\subset \{1,\dots ,m_1\}\times \{1,\dots ,m_2\}\) be a subset of indices. Given a matrix A, we denote by \(A_{I}\) its restriction to I, that is, \(\left( A_{I}\right) _{ij}=A_{ij}\) if \((i,j)\in I\) and \(\left( A_{I}\right) _{ij}=0\) if \((i,j)\not \in I\). In what follows, \({\mathbf {Id}}\) denotes the matrix of ones, i.e., \({\mathbf {Id}}_{ij}=1\) for any (i, j) and \(\mathbf{0}\) denotes the zero matrix, i.e., \(\mathbf{0}_{ij}=0\) for any (i, j).
For any \(p\ge 1\), we denote by \(\Vert \cdot \Vert _{p}\) the usual \(l_p-\)norm. Additionally, we use the following matrix norms: \(\Vert A\Vert _{*}\) is the nuclear norm (the sum of singular values), \(\Vert A\Vert \) is the operator norm (the largest singular value), \(\Vert A\Vert _{\infty }\) is the largest absolute value of the entries:
the norm \(\Vert A\Vert _{2,1}\) is the sum of \(l_2\) norms of the columns of A and \(\Vert A\Vert _{2,\infty }\) is the largest \(l_2\) norm of the columns of A:
The inner product of matrices A and B is defined by \(\langle A,B \rangle = {\mathrm {tr}}(AB^\top )\).
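For concreteness, all of these norms can be computed directly with NumPy; the small matrix below is an illustrative example, not part of the paper.

```python
import numpy as np

A = np.array([[3.0, 0.0, 4.0],
              [0.0, 5.0, 0.0]])

nuclear   = np.linalg.norm(A, 'nuc')          # sum of singular values = 10
operator  = np.linalg.norm(A, 2)              # largest singular value = 5
sup_norm  = np.max(np.abs(A))                 # largest |entry| = 5
norm_21   = np.linalg.norm(A, axis=0).sum()   # sum of column l2 norms = 3+5+4 = 12
norm_2inf = np.linalg.norm(A, axis=0).max()   # largest column l2 norm = 5
inner     = np.trace(A @ A.T)                 # <A, A>, the squared Frobenius norm = 50
```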
2.1.2 Notation related to corruptions
We first introduce the index sets \(\mathcal {I}\) and \({\tilde{\mathcal {I}}}\). These are subsets of \(\{1,\dots ,m_1\}\times \{1,\dots ,m_2\}\) that are defined differently for the settings with columnwise sparse and entrywise sparse corruption matrix \(S_0\).
For the columnwise sparse matrix \(S_0\), we define
where \(J\subset \{1,\dots ,m_2\}\) is the set of indices of the non-zero columns of \(S_0\). For the entrywise sparse matrix \(S_0\), we denote by \({\tilde{\mathcal {I}}}\) the set of indices of the non-zero elements of \(S_0\). In both settings, \(\mathcal {I}\) denotes the complement of \({\tilde{\mathcal {I}}}\).
Let \(\mathcal {R}\,:\,{\mathbb {R}}^{m_1\times m_2}\rightarrow {\mathbb {R}}_{+}\) be a norm that will be used as a regularizer relative to the corruption matrix \(S_0\). The associated dual norm is defined by the relation
Let \(\vert A\vert \) denote the matrix whose entries are the absolute values of the entries of matrix A. The norm \(\mathcal {R}(\cdot )\) is called absolute if it depends only on the absolute values of the entries of A:
For instance, the \(l_p\)-norm and the \(\Vert \cdot \Vert _{2,1}\)-norm are absolute. We call \(\mathcal {R}(\cdot )\) monotonic if \(\vert A\vert \le \vert B\vert \) implies \(\mathcal {R}(A)\le \mathcal {R}(B)\). Here and below, the inequalities between matrices are understood as entry-wise inequalities. Any absolute norm is monotonic and vice versa (see, e.g., [3]).
2.1.3 Specific notation
- We set \(d=m_1+m_2\), \(m=m_1\wedge m_2\), and \(M=m_1\vee m_2\).
- Let \(\{\epsilon _i\}_{i=1}^{n}\) be a sequence of i.i.d. Rademacher random variables. We define the following random variables called the stochastic terms:
  $$\begin{aligned} \Sigma _R=\dfrac{1}{n} \sum _{i\in \Omega } \epsilon _i X_i, \quad \Sigma =\dfrac{1}{N} \sum _{i\in \Omega } \xi _iX_i,\quad \text {and} \quad W=\dfrac{1}{N} \sum _{i\in \Omega } X_i. \end{aligned}$$
- We denote by r the rank of matrix \(L_0\).
- We denote by N the number of observations, and by \(n=\vert \Omega \vert \) the number of non-corrupted observations. The number of corrupted observations is \(\vert {\tilde{\Omega }}\vert = N-n\). We set \(\ae =N/n\).
- We use the generic symbol C for positive constants that do not depend on \(n, m_1, m_2, r, s\) and can take different values at different appearances.
2.2 Convex relaxation for robust matrix completion
For the usual matrix completion, i.e., when the corruption matrix \(S_0=\mathbf {0}\), one of the most popular methods of solving the problem is based on constrained nuclear norm minimization. For example, the following constrained matrix Lasso estimator is introduced in [18]:
where \(\lambda >0\) is a regularization parameter and \({\mathbf {a}}\) is an upper bound on \(\left\| L_0\right\| _{\infty }\).
To account for the presence of non-zero corruptions \(S_0\), we introduce an additional norm-based penalty that should be chosen depending on the structure of \(S_0\). We consider the following estimator \(({\hat{L}},{\hat{S}})\) of the pair \((L_0,S_0)\):
Here \(\lambda _1>0\) and \(\lambda _2>0\) are regularization parameters and \({\mathbf {a}}\) is an upper bound on \(\left\| L_0\right\| _{\infty }\) and \(\left\| S_0\right\| _{\infty }\). Note that this definition and all the proofs can be easily adapted to the setting with two different upper bounds for \(\left\| L_0\right\| _{\infty }\) and \(\left\| S_0\right\| _{\infty }\), as may be the case in some applications. Thus, the results of the paper extend to this case as well.
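As an illustration of how a solution of such a convex program can be approximated in practice, here is a minimal proximal-gradient sketch in NumPy for the entrywise sparse choice \(\mathcal {R}(\cdot )=\Vert \cdot \Vert _{1}\). It is a heuristic, not the paper's algorithm: clipping after each proximal step only approximates the exact projection under the \(\Vert \cdot \Vert _{\infty }\le {\mathbf {a}}\) constraint, and the step size and iteration count are illustrative choices.

```python
import numpy as np

def svd_soft_threshold(Z, tau):
    """Prox of tau * nuclear norm: shrink the singular values of Z by tau."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U @ (np.maximum(s - tau, 0.0)[:, None] * Vt)

def soft_threshold(Z, tau):
    """Prox of tau * l1 norm: shrink the entries of Z by tau."""
    return np.sign(Z) * np.maximum(np.abs(Z) - tau, 0.0)

def robust_mc(Y, rows, cols, shape, lam1, lam2, a, n_iter=300, step=1.0):
    """Heuristic proximal-gradient sketch for the penalized program
    (1/N) sum_i (Y_i - (L+S)_{j_i k_i})^2 + lam1*||L||_* + lam2*||S||_1,
    with a crude clip onto the sup-norm balls after each prox step."""
    N = len(Y)
    L = np.zeros(shape)
    S = np.zeros(shape)
    for _ in range(n_iter):
        # Gradient of the quadratic loss, accumulated over repeated indices
        grad = np.zeros(shape)
        np.add.at(grad, (rows, cols), (L + S)[rows, cols] - Y)
        grad *= 2.0 / N
        L = np.clip(svd_soft_threshold(L - step * grad, step * lam1), -a, a)
        S = np.clip(soft_threshold(S - step * grad, step * lam2), -a, a)
    return L, S
```

A generic convex solver could of course be used instead of this hand-rolled loop; the sketch only makes the roles of the two penalties and of the constraint explicit.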
For the following two key examples of sparsity structure of \(S_0\), we consider specific regularizers \(\mathcal {R}\).
- Example 1. Suppose that \(S_0\) is columnwise sparse, that is, it has a small number \(s<m_2\) of non-zero columns. We use the \(\Vert \cdot \Vert _{2,1}\)-norm regularizer for such a sparsity structure: \(\mathcal {R}(S)=\Vert S\Vert _{2,1}.\) The associated dual norm is \(\mathcal {R}^{*}(S)=\Vert S\Vert _{2,\infty }\).
- Example 2. Suppose now that \(S_0\) is entrywise sparse, that is, it has \(s\ll m_1m_2\) non-zero entries. The usual choice of regularizer for such a sparsity structure is the \(l_1\) norm: \(\mathcal {R}(S)=\Vert S\Vert _{1}\). The associated dual norm is \(\mathcal {R}^{*}(S)=\Vert S \Vert _{\infty }\).
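The two regularizers differ in their proximal operators, which is what a first-order solver would use. As a small illustrative sketch (not from the paper): the prox of \(\tau \Vert \cdot \Vert _{2,1}\) shrinks whole columns, zeroing those whose \(l_2\) norm is below \(\tau \), whereas the prox of \(\tau \Vert \cdot \Vert _{1}\) shrinks individual entries.

```python
import numpy as np

def col_soft_threshold(Z, tau):
    """Prox of tau * ||.||_{2,1}: shrink each column's l2 norm by tau,
    zeroing columns whose norm is at most tau (columnwise sparsity)."""
    norms = np.linalg.norm(Z, axis=0)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return Z * scale

Z = np.array([[3.0, 0.1],
              [4.0, 0.1]])
P = col_soft_threshold(Z, 1.0)
# First column (norm 5) is shrunk to norm 4; second (norm ~0.14) is zeroed
```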
In these two examples, the regularizer \(\mathcal {R}\) is decomposable with respect to a properly chosen set of indices I. That is, for any matrix \(A\in {\mathbb {R}}^{m_1\times m_2}\) we have
For instance, the \(\Vert \cdot \Vert _{2,1}\)-norm is decomposable with respect to any set I such that
where \(J\subset \{1,\dots ,m_2\}\). The usual \(l_1\) norm is decomposable with respect to any subset of indices I.
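The decomposability property is easy to verify numerically. Below is a small sanity check (an illustrative sketch with arbitrary sizes): splitting a matrix columnwise leaves the \(\Vert \cdot \Vert _{2,1}\)-norm additive, and splitting entrywise leaves the \(l_1\) norm additive.

```python
import numpy as np

def norm_21(A):
    # Sum of l2 norms of the columns
    return np.linalg.norm(A, axis=0).sum()

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 6))

# I = all entries in the columns indexed by J, so A splits columnwise
J = [1, 4]
A_I = np.zeros_like(A)
A_I[:, J] = A[:, J]
A_Ibar = A - A_I
assert np.isclose(norm_21(A), norm_21(A_I) + norm_21(A_Ibar))

# The l1 norm is decomposable with respect to any subset of indices
mask = rng.random(A.shape) < 0.5
assert np.isclose(np.abs(A).sum(),
                  np.abs(A * mask).sum() + np.abs(A * ~mask).sum())
```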
2.3 Assumptions on the sampling scheme and on the noise
In the literature on the usual matrix completion (\(S_0=\mathbf {0}\)), it is commonly assumed that the observations \(X_i\) are i.i.d. For robust matrix completion, it is more realistic to assume the presence of two subsets in the observed \(X_i\). The first subset \(\{X_{i},\,i\in \Omega \}\) is a collection of i.i.d. random matrices with some unknown distribution on
These \(X_i\)’s are of the same type as in the usual matrix completion. They are the X-components of non-corrupted observations (recall that the entries of \(S_0\) corresponding to indices in \({\mathcal {I}}\) are equal to zero). On this non-corrupted part of observations, we require some assumptions on the sampling distribution (see Assumptions 1, 2, 5, and 9 below).
The second subset \(\{X_{i},\;i\in {\tilde{\Omega }}\}\) is a collection of matrices with values in
These are the X-components of corrupted observations. Importantly, we make no assumptions on how they are sampled. Thus, for any \(i\in {\tilde{\Omega }}\), we have that the index of the corresponding entry belongs to \({\tilde{\mathcal {I}}}\) and we make no further assumption. If we take the example of recommendation systems, this partition into \(\{X_{i},\,i\in \Omega \}\) and \(\{X_{i},\,i\in {\tilde{\Omega }}\}\) accounts for the difference in behavior of normal and malicious users.
As there is no hope of recovering the unobserved entries of \(S_0\), one should consider only the estimation of the restriction of \(S_0\) to \({\tilde{\Omega }}\). This is equivalent to assuming that we estimate the whole \(S_0\) with all its unobserved entries set equal to zero, cf. [9]. This assumption is made throughout the paper.
For \(i\in \Omega \), we suppose that \(X_i\) are i.i.d. realizations of a random matrix X having distribution \(\Pi \) on the set \({\mathcal {X}}'\). Let \(\pi _{jk}={\mathbb {P}}(X=e_j(m_1)e_k^{T}(m_2))\) be the probability of observing the (j, k)th entry. A particular setting of this problem is the case where \(\Pi \) is the uniform distribution on \({\mathcal {X}}'\). It was previously considered in the context of noiseless robust matrix completion, see, e.g., [9]. We consider here a more general sampling model. In particular, we suppose that any non-corrupted element is sampled with positive probability:
Assumption 1
There exists a positive constant \(\mu \ge 1\) such that, for any \((j,k)\in {\mathcal {I}}\),
If \(\Pi \) is the uniform distribution on \({\mathcal {X}}'\), we have \(\mu =1\). For \(A\in {\mathbb {R}}^{m_1\times m_2}\), set
Assumption 1 implies that
Denote by \(\pi _{\cdot k}=\sum _{j=1}^{{m_1}}\pi _{jk}\) the probability to observe an element from the kth column and by \(\pi _{j\cdot }=\sum _{k=1}^{{m_2}}\pi _{jk}\) the probability to observe an element from the jth row. The following assumption requires that no column and no row is sampled with too high probability.
Assumption 2
There exists a positive constant \(L\ge 1\) such that
This assumption will be used in Theorem 1 below. In Sects. 4 and 5, we apply Theorem 1 to the particular cases of columnwise sparse and entrywise sparse corruptions. There, we will need more restrictive assumptions on the sampling distribution (see Assumptions 5 and 9).
We assume below that the noise variables \(\xi _i\) are sub-gaussian:
Assumption 3
There exist positive constants \(\sigma \) and \(c_1\) such that
3 Upper bounds for general regularizers
In this section, we state our main result, which applies to a general convex program (5) where \(\mathcal {R}\) is an absolute norm and a decomposable regularizer. In the next sections, we consider in detail two particular choices, \(\mathcal {R}(\cdot )=\Vert \cdot \Vert _{1}\) and \(\mathcal {R}(\cdot )=\Vert \cdot \Vert _{2,1}\). Introduce the notation:
where \(d=m_1+m_2\).
Theorem 1
Let \(\mathcal {R}\) be an absolute norm and a decomposable regularizer. Assume that \(\left\| L_0\right\| _{\infty }\le {\mathbf {a}}\), \(\left\| S_0\right\| _{\infty }\le {\mathbf {a}}\) for some constant \({\mathbf {a}}\) and let Assumptions 1–3 be satisfied. Let \(\lambda _1>4\left\| \Sigma \right\| \), and \(\lambda _2\ge 4\left( \mathcal {R}^{*}(\Sigma )+2{\mathbf {a}}\mathcal {R}^{*}(W)\right) \). Then, with probability at least \(1-4.5\,d^{-1}\),
where C is an absolute constant. Moreover, with the same probability,
The term \(\Psi _{1}\) in (11) corresponds to the estimation error associated with matrix completion of a rank r matrix. The second and the third terms account for the error induced by corruptions. In the next two sections we apply Theorem 1 to the settings with the entrywise sparse and columnwise sparse corruption matrices \(S_0\).
4 Columnwise sparse corruptions
In this section, we assume that \(S_0\) has at most s non-zero columns, and \(s\le m_2/2\). We use here the \(\Vert \cdot \Vert _{2,1}\)-norm regularizer \(\mathcal {R}\). Then, the convex program (5) takes the form
Since \(S_0\) has at most s non-zero columns, we have \(\vert {\tilde{\mathcal {I}}}\vert =m_1s\). Furthermore, by the Cauchy–Schwarz inequality, \(\Vert {\mathbf {Id}}_{\tilde{\Omega }}\Vert _{2,1}\le \sqrt{s\vert {\tilde{\Omega }}\vert }\). Using these remarks we replace \(\Psi _2\), \(\Psi _3\) and \(\Psi _4\) by the larger quantities
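The Cauchy–Schwarz step can be checked numerically: if \({\mathbf {Id}}_{\tilde{\Omega }}\) is the 0/1 indicator matrix of the corrupted observations, supported on s columns, then \(\Vert {\mathbf {Id}}_{\tilde{\Omega }}\Vert _{2,1}=\sum _k \sqrt{c_k}\le \sqrt{s\sum _k c_k}=\sqrt{s\vert {\tilde{\Omega }}\vert }\), where \(c_k\) is the number of corrupted observations in column k. A small sketch (sizes hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)
m1, m2, s = 30, 20, 4
corrupted_cols = rng.choice(m2, size=s, replace=False)

# Indicator matrix of a corrupted sample Omega~ supported on s columns
Ind = np.zeros((m1, m2))
for k in corrupted_cols:
    picked = rng.choice(m1, size=rng.integers(1, m1 + 1), replace=False)
    Ind[picked, k] = 1.0

norm_21 = np.linalg.norm(Ind, axis=0).sum()   # ||Id_Omega~||_{2,1}
n_corrupted = Ind.sum()                       # |Omega~|
assert norm_21 <= np.sqrt(s * n_corrupted) + 1e-9
```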
Specializing Theorem 1 to this case yields the following corollary.
Corollary 4
Assume that \(\left\| L_{0}\right\| _{\infty }\le {\mathbf {a}}\) and \(\left\| S_{0}\right\| _{\infty }\le {\mathbf {a}}\). Let the regularization parameters \((\lambda _1,\lambda _2)\) satisfy
Then, with probability at least \(1-4.5\,d^{-1}\), for any solution \(({\hat{L}},{\hat{S}})\) of the convex program (13) with such regularization parameters \((\lambda _1,\lambda _2)\) we have
where C is an absolute constant. Moreover, with the same probability,
In order to get a bound in a closed form, we need to obtain suitable upper bounds on the stochastic terms \(\Sigma \), \(\Sigma _{R}\) and W. We derive such bounds under an additional assumption on the column marginal sampling distribution. Set \(\pi _{\cdot ,k}^{(2)} =\sum _{j=1}^{m_1} \pi _{jk}^2\).
Assumption 5
There exists a positive constant \(\gamma \ge 1\) such that
This condition prevents the columns from being sampled with too high probability and guarantees that the non-corrupted observations are well spread out among the columns. Assumption 5 is clearly less restrictive than assuming that \(\Pi \) is uniform, as was done in previous work on noiseless robust matrix completion. In particular, Assumption 5 is satisfied when the distribution \(\Pi \) is approximately uniform, i.e., when \(\pi _{jk} \asymp \frac{1}{m_1(m_2-s)}\). Note that Assumption 5 implies the following milder condition on the marginal sampling distribution:
Condition (14) is sufficient to control \(\Vert \Sigma \Vert _{2,\infty }\) and \(\Vert \Sigma _R\Vert _{2,\infty }\), while we need the stronger Assumption 5 to control \(\Vert W\Vert _{2,\infty }\).
The following lemma gives the order of magnitude of the stochastic terms driving the rates of convergence.
Lemma 6
Let the distribution \(\Pi \) on \({\mathcal {X}}'\) satisfy Assumptions 1, 2 and 5. Let also Assumption 3 hold. Assume that \(N\le m_1m_2\), \(n\le \vert \mathcal {I}\vert \), and \(\log m_2 \ge 1\). Then, there exists an absolute constant \(C>0\) such that, for any \(t>0\), the following bounds on the norms of the stochastic terms hold with probability at least \(1-e^{-t}\), as well as the associated bounds in expectation.
(i)
$$\begin{aligned}&\left\| \Sigma \right\| \le C\sigma \max \left( \sqrt{\dfrac{L(t+\log d)}{\ae \,Nm}}, \frac{(\log m)(t+\log d)}{N}\right) \quad \text {and}\\&{\mathbb {E}}\left\| \Sigma _{R}\right\| \le C\left( \sqrt{\dfrac{L\log (d)}{nm}}+\frac{\log ^{2} d}{N}\right) ; \end{aligned}$$
(ii)
$$\begin{aligned}&\left\| \Sigma \right\| _{2,\infty }\le C\sigma \left( \sqrt{\frac{\gamma (t+\log (d))}{\ae N m_2} }+ \frac{t+\log d}{N}\right) \quad \text {and}\\&{\mathbb {E}}\left\| \Sigma \right\| _{2,\infty }\le C\sigma \,\left( \sqrt{\frac{\gamma \log (d)}{\ae N m_2} }+ \frac{\log d}{N}\right) ; \end{aligned}$$
(iii)
$$\begin{aligned}&\left\| \Sigma _R\right\| _{2,\infty }\le C\left( \sqrt{\frac{\gamma (t+\log (d))}{n m_2} }+ \frac{t+\log d}{n}\right) \quad \text {and}\\&{\mathbb {E}}\left\| \Sigma _R\right\| _{2,\infty }\le C \left( \sqrt{\frac{\gamma \log (d)}{n m_2} }+ \frac{\log d}{n}\right) ; \end{aligned}$$
(iv)
$$\begin{aligned}&\Vert W\Vert _{2,\infty }\le C \left( \frac{\gamma (t+\log m_2)^{1/4}}{\sqrt{\ae N m_2}}\left( 1+\sqrt{\frac{m_2(t+\log m_2)}{n}}\right) ^{1/2} + \frac{t+\log m_2}{N} \right) \\&{\mathbb {E}}\Vert W\Vert _{2,\infty }\le C \left( \frac{\gamma \log ^{1/4}(d)}{\sqrt{\ae N m_2}}\left( 1+ \sqrt{\frac{m_2\log d}{n}}\right) ^{1/2} + \frac{\log d}{N} \right) . \end{aligned}$$
Let
Recall that \(\ae =\frac{N}{n}\ge 1\). If \(n\ge n^{*}\), using the bounds given by Lemma 6, we can choose the regularization parameters \(\lambda _1\) and \(\lambda _2\) in the following way:
where \(C>0\) is a large enough numerical constant.
With this choice of the regularization parameters, Corollary 4 implies the following result.
Corollary 7
Let the distribution \(\Pi \) on \({\mathcal {X}}'\) satisfy Assumptions 1, 2 and 5. Let Assumption 3 hold and \(\left\| L_{0}\right\| _{\infty }\le {\mathbf {a}}\), \(\left\| S_{0}\right\| _{\infty }\le {\mathbf {a}}\). Assume that \(N\le m_1m_2\) and \(n\ge n^{*}\). Then, with probability at least \(1-6/d\), for any solution \(({\hat{L}},{\hat{S}})\) of the convex program (13) with the regularization parameters \((\lambda _1,\lambda _2)\) given by (16), we have
where \(C_{\mu ,\gamma ,L}>0\) can depend only on \(\mu ,\gamma ,L\). Moreover, with the same probability,
Remarks
1. The upper bound (17) can be decomposed into two terms. The first term is proportional to \(rM/n\). It is of the same order as in the case of the usual matrix completion, see [18, 20]. The second term accounts for the corruption. It is proportional to the number of corrupted columns s and to the number of corrupted observations \(\vert {\tilde{\Omega }}\vert \). This term vanishes if there is no corruption, i.e., when \(S_0=\mathbf {0}\).
2. If all entries of \(A_0\) are observed, i.e., the matrix decomposition problem is considered, the bound (17) is analogous to the corresponding bound in [1]. Indeed, then \(\vert {\tilde{\Omega }}\vert =sm_1\), \(N=m_1m_2\), \(\ae \le 2\) and we get
$$\begin{aligned} \dfrac{\Vert L_0-{\hat{L}}\Vert _2^{2}}{m_1m_2}+\dfrac{\Vert S_0-{\hat{S}}\Vert _2^{2}}{m_1m_2} \lesssim (\sigma \vee {\mathbf {a}})^{2}\left( \dfrac{r\,M}{m_1m_2}+\dfrac{s}{m_2}\right) . \end{aligned}$$
The estimator studied in [1] for the matrix decomposition problem is similar to our program (13). The difference between these estimators is that in (13) the minimization is over \(\Vert \cdot \Vert _{\infty }\)-balls while the program of [1] uses the minimization over \(\Vert \cdot \Vert _{2,\infty }\)-balls and requires the knowledge of a bound on the norm \(\Vert L_0 \Vert _{2,\infty }\) of the unknown matrix \(L_0\).
3. Suppose that the number of corrupted columns is small (\(s\ll m_2\)). Then, Corollary 7 guarantees that the prediction error of our estimator is small whenever the number of non-corrupted observations n satisfies the following condition
$$\begin{aligned} n\gtrsim (m_1\vee m_2)\,\mathrm{rank}(L_0)+\vert {\tilde{\Omega }}\vert \end{aligned}$$
(18)
where \(\vert {\tilde{\Omega }}\vert \) is the number of corrupted observations. This quantifies the sample size sufficient for successful matrix completion that is robust to corruptions. When the rank r of \(L_0\) is small and \(s\ll m_2\), the right-hand side of (18) is considerably smaller than the total number of entries \(m_1m_2\).
4. By changing the numerical constants, one can obtain that the upper bound (17) is valid with probability \(1-6d^{-\alpha }\) for any \(\alpha \ge 1\).
5 Entrywise sparse corruptions
We assume now that \(S_0\) has s non-zero entries, but they do not necessarily lie in a small subset of columns. We will also assume that \(s\le \frac{m_1m_2}{2}\). We now use the \(l_1\)-regularizer \(\mathcal {R}\). Then the convex program (5) takes the form
The support \({\tilde{\mathcal {I}}} = \{ (j,k)\,:\, (S_0)_{jk} \ne 0\}\) of the non-zero entries of \(S_0\) satisfies \(\vert {\tilde{\mathcal {I}}}\vert =s\). Also, \(\Vert {\mathbf {Id}}_{\tilde{\Omega }}\Vert _{1}=\vert {\tilde{\Omega }}\vert \), so that \(\Psi _2\), \(\Psi _3\), and \(\Psi _4\) take the form
Specializing Theorem 1 to this case yields the following corollary:
Corollary 8
Assume that \(\left\| L_{0}\right\| _{\infty }\le {\mathbf {a}}\) and \(\left\| S_{0}\right\| _{\infty }\le {\mathbf {a}}\). Let the regularization parameters \((\lambda _1,\lambda _2)\) satisfy
Then, with probability at least \(1-4.5\,d^{-1}\), for any solution \(({\hat{L}},{\hat{S}})\) of the convex program (19) with such regularization parameters \((\lambda _1,\lambda _2)\) we have
where C is an absolute constant. Moreover, with the same probability,
In order to get a bound in a closed form, we need to obtain suitable upper bounds on the stochastic terms \(\Sigma \), \(\Sigma _{R}\) and W. We provide such bounds under the following additional assumption on the sampling distribution.
Assumption 9
There exists a positive constant \(\gamma \ge 1\) such that
This assumption prevents any entry from being sampled too often and guarantees that the observations are well spread out over the non-corrupted entries. Assumptions 1 and 9 imply that the sampling distribution \(\Pi \) is approximately uniform in the sense that \(\pi _{jk} \asymp \frac{1}{\vert \mathcal {I}\vert }\). In particular, since \(\vert \mathcal {I}\vert \le \frac{m_1m_2}{2}\), Assumption 9 implies Assumption 2 for \(L=2\mu _1\).
Lemma 10
Let the distribution \(\Pi \) on \({\mathcal {X}}'\) satisfy Assumptions 1 and 9. Let also Assumption 3 hold. Then, there exists an absolute constant \(C>0\) such that, for any \(t>0\), the following bounds on the norms of the stochastic terms hold with probability at least \(1-e^{-t}\), as well as the associated bounds in expectation.
(i)
$$\begin{aligned} \Vert W\Vert _{\infty }\le & {} C \left( \frac{\mu _1}{\ae m_1m_2} +\sqrt{ \frac{\mu _1 (t+\log d) }{ \ae N m_1m_2}} +\frac{ t+\log d}{ N}\right) \quad \text {and}\\ {\mathbb {E}}\Vert W\Vert _{\infty }\le & {} C\left( \frac{\mu _1}{\ae m_1m_2} + \sqrt{ \frac{\mu _1 \log d}{ \ae N m_1m_2}} + \frac{\log d}{N}\right) ; \end{aligned}$$
(ii)
$$\begin{aligned} \left\| \Sigma \right\| _{\infty }\le & {} C \sigma \left( \sqrt{\frac{\mu _1(t+\log d)}{\ae Nm_1m_2}} + \frac{t+\log d}{N} \right) \quad \text {and}\\ {\mathbb {E}}\left\| \Sigma \right\| _{\infty }\le & {} C \sigma \left( \sqrt{\frac{\mu _1 \log d}{\ae N m_1 m_2}} + \frac{ \log d}{N} \right) ; \end{aligned}$$
(iii)
$$\begin{aligned} \left\| \Sigma _R\right\| _{\infty }\le & {} C \left( \sqrt{\frac{\mu _1(t+\log d)}{n\,m_1m_2}} + \frac{t+\log d}{n} \right) \quad \text {and}\\ {\mathbb {E}}\left\| \Sigma _R\right\| _{\infty }\le & {} C \left( \sqrt{\frac{\mu _1 \log d}{n\,m_1m_2}} + \frac{\log d}{n} \right) . \end{aligned}$$
Using Lemma 6(i) and Lemma 10, under the conditions
we can choose the regularization parameters \(\lambda _1\) and \(\lambda _2\) in the following way:
With this choice of the regularization parameters, Corollary 8 and Lemma 10 imply the following result.
Corollary 11
Let the distribution \(\Pi \) on \({\mathcal {X}}'\) satisfy Assumptions 1 and 9. Let Assumption 3 hold and \(\left\| L_{0}\right\| _{\infty }\le {\mathbf {a}}\), \(\left\| S_{0}\right\| _{\infty }\le {\mathbf {a}}\). Assume that \(N\le m_1m_2\) and that condition (20) holds. Then, with probability at least \(1-6/d\), for any solution \(({\hat{L}},{\hat{S}})\) of the convex program (19) with the regularization parameters \((\lambda _1,\lambda _2)\) given by (21), we have
where \(C_{\mu ,\mu _1}>0\) can depend only on \(\mu \) and \(\mu _1\). Moreover, with the same probability
Remarks
- 1.:
-
As in the columnwise sparsity case, we can recognize two terms in the upper bound (22). The first term is proportional to \(rM/n\). It is of the same order as the rate of convergence for usual matrix completion, see [18, 20]. The second term accounts for the corruptions and is proportional to the number s of nonzero entries of \(S_0\) and to the number of corrupted observations \(\vert {\tilde{\Omega }}\vert \). We will prove in Sect. 6 below that these error terms are of the correct order up to a logarithmic factor.
- 2.:
-
If \(s\ll n<m_1m_2\), the bound (22) implies that one can estimate a low-rank matrix from a nearly minimal number of observations, even when a part of the observations has been corrupted.
- 3.:
-
If all entries of \(A_0\) are observed, i.e., the matrix decomposition problem is considered, the bound (22) is analogous to the corresponding bound in [1]. Indeed, in that case \(\vert {\tilde{\Omega }}\vert \le s\), \(N=m_1m_2\), \(\ae \le 2\), and we get
$$ \begin{aligned} \dfrac{\Vert L_0-{\hat{L}}\Vert _2^{2}}{m_1m_2}+\dfrac{\Vert S_0-{\hat{S}}\Vert _2^{2}}{m_1m_2}&\lesssim (\sigma \vee {\mathbf {a}})^{2}\left( \dfrac{r\,M}{m_1m_2}+\dfrac{s}{m_1m_2}\right) . \end{aligned}$$
6 Minimax lower bounds
In this section, we prove the minimax lower bounds showing that the rates attained by our estimator are optimal up to a logarithmic factor. We will denote by \(\inf _{({\hat{L}},{\hat{S}})}\) the infimum over all pairs of estimators \(({\hat{L}},{\hat{S}})\) for the components \(L_0\) and \(S_0\) in the decomposition \(A_0 = L_0 + S_0\) where both \({\hat{L}}\) and \({\hat{S}}\) take values in \({\mathbb {R}}^{m_1\times m_2}\). For any \(A_0\in {\mathbb {R}}^{m_1\times m_2}\), let \({\mathbb {P}}_{A_0}\) denote the probability distribution of the observations \((X_1,Y_1,\dots ,X_n,Y_n)\) satisfying (1).
We begin with the case of columnwise sparsity. For any matrix \(S \in {\mathbb {R}}^{m_1\times m_2}\), we denote by \(\Vert S\Vert _{2,0}\) the number of nonzero columns of S. For any integers \(0\le r\le \min (m_1,m_2)\), \(0\le s\le m_2\) and any \({{\mathbf {a}}}>0\), we consider the class of matrices
and define
The following theorem gives a lower bound on the estimation risk in the case of columnwise sparsity.
Theorem 2
Suppose that \(m_1,m_2\ge 2\). Fix \({{\mathbf {a}}}>0\) and integers \(1\le r\le \min (m_1,m_2)\) and \(1\le s\le m_2/2\). Let Assumption 9 be satisfied. Assume that \(Mr\le n\), \(\vert {\tilde{\Omega }}\vert \le sm_1\) and \(\ae \le 1+s/m_2\). Suppose that the variables \(\xi _i\) are i.i.d. Gaussian \({\mathcal {N}}(0,\sigma ^2)\), \(\sigma ^2>0\), for \(i=1,\dots ,n\). Then, there exist absolute constants \(\beta \in (0,1)\) and \(c>0\), such that
We turn now to the case of entrywise sparsity. For any matrix \(S \in {\mathbb {R}}^{m_1\times m_2}\), we denote by \(\Vert S\Vert _{0}\) the number of nonzero entries of S. For any integers \(0\le r\le \min (m_1,m_2)\), \(0\le s \le m_1m_2/2\) and any \({{\mathbf {a}}}>0\), we consider the class of matrices
and define
We have the following theorem for the lower bound in the case of entrywise sparsity.
Theorem 3
Assume that \(m_1,m_2\ge 2\). Fix \({{\mathbf {a}}}>0\) and integers \(1\le r\le \min (m_1,m_2)\) and \(1\le s \le m_1 m_2/2\). Let Assumption 9 be satisfied. Assume that \(Mr\le n\) and there exists a constant \(\rho >0\) such that \(\vert {\tilde{\Omega }}\vert \le \rho \,r\,M\). Suppose that the variables \(\xi _i\) are i.i.d. Gaussian \({\mathcal {N}}(0,\sigma ^2)\), \(\sigma ^2>0\), for \(i=1,\dots ,n\). Then, there exist absolute constants \(\beta \in (0,1)\) and \(c>0\), such that
Notes
This statement actually appears as an intermediate step in the proof of this lemma.
References
Agarwal, A., Negahban, S., Wainwright, M.J.: Noisy matrix decomposition via convex relaxation: optimal rates in high dimensions. Ann. Stat. 40(2), 1171–1197 (2012)
Aubin, J.P., Ekeland, I.: Applied nonlinear analysis. In: Pure and Applied Mathematics. Wiley, New York (1984)
Bauer, F.L., Stoer, J., Witzgall, C.: Absolute and monotonic norms. Numer. Math. 3, 257–264 (1961)
Buehlmann, P., van de Geer, S.: Statistics for High-Dimensional Data: Methods, Theory and Applications. Springer, New York (2011)
Cai, T.T., Zhou, W.: Matrix completion via max-norm constrained optimization
Candès, E.J., Li, X., Ma, Y., Wright, J.: Robust principal component analysis? J. ACM 58(1), 1–37 (2009)
Candès, E.J., Tao, T.: The power of convex relaxation: near-optimal matrix completion. IEEE Trans. Inf. Theory 56(5), 2053–2080 (2010)
Chandrasekaran, V., Sanghavi, S., Parrilo, P.A., Willsky, A.S.: Rank-sparsity incoherence for matrix decomposition. SIAM J. Optim. 21(2), 572–596 (2011)
Chen, Y., Xu, H., Caramanis, C., Sanghavi, S.: Robust matrix completion with corrupted columns. In: Proceedings of ICML, pp. 873–880 (2011)
Chen, Y., Jalali, A., Sanghavi, S., Caramanis, C.: Low-rank matrix recovery from errors and erasures. IEEE Trans. Inf. Theory 59(7), 4324–4337 (2013)
de la Peña, V.H., Giné, E.: Decoupling: From Dependence to Independence. Probability and Its Applications. Springer, New York (1999)
Foygel, R., Srebro, N.: Concentration-based guarantees for low-rank matrix reconstruction. In: Proceedings of the 24th Annual Conference on Learning Theory (COLT) (2011)
Giné, E., Latała, R., Zinn, J.: Exponential and moment inequalities for \(U\)-statistics. In: High Dimensional Probability II (Seattle, 1999). Progress in Probability, vol. 47, pp. 13–38. Birkhäuser, Boston (2000)
Gross, D.: Recovering low-rank matrices from few coefficients in any basis. IEEE Trans. Inf. Theory 57(3), 1548–1566 (2011)
Hsu, D., Kakade, S.M., Zhang, T.: Robust matrix decomposition with sparse corruptions. IEEE Trans. Inf. Theory 57(11), 7221–7234 (2011)
Xu, H., Caramanis, C., Sanghavi, S.: Robust PCA via outlier pursuit. IEEE Trans. Inf. Theory 58(5), 3047–3064 (2012)
Keshavan, R.H., Montanari, A., Oh, S.: Matrix completion from noisy entries. J. Mach. Learn. Res. 11, 2057–2078 (2010)
Klopp, O.: Noisy low-rank matrix completion with general sampling distribution. Bernoulli 20(1), 282–303 (2014)
Koltchinskii, V.: Oracle Inequalities in Empirical Risk Minimization and Sparse Recovery Problems. Lectures from the 38th Probability Summer School held in Saint-Flour, 2008. Lecture Notes in Mathematics, vol. 2033. Springer, Heidelberg (2011)
Koltchinskii, V., Lounici, K., Tsybakov, A.B.: Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion. Ann. Stat. 39(5), 2302–2329 (2011)
Negahban, S., Wainwright, M.J.: Restricted strong convexity and weighted matrix completion: optimal bounds with noise. J. Mach. Learn. Res. 13, 1665–1697 (2012)
Recht, B.: A simpler approach to matrix completion. J. Mach. Learn. Res. 12, 3413–3430 (2011)
Rohde, A., Tsybakov, A.: Estimation of high-dimensional low-rank matrices. Ann. Stat. 39(2), 887–930 (2011)
Rudelson, M., Vershynin, R.: Hanson–Wright inequality and sub-Gaussian concentration. Electron. Commun. Probab. 18(82), 9 (2013)
Tsybakov, A.B.: Introduction to Nonparametric Estimation. Springer Series in Statistics. Springer, New York (2009)
Vershynin, R.: Introduction to the non-asymptotic analysis of random matrices. arXiv:1011.3027 (2010)
Li, X.: Compressed sensing and matrix completion with constant proportion of corruptions. Constr. Approx. 37(1) (2013)
Acknowledgments
The work of O. Klopp was conducted as part of the project Labex MME-DII (ANR11-LBX-0023-01). The work of K. Lounici was supported in part by Simons Grant 315477 and by NSF Career Grant DMS-1454515. The work of A.B. Tsybakov was supported by GENES and by the French National Research Agency (ANR) under the Grants IPANEMA (ANR-13-BSH1-0004-02), Labex ECODEC (ANR—11-LABEX-0047), ANR—11-IDEX-0003-02, and by the “Chaire Economie et Gestion des Nouvelles Données”, under the auspices of Institut Louis Bachelier, Havas-Media and Paris-Dauphine. The authors want to thank the anonymous referee for his extremely valuable remarks.
Appendices
Appendix A: Proofs of Theorem 1 and of Corollary 7
1.1 A.1: Proof of Theorem 1
The proofs of the upper bounds have similarities with the methods developed in [18] for noisy matrix completion, but the presence of corruptions in our setting requires a new approach, in particular for proving the "restricted strong convexity" property (Lemma 15), which is the main difficulty in the proof.
Recall that our estimator is defined as
and our goal is to bound from above the Frobenius norms \(\Vert L_0-{\hat{L}}\Vert _2^{2}\) and \(\Vert S_0-{\hat{S}}\Vert _2^{2}\).
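To make the objective concrete, here is a minimal numerical sketch, assuming the elementwise-sparsity penalty \(\mathcal {R}(S)=\Vert S\Vert _1\) and small illustrative dimensions and tuning constants; it is a plain proximal-gradient iteration offered for intuition, not the algorithm analyzed in the paper.

```python
import numpy as np

def svd_soft_threshold(M, tau):
    # prox of tau * nuclear norm: soft-threshold the singular values
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ (np.maximum(s - tau, 0)[:, None] * Vt)

def soft_threshold(M, tau):
    # prox of tau * l1 norm: entrywise soft-thresholding
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0)

def robust_completion(Y, mask, lam1, lam2, n_iter=500, step=0.5):
    # minimize (1/N) * sum over observed entries of (Y - L - S)^2
    #          + lam1 * ||L||_*  + lam2 * ||S||_1
    N = mask.sum()
    L = np.zeros_like(Y)
    S = np.zeros_like(Y)
    for _ in range(n_iter):
        G = mask * (L + S - Y) * (2.0 / N)   # gradient of the data-fit term
        L = svd_soft_threshold(L - step * G, step * lam1)
        S = soft_threshold(S - step * G, step * lam2)
    return L, S

def objective(Y, mask, L, S, lam1, lam2):
    N = mask.sum()
    return ((mask * (L + S - Y)) ** 2).sum() / N \
        + lam1 * np.linalg.svd(L, compute_uv=False).sum() \
        + lam2 * np.abs(S).sum()

# tiny synthetic demo (all sizes and constants are illustrative)
rng = np.random.default_rng(0)
m1, m2, r = 30, 30, 2
L0 = rng.standard_normal((m1, r)) @ rng.standard_normal((r, m2)) / np.sqrt(r)
S0 = np.where(rng.random((m1, m2)) < 0.05, 3.0, 0.0)   # sparse corruptions
mask = rng.random((m1, m2)) < 0.6                      # observed entries
Y = mask * (L0 + S0 + 0.1 * rng.standard_normal((m1, m2)))
L_hat, S_hat = robust_completion(Y, mask, lam1=0.05, lam2=0.02)
```

Since the step size is far below the reciprocal Lipschitz constant of the quadratic term, the objective decreases monotonically; in practice \(\lambda _1,\lambda _2\) would be chosen following (21).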
- (1):
-
Set \({\mathcal {F}}(L,S)=\frac{1}{N}\sum ^{N}_{i=1} \left( Y_i-\left\langle X_i,L+S\right\rangle \right) ^{2}+\lambda _1 \Vert L\Vert _*+\lambda _2 {\mathcal {R}}( S)\), \(\Delta L=L_0-{\hat{L}}\) and \(\Delta S=S_0-{\hat{S}}\). Using the inequality \({\mathcal {F}}({\hat{L}},{\hat{S}})\le {\mathcal {F}}(L_0,S_0)\) and (1) we get
$$\begin{aligned}&\dfrac{1}{N}\sum ^{N}_{i=1} \left( \left\langle X_i,\Delta L+\Delta S\right\rangle +\xi _{i}\right) ^{2}+\lambda _{1} \Vert {\hat{L}}\Vert _*+\lambda _{2}{\mathcal {R}}({\hat{S}})\nonumber \\&\qquad \le \dfrac{1}{N}\sum ^{N}_{i=1} \xi _{i}^{2}+\lambda _{1} \Vert L_0\Vert _* +\lambda _{2}{\mathcal {R}}(S_0). \end{aligned}$$After some algebra this implies
$$\begin{aligned} \dfrac{1}{N}\sum _{i\in \Omega } \left\langle X_i,\Delta L+\Delta S\right\rangle ^{2}\le & {} \underset{{\mathbf {I}}}{\underbrace{\dfrac{2}{N}\sum _{i\in {\tilde{\Omega }}} \left| \left\langle \xi _iX_i,\Delta L+\Delta S\right\rangle \right| -\dfrac{1}{N}\sum _{i\in {\tilde{\Omega }}} \left\langle X_i, \Delta L+\Delta S\right\rangle ^{2}}}\nonumber \\&+\, \underset{{\mathbf {II}}}{\underbrace{2\left| \left\langle \Sigma ,\Delta L\right\rangle \right| +\lambda _{1} \left( \Vert L_0\Vert _* -\Vert {\hat{L}}\Vert _*\right) }}\nonumber \\&+\,\underset{{\mathbf {III}}}{\underbrace{2\left| \left\langle \Sigma ,\Delta S_{\mathcal {I}}\right\rangle \right| +\lambda _{2}\left( \mathcal {R}(S_0)-\mathcal {R}({\hat{S}})\right) }} \end{aligned}$$(25)where \(\Sigma =\frac{1}{N}\sum _{i\in \Omega } \xi _iX_i\) and we have used the equality \(\left\langle \Sigma ,\Delta S\right\rangle =\left\langle \Sigma ,\Delta S_{\mathcal {I}}\right\rangle \). We now estimate each of the three terms on the right hand side of (25) separately. This will be done on the random event
$$\begin{aligned} {\mathcal {U}}=\left\{ \underset{1\le i\le N}{\max }\vert \xi _{i}\vert \le C_*\sigma \sqrt{\log d}\right\} \end{aligned}$$(26)where \(C_*>0\) is a suitably chosen constant. Using a standard bound on the maximum of sub-Gaussian variables and the constraint \(N\le m_1m_2\), we get that there exists an absolute constant \(C_*>0\) such that \({\mathbb {P}}({\mathcal {U}})\ge 1-\frac{1}{2d}\). In what follows, we take this constant \(C_*\) in the definition of \({\mathcal {U}}\).
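The event \({\mathcal {U}}\) rests on the standard fact that the maximum of N sub-Gaussian variables is of order \(\sigma \sqrt{\log N}\lesssim \sigma \sqrt{\log d}\) when \(N\le m_1m_2\). A quick numerical sanity check, with illustrative sizes and an illustrative constant \(C_*=3\) (not the constant from the proof):

```python
import numpy as np

rng = np.random.default_rng(42)
sigma, m1, m2, N = 1.0, 200, 200, 5000     # illustrative sizes, N <= m1*m2
d = m1 + m2
xi = sigma * rng.standard_normal(N)        # i.i.d. Gaussian noise draws

C_star = 3.0                               # illustrative constant
bound = C_star * sigma * np.sqrt(np.log(d))
print(np.abs(xi).max(), bound)             # the max stays below the bound
```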
We start by estimating \({\mathbf {I}}\). On the event \({\mathcal {U}}\), we get
$$\begin{aligned} \mathbf{I}\le & {} \frac{1}{N}\sum _{i\in {\tilde{\Omega }}} \xi ^{2}_i \le \dfrac{C\,\sigma ^{2}\vert {\tilde{\Omega }}\vert \log (d)}{N}. \end{aligned}$$(27)Now we estimate \({\mathbf {II}}\). For a linear subspace S of a Euclidean space, let \(P_S\) denote the orthogonal projector onto S and let \(S^\bot \) denote the orthogonal complement of S. For any \(A\in {\mathbb {R}}^{m_1\times m_2}\), let \(u_j(A)\) and \(v_j(A)\) be the left and right orthonormal singular vectors of A, respectively. Denote by \(S_1(A)\) the linear span of \(\{u_j(A)\}\), and by \(S_2(A)\) the linear span of \(\{v_j(A)\}\). We set
$$\begin{aligned} {\mathbf {P}}_A^{\bot }(B)= & {} P_{S_1^{\bot }(A)}BP_{S_2^{\bot }(A)} \quad \text {and}\quad {\mathbf {P}}_A(B)=B- {\mathbf {P}}_A^{\bot }(B). \end{aligned}$$By definition of \({\mathbf {P}}_{L_0}^{\bot }\), for any matrix B the singular vectors of \({\mathbf {P}}_{L_0}^{\bot }(B)\) are orthogonal to the space spanned by the singular vectors of \(L_0\). This implies that \(\left\| L_0+{\mathbf {P}}_{L_0}^{\bot }(\Delta L) \right\| _*=\left\| L_0 \right\| _*+\left\| {\mathbf {P}}_{L_0}^{\bot }(\Delta L) \right\| _*\). Thus,
$$\begin{aligned} \Vert {\hat{L}}\Vert _*= & {} \left\| L_0 +\Delta L\right\| _*\\= & {} \left\| L_0 +{\mathbf {P}}_{L_0}^{\bot }(\Delta L) +{\mathbf {P}}_{L_0}(\Delta L)\right\| _*\\\ge & {} \left\| L_0 +{\mathbf {P}}_{L_0}^{\bot }(\Delta L)\right\| _* -\left\| {\mathbf {P}}_{L_0}(\Delta L)\right\| _*\\= & {} \left\| L_0 \right\| _*+\left\| {\mathbf {P}}_{L_0}^{\bot }(\Delta L)\right\| _*-\left\| {\mathbf {P}}_{L_0}(\Delta L)\right\| _*, \end{aligned}$$which yields
$$\begin{aligned} \Vert L_0 \Vert _*-\Vert {\hat{L}}\Vert _*\le \left\| {\mathbf {P}}_{L_0}(\Delta L)\right\| _*-\left\| {\mathbf {P}}_{L_0}^{\bot }(\Delta L)\right\| _*. \end{aligned}$$(28)Using (28) and the duality between the nuclear and the operator norms, we obtain
$$\begin{aligned} {\mathbf {II}}\le 2 \Vert \Sigma \Vert \Vert \Delta L\Vert _{*}+\lambda _{1}\left( \left\| {\mathbf {P}}_{L_0}(\Delta L)\right\| _*-\left\| {\mathbf {P}}_{L_0}^{\bot }(\Delta L)\right\| _*\right) . \end{aligned}$$The assumption that \(\lambda _1\ge 4\Vert \Sigma \Vert \) and the triangle inequality imply
$$\begin{aligned} {\mathbf {II}} \le \dfrac{3}{2}\lambda _1\left\| {\mathbf {P}}_{L_0}(\Delta L)\right\| _*\le \dfrac{3}{2}\lambda _1\sqrt{2r}\left\| \Delta L\right\| _2 \end{aligned}$$(29)where \(r=\mathrm{rank}(L_0)\) and we have used that \(\mathrm{rank}({\mathbf {P}}_{L_0}(\Delta L))\le 2\,\mathrm{rank}(L_0)\).
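The two facts about \({\mathbf {P}}_{L_0}\) used above, additivity of the nuclear norm on the decomposition and \(\mathrm{rank}({\mathbf {P}}_{L_0}(B))\le 2\,\mathrm{rank}(L_0)\), can be checked numerically; a minimal sketch with random matrices of illustrative size:

```python
import numpy as np

rng = np.random.default_rng(1)
m1, m2, r = 20, 15, 3                      # illustrative sizes
L0 = rng.standard_normal((m1, r)) @ rng.standard_normal((r, m2))
B = rng.standard_normal((m1, m2))

U, _, Vt = np.linalg.svd(L0, full_matrices=False)
P1 = U[:, :r] @ U[:, :r].T                 # projector onto left singular space of L0
P2 = Vt[:r, :].T @ Vt[:r, :]               # projector onto right singular space of L0

P_perp = (np.eye(m1) - P1) @ B @ (np.eye(m2) - P2)   # P_{L0}^perp(B)
P_par = B - P_perp                                   # P_{L0}(B)

nuc = lambda M: np.linalg.svd(M, compute_uv=False).sum()

# additivity: ||L0 + P^perp(B)||_* = ||L0||_* + ||P^perp(B)||_*
add_gap = abs(nuc(L0 + P_perp) - (nuc(L0) + nuc(P_perp)))
# rank bound: rank(P_{L0}(B)) <= 2 * rank(L0)
rank_par = np.linalg.matrix_rank(P_par)
print(add_gap, rank_par)
```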
For the third term in (25), we use the duality between \(\mathcal {R}\) and \(\mathcal {R}^{*}\), and the identity \(\Delta S_{\mathcal {I}}=-{\hat{S}}_{\mathcal {I}}\):
$$\begin{aligned} {\mathbf {III}}\le 2\mathcal {R}^{*}(\Sigma )\mathcal {R}({\hat{S}}_{\mathcal {I}})+\lambda _{2}( \mathcal {R}(S_0)-\mathcal {R}({\hat{S}})). \end{aligned}$$This and the assumption that \(\lambda _2\ge 4\mathcal {R}^{*}(\Sigma )\) imply
$$\begin{aligned} {\mathbf {III}}\le \lambda _2\mathcal {R}( S_0). \end{aligned}$$(30)Plugging (29), (30) and (27) in (25) we get that, on the event \({\mathcal {U}}\),
$$\begin{aligned} \dfrac{1}{n}\sum _{i\in \Omega } \left\langle X_i,\Delta L+\Delta S\right\rangle ^{2}\le & {} \frac{3\,\ae \,\lambda _1}{\sqrt{2}}\sqrt{r}\left\| \Delta L\right\| _2 +\ae \lambda _2\mathcal {R}( S_0) +\dfrac{C\sigma ^{2}\vert {\tilde{\Omega }} \vert \log (d)}{n}\nonumber \\ \end{aligned}$$(31)where \(\ae =N/n\).
- (2):
-
Second, we will show that a kind of restricted strong convexity holds for the random sampling operator given by \((X_i)\) on a suitable subset of matrices. In words, we prove that the observation operator captures a substantial component of any pair of matrices L, S belonging to a properly chosen constrained set (cf. Lemma 15(ii) below for the exact statement). This will imply that, with high probability,
$$\begin{aligned} \dfrac{1}{n}\sum _{i\in \Omega } \left\langle X_i,\Delta L+\Delta S\right\rangle ^{2}\ge \left\| \Delta L+\Delta S\right\| _{L_2(\Pi )}^{2}-{\mathcal {E}} \end{aligned}$$(32)with an appropriate residual \({\mathcal {E}}\), whenever we prove that \((\Delta L,\Delta S)\) belongs to the constrained set. This will be a substantial element of the remaining part of the proof. The result of the theorem will then be deduced by combining (31) and (32).
We start by defining our constrained set. For positive constants \(\delta _1\) and \(\delta _2\), we first introduce the following set of matrices where \(\Delta S\) should lie:
The constants \(\delta _1\) and \(\delta _2\) define the constraints on the \(L_2(\Pi )\)-norm and on the sparsity of the component S. The error term \({\mathcal {E}}\) in (32) depends on \(\delta _1\) and \(\delta _2\). We will specify the suitable values of \(\delta _1\) and \(\delta _2\) for the matrix \(\Delta S\) later. Next, we define the following set of pairs of matrices:
where \(\kappa \) and \(\tau <m_1\wedge m_2\) are some positive constants. This will be used for \(A=\Delta L\) and \(B=\Delta S\). If the \(L_2(\Pi )\)-norm of the sum of two matrices is too small, the right hand side of (32) is negative. The first inequality in the definition of \({\mathcal {D}}(\tau ,\kappa )\) prevents this. The condition \(\left\| A\right\| _{*}\le \sqrt{\tau } \left\| A_{\mathcal {I}}\right\| _{2}+\kappa \) is a relaxed form of the condition \(\left\| A\right\| _{*}\le \sqrt{\tau } \left\| A\right\| _{2}\) satisfied by matrices of rank \(\tau \). We will show that, with high probability, the matrix \(\Delta L\) satisfies this condition with \(\tau =C\,\mathrm{rank}(L_0)\) and a small \(\kappa \). To prove it, we need the bound \(\mathcal {R}(B)\le \delta _2\) on the corrupted part.
Finally, define our constrained set as the intersection
We now return to the proof of the theorem. To prove (11), we bound separately the norms \(\left\| \Delta L\right\| _2\) and \(\left\| \Delta S\right\| _2\). Note that
and similarly,
In view of these inequalities, it is enough to bound the quantities \(\Vert \Delta S_{\mathcal {I}}\Vert _{L_2(\Pi )}^{2}\) and \(\Vert \Delta L_{\mathcal {I}}\Vert _2^{2}\). A bound on \(\Vert \Delta S_{\mathcal {I}}\Vert _{L_2(\Pi )}^{2}\) with the rate claimed in (11) is given in Lemma 14 below. In order to bound \(\Vert \Delta L_{\mathcal {I}}\Vert _{L_2(\Pi )}^{2}\) (or \(\Vert \Delta L_{\mathcal {I}}\Vert _2^{2}\), depending on the case), we will need the following argument.
Case 1 Suppose that \(\left\| \Delta L+\Delta S\right\| _{L_2(\Pi )}^{2} < 16{\mathbf {a}}^{2}\sqrt{\dfrac{64\,\log (d)}{\log \left( 6/5\right) \,n}}\). Then a straightforward inequality
together with Lemma 14 below implies that, with probability at least \(1-2.5/d\),
where
Note also that \(\Psi _4\le C(\Psi _1+\Psi _2 +\Psi _3)\). In view of (34), (36) and of the fact that \(\vert \mathcal {I}\vert \le m_1m_2\), the bound on \(\Vert \Delta L\Vert _2^{2}\) stated in the theorem holds with probability at least \(1-2.5/d\).
Case 2 Assume now that \(\left\| \Delta L+\Delta S\right\| _{L_2(\Pi )}^{2} \ge 16{\mathbf {a}}^{2}\sqrt{\dfrac{64\,\log (d)}{\log \left( 6/5\right) \,n}}\). We will show that in this case and with an appropriate choice of \(\delta _1,\delta _2,\tau \) and \(\kappa \), the pair \(\frac{1}{4{\mathbf {a}}}(\Delta L, \Delta S)\) belongs to the intersection \({\mathcal {D}}(\tau ,\kappa )\cap \{{\mathbb {R}}^{m_1\times m_2}\times {\mathcal {B}}(\delta _1,\delta _2)\}\).
Lemma 13 below and (27) imply that, on the event \({\mathcal {U}}\),
Lemma 14 yields that, with probability at least \(1-2.5\,d^{-1}\),
This property and (37) imply that \(\frac{1}{4{\mathbf {a}}}\left( \Delta L,\Delta S\right) \in {\mathcal {D}}(\tau ,\kappa )\cap \{{\mathbb {R}}^{m_1\times m_2}\times {\bar{\mathcal {B}}}\}\) with probability at least \(1-2.5\,d^{-1}\), where
Therefore, we can apply Lemma 15(ii). From Lemma 15(ii) and (31) we obtain that, with probability at least \(1-4.5\,d^{-1}\),
where
Using an elementary argument and then (34) we find
This inequality and (38) yield
Using again (35), Lemma 14, (9) and the bound \(\vert \mathcal {I}\vert \le m_1m_2\) we obtain
This and the inequality \(\sqrt{2r\vert {\tilde{\mathcal {I}}}\vert }\,{\mathbb {E}}\left( \Vert \Sigma _R\Vert \right) \le \dfrac{\vert {\tilde{\mathcal {I}}}\vert }{\mu \,m_1m_2}+\mu \,m_1m_2\,r\,\left( {\mathbb {E}}\left( \Vert \Sigma _R\Vert \right) \right) ^{2}\) imply that, with probability at least \(1-4.5\,d^{-1}\),
In view of (40) and (34), \(\Vert \Delta L\Vert _2^{2}\) is bounded by the right hand side of (11) with probability at least \(1-4.5\,d^{-1}\). Finally, inequality (12) follows from Lemma 14, (9) and the identity \(\Delta S_{\mathcal {I}}=-{\hat{S}}_{\mathcal {I}}\).
Lemma 12
Assume that \(\lambda _2\ge 4\left( \mathcal {R}^{*}(\Sigma )+2{\mathbf {a}}\mathcal {R}^{*}(W)\right) \). Then, we have
Proof
Let \(\partial \Vert \cdot \Vert _{*}\), and \(\partial {\mathcal {R}}\) denote the subdifferentials of \(\Vert \cdot \Vert _{*}\) and of \({\mathcal {R}}\), respectively. By the standard condition for optimality over a convex set (see [2, Chapter 4, Section 2, Corollary 6]), we have
for all feasible pairs (L, S). In particular, for \(({\hat{L}},S_0)\) we obtain
which implies
Using the elementary inequality \(2ab\le a^2+b^2\) and the bound \(\Vert \Delta L\Vert _{\infty }\le 2{\mathbf {a}}\) we find
Combining the last two displays we get
By Lemma 18,
where \(W=\frac{1}{N}\sum _{i\in \Omega }X_{i}\). On the other hand, the convexity of \(\mathcal {R}(\cdot )\) and the definition of subdifferential imply
Plugging (43) and (44) in (42) we obtain
Next, the decomposability of \(\mathcal {R}(\cdot )\), the identity \((S_0)_{\mathcal {I}}=0\) and the triangle inequality yield
Since \(\lambda _2\ge 4\left( 2{\mathbf {a}}\mathcal {R}^{*}(W)+\mathcal {R}^{*}( \Sigma )\right) \) the last two displays imply
Thus,
Since we assume that all unobserved entries of \(S_0\) are zero, we have \((S_0)_{\tilde{\mathcal {I}}}=(S_0)_{\tilde{\Omega }}\). On the other hand, \({\hat{S}}_{\tilde{\mathcal {I}}}={\hat{S}}_{\tilde{\Omega }}\) since \(\mathcal {R}(\cdot )\) is a monotonic norm. Indeed, adding to \({\hat{S}}\) a non-zero element on the non-observed part increases \(\mathcal {R}({\hat{S}})\) but does not modify \(\frac{1}{N}\sum ^{N}_{i=1} \left( Y_i-\left\langle X_i,L+S\right\rangle \right) ^{2}\). To conclude, we have \(\Delta S_{{\tilde{\mathcal {I}}}}=\Delta S_{{\tilde{\Omega }}}\), which, together with (45), implies the lemma. \(\square \)
Lemma 13
Suppose that \(\lambda _1\ge 4\Vert \Sigma \Vert \) and \(\lambda _2\ge 4\mathcal {R}^{*}(\Sigma )\). Then,
Proof
Using (41) for \((L,S)=(L_0,S_0)\) we obtain
The convexity of \(\Vert \cdot \Vert _{*}\) and of \(\mathcal {R}(\cdot )\) and the definition of the subdifferential imply
Together with (46), this yields
Using the conditions \(\lambda _1\ge 4\Vert \Sigma \Vert \), \(\lambda _2\ge 4\mathcal {R}^{*}(\Sigma )\), the triangle inequality and (28) we get
Since we assume that all unobserved entries of \(S_0\) are zero, we obtain \(\mathcal {R}(S_0)\le {\mathbf {a}}\mathcal {R}({\mathbf {Id}}_{\tilde{\Omega }})\). Using this inequality in the last display proves the lemma. \(\square \)
Lemma 14
Let \(n>m_1\) and \(\lambda _2\ge 4\left( \mathcal {R}^{*}(\Sigma )+2{\mathbf {a}}\mathcal {R}^{*}(W)\right) \). Suppose that the distribution \(\Pi \) on \({\mathcal {X}}'\) satisfies Assumptions 1 and 2. Let \(\left\| S_0\right\| _{\infty }\le {\mathbf {a}}\) for some constant \({\mathbf {a}}\) and let Assumption 3 be satisfied. Then, with probability at least \(1-2.5\,d^{-1}\),
and
Proof
Using the inequality \({\mathcal {F}}({\hat{L}},{\hat{S}})\le {\mathcal {F}}({\hat{L}},S_0)\) and (1) we obtain
which implies
From Lemma 18 and the duality between \(\mathcal {R}\) and \(\mathcal {R}^{*}\) we obtain
Since here \(\Delta S_{\mathcal {I}}=-{\hat{S}}_{\mathcal {I}}\) and \(\lambda _2\ge 4\left( \mathcal {R}^{*}(\Sigma )+2{\mathbf {a}}\mathcal {R}^{*}(W)\right) \) it follows that
Now, Lemma 12 and the bound \(\Vert \Delta S\Vert _{\infty }\le 2{\mathbf {a}}\) imply that, on the event \({\mathcal {U}}\) defined in (26),
Thus, (48) is proved. To prove (47), consider the following two cases.
Case I \(\Vert \Delta S\Vert ^{2}_{L_{2}(\Pi )}< 4{\mathbf {a}}^{2}\sqrt{\frac{64\log (d)}{\log (6/5)\,n}}\). Then (47) holds trivially.
Case II \(\Vert \Delta S\Vert ^{2}_{L_{2}(\Pi )}\ge 4{\mathbf {a}}^{2}\sqrt{\frac{64\log (d)}{\log (6/5)\,n}}\). Then inequality (50) and the bound \(\Vert \Delta S\Vert _{\infty }\le 2{\mathbf {a}}\) imply that, on the event \({\mathcal {U}}\),
where, for any \(\delta >0\), the set \({\mathcal {C}}(\delta )\) is defined as:
Thus, we can apply Lemma 15(i) below. In view of this lemma, the inequalities (49), (27), \(\Vert \Delta L\Vert _{\infty }\le 2{\mathbf {a}}\) and \(\mathcal {R}\left( S_0\right) \le {\mathbf {a}}\mathcal {R}({\mathbf {Id}}_{\tilde{\mathcal {I}}})\) imply that (47) holds with probability at least \(1-2.5\,d^{-1}\). \(\square \)
Lemma 15
Let the distribution \(\Pi \) on \({\mathcal {X}}'\) satisfy Assumptions 1 and 2. Let \(\delta , \delta _1, \delta _2, \tau \), and \(\kappa \) be positive constants. Then, the following properties hold.
-
(i)
With probability at least \(1-\dfrac{2}{d}\),
$$\begin{aligned} \frac{1}{n}\sum _{i\in \Omega } \left\langle X_i,S\right\rangle ^{2}\ge \dfrac{1}{2}\Vert S\Vert _{L_2(\Pi )}^{2}-8\delta {\mathbb {E}}( \mathcal {R}^{*}(\Sigma _R)) \end{aligned}$$for any \(S\in {\mathcal {C}}(\delta )\).
-
(ii)
With probability at least \(1-\dfrac{2}{d}\),
$$\begin{aligned} \frac{1}{n}\sum _{i\in \Omega } \left\langle X_i,L+S\right\rangle ^{2}\ge & {} \dfrac{1}{2}\Vert L+S\Vert _{L_2(\Pi )}^{2}- \{360\mu \,\vert \mathcal {I}\vert \,\tau \left( {\mathbb {E}}\left( \Vert \Sigma _R\Vert \right) \right) ^{2}\\&+4\delta _1^{2}+8\delta _2\,{\mathbb {E}}( \mathcal {R}^{*}(\Sigma _R))+8\kappa {\mathbb {E}}\left( \Vert \Sigma _R\Vert \right) \} \end{aligned}$$for any pair \((L,S)\in {\mathcal {D}}(\tau ,\kappa )\cap \{{\mathbb {R}}^{m_1\times m_2}\times {\mathcal {B}}(\delta _1,\delta _2)\}\).
Proof
We give a unified proof of (i) and (ii). Let \(A=S\) for (i) and \(A=L+S\) for (ii). Set
and
To prove the lemma it is enough to show that the probability of the random event
is smaller than \(2/d\). In order to estimate the probability of \({\mathcal {B}}\), we use a standard peeling argument. Set \(\nu =\sqrt{\dfrac{64\,\log (d)}{\log \left( 6/5\right) \,n}}\) and \(\alpha =\dfrac{6}{5}\). For \(l\in {\mathbb {N}}\), define
If the event \({\mathcal {B}}\) holds, there exist \(l\in {\mathbb {N}}\) and a matrix \(A\in {\mathcal {C}}\cap S_l\) such that
For each \(l\in {\mathbb {N}}\), consider the random event
where
Note that \(A\in S_l\) implies that \(A\in {\mathcal {C}}'(\alpha ^{l}\nu )\). This and (52) imply the inclusion \({\mathcal {B}}\subset \cup _{l=1}^\infty \,{\mathcal {B}}_l\). By Lemma 16, \({\mathbb {P}}\left( {\mathcal {B}}_l\right) \le \exp (-c_5\,n\,\alpha ^{2l}\nu ^{2})\) where \(c_5=1/128\). Using the union bound, we find
where we have used the inequality \(e^{x}\ge x\). We finally obtain, for \(\nu =\sqrt{\dfrac{64\,\log (d)}{\log \left( 6/5\right) \,n}}\),
\(\square \)
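For completeness, the union-bound computation in the proof above can be sketched explicitly. Since \(\alpha ^{2l}=e^{2l\log \alpha }\ge 2l\log \alpha \) (this is where the inequality \(e^x\ge x\) enters),

```latex
\sum_{l=1}^{\infty}{\mathbb P}({\mathcal B}_l)
  \le \sum_{l=1}^{\infty}\exp\bigl(-c_5\,n\,\alpha^{2l}\nu^{2}\bigr)
  \le \sum_{l=1}^{\infty}\exp\bigl(-2c_5\,n\,\nu^{2}\log(\alpha)\,l\bigr)
  = \frac{e^{-2c_5 n\nu^{2}\log\alpha}}{1-e^{-2c_5 n\nu^{2}\log\alpha}},
```

and plugging in \(\nu ^2=\frac{64\,\log d}{\log (6/5)\,n}\), \(\alpha =6/5\) and \(c_5=1/128\) gives \(2c_5 n\nu ^2\log \alpha =\log d\), so the sum is at most \(\frac{1/d}{1-1/d}=\frac{1}{d-1}\le \frac{2}{d}\) for \(d\ge 2\).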
Let
Lemma 16
Let the distribution \(\Pi \) on \({\mathcal {X}}'\) satisfy Assumptions 1 and 2. Then,
where \(c_5=\dfrac{1}{128}\).
Proof
We follow a standard approach: first we show that \(Z_T\) concentrates around its expectation, and then we bound the expectation from above. Since \(\left\| A\right\| _{\infty }\le 1\) for all \(A\in {\mathcal {C}}'(T)\), we have \(\left| \left\langle X_i,A\right\rangle \right| \le 1\). We first apply a Talagrand-type concentration inequality (cf. [4, Theorem 14.2]), which implies that
where \(c_5=\dfrac{1}{128}\). Next, we bound the expectation \({\mathbb {E}}\left( Z_T\right) \). By a standard symmetrization argument (see e.g. [19, Theorem 2.1]) we obtain
where \(\{\epsilon _i\}_{i=1}^{n}\) is an i.i.d. Rademacher sequence. Then, the contraction inequality (see e.g. [19]) yields
where \(\Sigma _R=\dfrac{1}{n} {\sum \nolimits ^{n}_{i=1}} \epsilon _i X_i\). Now, to obtain a bound on \({\mathbb {E}}({\sup }_{A\in {\mathcal {C}}'(T)}\left| \left\langle \Sigma _R,A\right\rangle \right| )\) we will consider separately the cases \({\mathcal {C}}={\mathcal {C}}(\delta )\) and \({\mathcal {C}}= {\mathcal {D}}(\tau ,\kappa )\cap \{{\mathbb {R}}^{m_1\times m_2}\times {\mathcal {B}}(\delta _1,\delta _2)\}\).
Case I \(A\in {\mathcal {C}}(\delta )\) and \(\left\| A\right\| _{L_2(\Pi )}^{2}\le T\). By the definition of \({\mathcal {C}}(\delta )\) we have \(\mathcal {R}( A) \le \delta .\) Thus, by the duality between \(\mathcal {R}\) and \(\mathcal {R}^{*}\),
This and the concentration inequality (53) imply
with \(c_5=\dfrac{1}{128}\) and \({\mathcal {E}}=8\delta \,{\mathbb {E}}\left( \mathcal {R}^{*}(\Sigma _R)\right) \) as stated.
Case II \(A=L+S\) where \((L,S)\in {\mathcal {D}}(\tau ,\kappa )\), \(S\in {\mathcal {B}}(\delta _1,\delta _2)\), and \(\Vert L+S\Vert ^{2}_{L_2(\Pi )}\le T\). Then, by the definition of \({\mathcal {B}}(\delta _1,\delta _2)\), we have \(\mathcal {R}(S)\le \delta _2.\) On the other hand, the definition of \({\mathcal {D}}(\tau ,\kappa )\) yields
and
The last two inequalities imply
Therefore we can write
Combining this bound with the following elementary inequalities:
and using the concentration bound (53) we obtain
with \(c_5=\dfrac{1}{128}\) and
as stated. \(\square \)
1.2 A.2: Proof of Corollary 7
With \(\lambda _1\) and \(\lambda _2\) given by (16) we obtain
Appendix B: Proof of Theorems 2 and 3
Note that the assumption \(\ae \le 1+s/m_2\) implies that
Assume w.l.o.g. that \(m_1\ge m_2\). For a constant \(0<\gamma \le 1\), define
and consider the associated set of block matrices
where O denotes the \(m_1\times (m_2-r\lfloor m_2/(2r) \rfloor )\) zero matrix, and \(\lfloor x \rfloor \) is the integer part of x.
We define similarly the set of matrices
and
where \({\tilde{O}}\) is the \(m_1\times (m_2-s )\) zero matrix. We now set
Remark 2
In the case \(m_1< m_2\), we only need to change the construction of the low-rank component of the test set. We first introduce a matrix \({\tilde{L}}= \left( \begin{array}{c|c}{\bar{L}}&{}O\\ \end{array}\right) \in {\mathbb {R}}^{r \times m_2}\), where \({\bar{L}} \in {\mathbb {R}}^{r \times (m_2/2)}\) has entries in \(\{0, \gamma (\sigma \wedge {\mathbf {a}}) (\frac{ rM}{n} )^{1/2}\}\), and then we replicate this matrix to obtain a block matrix L of size \(m_1 \times m_2\).
By construction, any element of \({\mathcal {A}}\), as well as the difference of any two elements of \({\mathcal {A}}\), can be decomposed into a low-rank component L of rank at most r and a group sparse component S with at most s nonzero columns. In addition, the entries of any matrix in \({\mathcal {A}}\) take values in \([0,{\mathbf {a}}]\). Thus, \({\mathcal {A}}\subset {\mathcal {A}}_{GS}(r,s,{\mathbf {a}})\).
We first establish a lower bound of the order rM / n. Let \({\tilde{\mathcal {A}}}\subset {\mathcal {A}}\) be such that for any \(A=L+S\in {\tilde{\mathcal {A}}}\) we have \(S={\mathbf {0}}\). The Varshamov–Gilbert bound (cf. Lemma 2.9 in [25]) guarantees the existence of a subset \({\mathcal {A}} ^{0}\subset {\tilde{\mathcal {A}}}\) with cardinality \({\mathrm {Card}}({\mathcal {A}} ^0) \ge 2^{(rM)/8}+1\) containing the zero \(m_{1}\times m_2\) matrix \(\mathbf{0}\) and such that, for any two distinct elements \(A_1\) and \(A_2\) of \({\mathcal {A}} ^{0}\),
Since \(\xi _i\sim {\mathcal {N}}(0,\sigma ^2)\) we get that, for any \(A\in {\mathcal {A}} ^0\), the Kullback–Leibler divergence \(K\big ({{\mathbb {P}}}_{\mathbf{0}},{{\mathbb {P}}}_{A}\big )\) between \({{\mathbb {P}}}_{\mathbf{0}}\) and \({{\mathbb {P}}}_{A}\) satisfies
where we have used Assumption 9. From (57) we deduce that the condition
is satisfied if \(\gamma >0\) is chosen as a sufficiently small numerical constant. In view of (56) and (58), the application of Theorem 2.5 in [25] implies
for some absolute constant \(\beta \in (0,1)\).
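For completeness, the Kullback–Leibler computation behind this step can be sketched as follows, assuming the trace regression model \(Y_i=\langle X_i,A\rangle +\xi _i\) with \(\xi _i\sim {\mathcal {N}}(0,\sigma ^2)\) independent of \(X_i\) (a sketch; the inequality uses the bound on the sampling probabilities from Assumption 9):

```latex
% Sketch: for Gaussian noise, conditioning on the design X_i gives
K\bigl({{\mathbb {P}}}_{\mathbf{0}},{{\mathbb {P}}}_{A}\bigr)
  \;=\; \frac{1}{2\sigma ^2}\sum _{i\in \Omega }{\mathbb {E}}\,\langle X_i, A\rangle ^2
  \;\le\; \frac{n\,\mu _1}{2\sigma ^2\, m_1m_2}\,\Vert A\Vert _2^2 ,
% where the last inequality uses pi_{jk} <= mu_1/(m_1 m_2) (Assumption 9).
```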
We now prove the lower bound relative to the corruptions. Let \({\bar{\mathcal {A}}}\subset {\mathcal {A}}\) be such that for any \(A=L+S\in {\bar{\mathcal {A}}}\) we have \(L={\mathbf {0}}\). The Varshamov–Gilbert bound (cf. Lemma 2.9 in [25]) guarantees the existence of a subset \({\mathcal {A}} ^0\subset {\bar{\mathcal {A}}}\) with cardinality \({\mathrm {Card}}({\mathcal {A}} ^0) \ge 2^{(sm_1)/8}+1\) containing the zero \(m_1\times m_2\) matrix \(\mathbf{0}\) and such that, for any two distinct elements \(A_1\) and \(A_2\) of \({\mathcal {A}} ^0\),
For any \(A\in {\mathcal {A}} ^0\), the Kullback–Leibler divergence between \({{\mathbb {P}}}_{\mathbf{0}}\) and \({{\mathbb {P}}}_{A}\) satisfies
which implies that condition (58) is satisfied if \(\gamma >0\) is chosen small enough. Thus, applying Theorem 2.5 in [25] we get
for some absolute constant \(\beta \in (0,1)\). Theorem 2 follows from inequalities (55), (59) and (61).
The proof of Theorem 3 follows the same lines as that of Theorem 2. The only difference is that we replace \({\tilde{\mathcal {S}}}\) by the following set
We omit further details here.
Appendix C: Proof of Lemma 6
Part (i) of Lemma 6 is proved in Lemmas 5 and 6 in [18].
Proof of (ii)
For the sake of brevity, we set \(X_i(j,k) = \langle X_i, e_j(m_1)e_k(m_2)^{\top }\rangle \). By definition of \(\Sigma \) and \(\Vert \cdot \Vert _{2,\infty }\), we have
For any fixed k, we have
where \(\Xi = (\xi _1,\ldots ,\xi _n)^\top \) and \(A_k\in {\mathbb {R}}^{|\Omega |\times |\Omega |}\) with entries
We freeze the \(X_i\) and apply the version of the Hanson–Wright inequality in [24] to get that there exists a numerical constant C such that, with probability at least \(1-e^{-t}\),
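In the form we use it, the Hanson–Wright inequality can be sketched as follows (conditionally on the \(X_i\); here \(\Vert A_k\Vert \) and \(\Vert A_k\Vert _2\) denote the operator and Frobenius norms, and C is a numerical constant):

```latex
% Hanson--Wright inequality (version of [24]), conditionally on the X_i:
% for the Gaussian vector Xi = (xi_1, ..., xi_n)^T and any t > 0,
{\mathbb {P}}\Bigl( \bigl|\Xi ^\top A_k \Xi - {\mathbb {E}}[\Xi ^\top A_k \Xi ]\bigr|
   \ge C\sigma ^2\bigl( \Vert A_k\Vert _2\sqrt{t} + \Vert A_k\Vert \, t \bigr) \Bigr)
   \;\le\; 2e^{-t}.
```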
Next, we note that
where we have used the Cauchy–Schwarz inequality in the first line and the relation \(X^{2}_{i} (j,k)=X_{i} (j,k)\).
Note that \(Z_i(k):= \sum _{j= 1}^{m_1} X_i(j,k)\) follows a Bernoulli distribution with parameter \(\pi _{\cdot k}\) and consequently \(Z(k) = \sum _{i\in \Omega } Z_{i}(k)\) follows a Binomial distribution \(B(|\Omega |,\pi _{\cdot k})\). We apply Bernstein’s inequality (see, e.g., [4, page 486]) to get that, for any \(t>0\),
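In a standard form (a sketch, using the variance bound \({\mathrm {Var}}(Z(k))\le |\Omega |\,\pi _{\cdot k}\)), Bernstein's inequality for the Binomial variable Z(k) reads:

```latex
% Bernstein's inequality for Z(k) ~ B(|Omega|, pi_{.k}): for any t > 0,
{\mathbb {P}}\Bigl( Z(k) \;\ge\; |\Omega |\,\pi _{\cdot k}
   + \sqrt{2\,|\Omega |\,\pi _{\cdot k}\,t} + \frac{t}{3} \Bigr) \;\le\; e^{-t}.
```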
Consequently, we get with probability at least \(1-2e^{-t}\) that
and, using \(\Vert A_k\Vert \le \Vert A_k\Vert _{2}\), that
Note also that
Combining the last three displays with (63) we get, up to a rescaling of the constants, with probability at least \(1-e^{-t}\) that
Replacing t by \(t+\log m_2\) in the above display and using the union bound gives that, with probability at least \(1-e^{-t}\),
Assuming that \(\log m_2 \ge 1\) we get with probability at least \(1-e^{-t}\) that
Using (14), we get that there exists a numerical constant \(C>0\) such that, with probability at least \(1 -e^{-t}\),
Finally, we use Lemma 17 to obtain the required bound on \({\mathbb {E}}\Vert \Sigma \Vert _{2,\infty }\).
Proof of (iii)
We follow the same lines as in the proof of part (ii) above. The only difference is that \(\xi _i\) is replaced by \(\epsilon _i\), \(\sigma \) by 1, and N by n.
Proof of (iv)
We need to establish the bound on
For any fixed k, we have
The first term on the right hand side of the last display can be written as
Using the concentration bound on Z(k) in the proof of part (ii) above, we get that, with probability at least \(1-e^{-t}\),
Next, the random variable
is a U-statistic of order 2. We use now a Bernstein-type concentration inequality for U-statistics. To this end, we set \(X_i(\cdot ,k) = (X_i(1,k),\ldots , X_i(m_1,k))^\top \) and
Let \(e_0(m_1) = {\mathbf {0}}_{m_1}\) be the zero vector in \({\mathbb {R}}^{m_1}\). Note that \(X_i(\cdot ,k)\) takes values in \(\{ e_j(m_1),\, 0\le j \le m_1\}\). For any function \(g\,:\, \{ e_j(m_1),\, 0\le j \le m_1\}^2 \rightarrow {\mathbb {R}}\), we set \(\Vert g\Vert _{L^{\infty }} = \max _{0\le j_1,j_2\le m_1}|g(e_{j_1}(m_1),e_{j_2}(m_1))|\).
We will need the following quantities to control the tail behavior of \(U_2\):
where \(X_i'(\cdot ,k)\) are independent replications of \(X_i(\cdot ,k)\) and f, \(g\,:\, {\mathbb {R}}^{m_1} \rightarrow {\mathbb {R}}\).
We now evaluate the above quantities in our particular setting. It is not hard to see that \({\mathbf {A}} =\max \{ \pi _{\cdot k}^{(2)} \,,\,1 - \pi _{\cdot k}^{(2)} \}\le 1\) where \(\pi _{\cdot k}^{(2)}= \sum _{j=1}^{m_1}\pi _{jk}^2 \). We also have that
where we have used in the second line that \(\langle X_{i_1}(\cdot ,k),X_{i_2}'(\cdot ,k)\rangle ^2 \!=\! \langle X_{i_1}(\cdot ,k),X_{i_2}'(\cdot ,k)\rangle \) since \(\langle X_{i_1}(\cdot ,k),X_{i_2}'(\cdot ,k)\rangle \) takes values in \(\{0,1\}\).
We now derive a bound on \({\mathbf {D}}\). By Jensen’s inequality, we get
where we used the bound \({\mathbb {E}} [\sum _{i}f_{i}^2(X_i(\cdot ,k))]\le 1\). Thus, the Cauchy–Schwarz inequality implies
where we have used the fact that \({\mathbb {E}} [h^2(X_{i_1},X_{i_2}')] \le \sum _{j=1}^{m_1}\pi _{jk}^2\) following from an argument similar to that used to bound \({\mathbf {C}}\).
Finally, we get a bound on \({\mathbf {B}}\). Set \(\pi _{0,k} = 1 - \pi _{\cdot k}\). Note first that
By symmetry, we obtain the same bound on \(\Vert \sum _{i_2} {\mathbb {E}} h^2(\cdot ,X_{i_2}(\cdot ,k)) \Vert _{L^\infty } \). Thus we have
Set now \(U_2 = \sum _{i_1\ne i_2}h(X_{i_1}(\cdot ,k), X_{i_2}(\cdot ,k)).\) We apply a decoupling argument (see, for instance, Theorem 3.4.1, page 125 in [11]) to get that there exists a constant \(C>0\) such that, for any \(u>0\),
where \(X_i'(\cdot ,k)\) is independent of \(X_i(\cdot ,k)\) and has the same distribution as \(X_i(\cdot ,k)\). Next, Theorem 3.3 in [13] gives that, for any \(u>0\),
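The shape of the resulting tail bound can be sketched as follows (constants suppressed; the exact form in [13] differs only in numerical constants):

```latex
% Bernstein-type tail for the decoupled U-statistic of order 2
% (shape of Theorem 3.3 in [13]): for all u > 0,
{\mathbb {P}}\Bigl( \Bigl|\sum _{i_1\ne i_2}
    h\bigl(X_{i_1}(\cdot ,k),X'_{i_2}(\cdot ,k)\bigr)\Bigr| \ge u \Bigr)
  \;\le\; C\exp \Bigl( -\frac{1}{C}\min \Bigl\{
      \frac{u^2}{{\mathbf {C}}^2},\; \frac{u}{{\mathbf {D}}},\;
      \Bigl(\frac{u}{{\mathbf {B}}}\Bigr)^{2/3},\;
      \Bigl(\frac{u}{{\mathbf {A}}}\Bigr)^{1/2} \Bigr\} \Bigr).
```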
for some absolute constant \(C>0\). Combining the last display with our bounds on \({\mathbf {A}},{\mathbf {B}},{\mathbf {C}},{\mathbf {D}}\), we get that for any \(t>0\), with probability at least \(1- 2e^{-t}\),
where \(C>0\) is a numerical constant. Combining the last display with (64) we get that, for any \(t>0\) with probability at least \(1-3e^{-t}\),
Set \(\pi _{\max } = \max _{1\le k \le m_2}\{\pi _{\cdot k}\}\) and \(\pi ^{(2)}_{\max } = \max _{1\le k \le m_2}\{\pi _{\cdot k}^{(2)}\}\). Using the union bound and up to a rescaling of the constants, we get that, with probability at least \(1 - e^{-t}\),
Recall that \(\vert \Omega \vert =n\) and \(\ae =N/n\). Assumption 5 and the fact that \(n\le \vert \mathcal {I}\vert \) imply that there exists a numerical constant \(C>0\) such that, with probability at least \(1 -e^{-t}\),
where we have used that \(\pi _{jk} \le \pi _{\cdot k} \le \sqrt{2}\gamma /m_2\). Finally, the bound on the expectation \({\mathbb {E}} \Vert W\Vert _{2,\infty }\) follows from this result and Lemma 17.
Appendix D: Proof of Lemma 10
With the notation \(X_i(j,k) = \langle X_i, e_j(m_1)e_k(m_2)^{\top }\rangle \) we have
Under Assumption 3, the Orlicz norm \(\Vert \xi _i\Vert _{\psi _2}= \inf \{x>0: {\mathbb {E}}[\exp ((\xi _i/x)^2)] \le e\}\) satisfies \(\Vert \xi _i\Vert _{\psi _2} \le c\sigma \) for some numerical constant \(c>0\) and all i. This and the relation (see Lemma 5.5 in [26])
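The relation in question is the standard moment growth of sub-Gaussian variables, which can be sketched as follows (constants depend on the convention chosen for \(\Vert \cdot \Vert _{\psi _2}\)):

```latex
% Sub-Gaussian moment growth (cf. Lemma 5.5 in [26]):
% if ||xi_i||_{psi_2} <= K, then for every integer l >= 1,
\bigl( {\mathbb {E}}\,|\xi _i|^{\ell } \bigr)^{1/\ell } \;\le\; C K \sqrt{\ell },
\qquad \text{so that}\qquad
{\mathbb {E}}\,|\xi _i|^{\ell } \;\le\; \ell ^{\ell /2} (CK)^{\ell }.
```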
imply that \(N^{-\ell }{\mathbb {E}}[|\xi _i|^\ell X_i^\ell (j,k)] = N^{-\ell } {\mathbb {E}}[X_i(j,k)] {\mathbb {E}}[|\xi _i|^\ell ]\le (\ell !/2) c^2v(c\sigma /N)^{\ell -2}\) for all \(\ell \ge 2\) and \(v=\frac{\sigma ^2\mu _1}{N^2 m_1m_2}\), where we have used the independence between \(\xi _i\) and \(X_i\), and Assumption 9. Thus, for any fixed (j, k), we have
and
Thus, we can apply Bernstein's inequality (see, e.g., [4, page 486]), which yields
for any fixed (j, k). Replacing here t by \(t+\log (m_1m_2)\) and using the union bound we obtain
The bound on \({\mathbb {E}}[\Vert \Sigma \Vert _{\infty }]\) in the statement of Lemma 10 follows from this inequality and Lemma 17. The same argument proves the bounds on \(\Vert \Sigma _R\Vert _{\infty }\) and \({\mathbb {E}} \Vert \Sigma _R\Vert _{\infty }\) in the statement of Lemma 10. By a similar (and even somewhat simpler) argument, we also get that
while Assumption 9 implies that \(\Vert {\mathbb {E}}[W]\Vert _{\infty } \le \frac{\mu _1}{\ae m_1m_2}\).
Appendix E: Technical Lemmas
Lemma 17
Let Y be a non-negative random variable. Let there exist \(A\ge 0\), and \(a_j>0\), \(\alpha _j>0\) for \(1\le j\le m\), such that
Then
where \(\Gamma (\cdot )\) is the Gamma function.
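One standard form of this lemma, consistent with the change of variables used in the proof below, is the following sketch: if \({\mathbb {P}}(Y \ge A + t) \le \sum _{j=1}^m e^{-a_j t^{\alpha _j}}\) for all \(t>0\), then integrating the tail bound yields

```latex
% Integrating the tail: E(Y - A)_+ = int_0^infty P(Y >= A + t) dt,
{\mathbb {E}}\,Y \;\le\; A + \sum _{j=1}^{m}
   a_j^{-1/\alpha _j}\, \Gamma \bigl( 1 + 1/\alpha _j \bigr),
% since int_0^infty exp(-a t^alpha) dt = a^{-1/alpha} Gamma(1 + 1/alpha).
```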
Proof
Using the change of variable \( u = \sum _{j=1}^m a_j v^{\alpha _j} \) we get
\(\square \)
Lemma 18
Assume that \(\mathcal {R}\) is an absolute norm. Then
where \(W=\frac{1}{N}\sum _{i\in \Omega }X_{i}\).
Proof
In view of the definition of \(\mathcal {R}^{*}\),
where we have used the inequalities \(\left\langle X_i,\Delta L\right\rangle \le \Vert \Delta L\Vert _{\infty }\le 2{\mathbf {a}}\), and the fact that \(\mathcal {R}\) is an absolute norm. \(\square \)
Klopp, O., Lounici, K. & Tsybakov, A.B. Robust matrix completion. Probab. Theory Relat. Fields 169, 523–564 (2017). https://doi.org/10.1007/s00440-016-0736-y