1 Introduction

In recent years, there has been considerable interest in statistical inference for high-dimensional matrices. One particular problem is matrix completion, where one observes only a small number \(N \ll m_1m_2\) of the entries of a high-dimensional \(m_1\times m_2\) matrix \(L_0\) of rank \(r\) and aims at inferring the missing entries. In general, recovery of a matrix from a small number of observed entries is impossible, but, if the unknown matrix has low rank, then accurate and even exact recovery is possible. In the noiseless setting, [7, 14, 22] established the following remarkable result: assuming that the matrix \(L_0\) satisfies a low coherence condition, this matrix can be recovered exactly by constrained nuclear norm minimization, with high probability, from only \(N \gtrsim r \max \{m_1,m_2\} \log ^2(m_1+m_2)\) entries observed uniformly at random. A more common situation in applications corresponds to the noisy setting, in which the few available entries are corrupted by noise. Noisy matrix completion has been the focus of several recent studies (see, e.g., [5, 12, 17, 18, 20, 21, 23]).

The matrix completion problem is motivated by a variety of applications. An important question in applications is whether or not matrix completion procedures are robust to corruptions. Suppose that we observe noisy entries of \(A_0 = L_0 + S_0\), where \(L_0\) is an unknown low-rank matrix and \(S_0\) corresponds to some gross/malicious corruptions. We wish to recover \(L_0\), but we observe only a few entries of \(A_0\) and, among those, a fraction happens to be corrupted by \(S_0\). Of course, we do not know which entries are corrupted. It has been shown empirically that uncontrolled and potentially adversarial gross errors affecting only a small portion of the observations can be particularly harmful. For example, Xu et al. [16] showed that a very popular matrix completion procedure based on nuclear norm minimization can fail dramatically even if \(S_0\) contains only a single nonzero column. This is particularly relevant in applications to recommendation systems, where malicious users try to manipulate the outcome of matrix completion algorithms by introducing spurious perturbations \(S_0\). Hence, there is a need for new matrix completion techniques that are robust to the presence of corruptions \(S_0\).

With this motivation, we consider the following setting of robust matrix completion. Let \(A_0\in {\mathbb {R}}^{m_{1}\times m_{2}}\) be an unknown matrix that can be represented as a sum \(A_0=L_0+S_0\) where \(L_0\) is a low-rank matrix and \(S_0\) is a matrix with some low complexity structure such as entrywise sparsity or columnwise sparsity. We consider the observations \((X_i,Y_i), i=1,\dots ,N,\) satisfying the trace regression model

$$\begin{aligned} Y_{i}={\mathrm {tr}}(X_{i}^{T}A_{0})+\xi _{i},\quad \,i=1,\dots ,N, \end{aligned}$$
(1)

where \({\mathrm {tr}}(\mathrm{M})\) denotes the trace of matrix \(\mathrm{M}\). Here, the noise variables \(\xi _{i}\) are independent and centered, and \(X_{i}\) are \(m_1\times m_2\) matrices taking values in the set

$$\begin{aligned} {\mathcal {X}} = \{e_j(m_1)e_k^{T}(m_2),1\le j\le m_1, 1\le k\le m_2\}, \end{aligned}$$
(2)

where \(e_l(m)\), \(l=1,\dots ,m,\) are the canonical basis vectors in \({\mathbb {R}}^{m}\). Thus, we observe some entries of matrix \(A_0\) with random noise. Based on the observations \((X_i,Y_i)\), we wish to obtain accurate estimates of the components \(L_0\) and \(S_0\) in the high-dimensional setting \(N\ll m_1m_2\). Throughout the paper, we assume that \((X_1,\dots ,X_n)\) is independent of \((\xi _1,\dots ,\xi _n)\).

We assume that the set of indices i of our N observations is the union of two disjoint components \(\Omega \) and \({\tilde{\Omega }}\). The first component \(\Omega \) corresponds to the “non-corrupted” noisy entries of \(L_0\), i.e., to the observations, for which the entry of \(S_0\) is zero. The second set \({\tilde{\Omega }}\) corresponds to the observations, for which the entry of \(S_0\) is nonzero. Given an observation, we do not know whether it belongs to the corrupted or non-corrupted part of the observations and we have \(\vert \Omega \vert +\vert {\tilde{\Omega }}\vert =N\), where \(\vert \Omega \vert \) and \(\vert {\tilde{\Omega }}\vert \) are non-random numbers of non-corrupted and corrupted observations, respectively.
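
To make the observation scheme concrete, the following small simulation sketch generates data from model (1) with a columnwise sparse corruption matrix. It is purely illustrative: all dimensions, the noise level, and the sampling of \(\Omega \) (uniform over the non-corrupted entries) and \({\tilde{\Omega }}\) are hypothetical choices, not part of the formal setup.

```python
import numpy as np

rng = np.random.default_rng(0)
m1, m2, r, s = 50, 40, 3, 4          # dimensions, rank, corrupted columns (hypothetical values)
L0 = rng.normal(size=(m1, r)) @ rng.normal(size=(r, m2))   # low-rank component
S0 = np.zeros((m1, m2))
corrupted_cols = rng.choice(m2, size=s, replace=False)
S0[:, corrupted_cols] = rng.normal(size=(m1, s))           # columnwise sparse corruption
A0 = L0 + S0

n, n_corr, sigma = 600, 60, 0.1      # |Omega|, |Omega_tilde|, noise level (hypothetical)
clean_pos = np.argwhere(S0 == 0)     # indices in I (entries where S0 vanishes)
Omega = clean_pos[rng.integers(len(clean_pos), size=n)]
corr_pos = np.argwhere(S0 != 0)      # indices in I_tilde; in general chosen arbitrarily
Omega_tilde = corr_pos[rng.integers(len(corr_pos), size=n_corr)]

pos = np.vstack([Omega, Omega_tilde])                                # N observed positions
Y = A0[pos[:, 0], pos[:, 1]] + sigma * rng.normal(size=len(pos))     # model (1)
```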

A particular case of this setting is the matrix decomposition problem, where \(N=m_1m_2\), i.e., we observe all entries of \(A_0\). Several recent works consider the matrix decomposition problem, mostly in the noiseless setting \(\xi _i \equiv 0\). Chandrasekaran et al. [8] analyzed the case when the matrix \(S_0\) is sparse, with a small number of non-zero entries. They proved that exact recovery of \((L_0,S_0)\) is possible with high probability under additional identifiability conditions. This model was further studied by Hsu et al. [15], who gave milder conditions for the exact recovery of \((L_0,S_0)\). Also in the noiseless setting, Candes et al. [6] studied the same model but with positions of corruptions chosen uniformly at random. Xu et al. [16] studied a model in which the matrix \(S_0\) is columnwise sparse with a sufficiently small number of non-zero columns. Their method guarantees approximate recovery of the non-corrupted columns of the low-rank component \(L_0\). Agarwal et al. [1] considered a general model in which the observations are noisy realizations of a linear transformation of \(A_0\). Their setup includes the matrix decomposition problem and some other statistical models of interest but does not cover the matrix completion problem. Agarwal et al. [1] state a general result on approximate recovery of the pair \((L_0,S_0)\) imposing a “spikiness condition” on the low-rank component \(L_0\). Their analysis includes as particular cases both the entrywise corruptions and the columnwise corruptions.

The robust matrix completion setting, where \(N<m_1m_2\), was first considered by Candes et al. [6] in the noiseless case for entrywise sparse \(S_0\). Candes et al. [6] assumed that the support of \(S_0\) is selected uniformly at random and that N is equal to \(0.1m_1m_2\) or to some other fixed fraction of \(m_1m_2\). Chen et al. [9] also considered the noiseless case but with columnwise sparse \(S_0\). They proved that the same procedure as in [8] can recover the non-corrupted columns of \(L_0\) and identify the set of indices of the corrupted columns. This was done under the following assumptions: the locations of the non-corrupted columns are chosen uniformly at random; \(L_0\) satisfies some sparse/low-rank incoherence condition; the total number of corrupted columns is small; and a sufficient number of non-corrupted entries is observed. More recently, Chen et al. [10] and Li [27] considered noiseless robust matrix completion with entrywise sparse \(S_0\). They proved exact recovery of the low-rank component under an incoherence condition on \(L_0\) and some additional assumptions on the number of corrupted observations.

To the best of our knowledge, the present paper is the first study of robust matrix completion with noise. Our analysis is general and covers, in particular, the cases of columnwise sparse corruptions and entrywise sparse corruptions. It is important to note that we do not require strong assumptions on the unknown matrices, such as the incoherence condition, or additional restrictions on the number of corrupted observations as in the noiseless case. This is due to the fact that we do not aim at exact recovery of the unknown matrix. We emphasize that we need to know neither the rank of \(L_0\) nor the sparsity level of \(S_0\). We do not need to observe all entries of \(A_0\) either. We only need to know an upper bound on the maximum of the absolute values of the entries of \(L_0\) and \(S_0\). Such information is often available in applications; for example, in recommendation systems, this bound is just the maximum rating. Another important point is that our method allows us to consider a quite general and unknown sampling distribution. All the previous works on noiseless robust matrix completion assume the uniform sampling distribution. However, in practice the observed entries are not guaranteed to follow the uniform scheme and the sampling distribution is not exactly known.

We establish oracle inequalities for the cases of entrywise sparse and columnwise sparse \(S_0\). For example, in the case of columnwise corruptions, we prove the following bound on the normalized Frobenius error of our estimator \(({\hat{L}}, {\hat{S}})\) of \((L_0, S_0)\): with high probability

$$\begin{aligned} \frac{\Vert {\hat{L}} - L_0\Vert _2^2}{m_1m_2}+\dfrac{\Vert S_0-{\hat{S}}\Vert _2^{2}}{m_1m_2} \lesssim \dfrac{r\,\max (m_1,m_2)+\vert {\tilde{\Omega }}\vert }{\vert \Omega \vert }+\dfrac{s}{m_2} \end{aligned}$$

where the symbol \(\lesssim \) means that the inequality holds up to a multiplicative absolute constant and a factor that is logarithmic in \(m_1\) and \(m_2\). Here, \(r\) denotes the rank of \(L_0\), and \(s\) is the number of corrupted columns. Note that, when the number of corrupted columns \(s\) and the proportion of corrupted observations \(\vert {\tilde{\Omega }}\vert / \vert \Omega \vert \) are small, this bound implies that \(O(r\,\max (m_1,m_2))\) observations are enough for successful matrix completion that is robust to corruptions. We also show that, both under columnwise corruptions and entrywise corruptions, the obtained rates of convergence are minimax optimal up to logarithmic factors.

This paper is organized as follows. Section 2.1 contains the notation and definitions. We introduce our estimator in Sect. 2.2 and we state the assumptions on the sampling scheme in Sect. 2.3. Section 3 presents a general upper bound for the estimation error. In Sects. 4 and 5, we specialize this bound to the settings with columnwise corruptions and entrywise corruptions, respectively. In Sect. 6, we prove that our estimator is minimax rate optimal up to a logarithmic factor. The Appendix contains the proofs.

2 Preliminaries

2.1 Notation and definitions

2.1.1 General notation

For any set I, \(\vert I\vert \) denotes its cardinality and \({\bar{I}}\) its complement. We write \(a\vee b=\max (a,b)\) and \(a\wedge b=\min (a,b)\).

For a matrix A, \(A^{i}\) is its ith column and \(A_{ij}\) is its (i,j)th entry. Let \(I\subset \{1,\dots ,m_1\}\times \{1,\dots ,m_2\}\) be a subset of indices. Given a matrix A, we denote by \(A_{I}\) its restriction to I, that is, \(\left( A_{I}\right) _{ij}=A_{ij}\) if \((i,j)\in I\) and \(\left( A_{I}\right) _{ij}=0\) if \((i,j)\not \in I\). In what follows, \({\mathbf {Id}}\) denotes the matrix of ones, i.e., \({\mathbf {Id}}_{ij}=1\) for any (i,j), and \(\mathbf{0}\) denotes the zero matrix, i.e., \(\mathbf{0}_{ij}=0\) for any (i,j).

For any \(p\ge 1\), we denote by \(\Vert \cdot \Vert _{p}\) the usual \(l_p\)-norm. Additionally, we use the following matrix norms: \(\Vert A\Vert _{*}\) is the nuclear norm (the sum of singular values), \(\Vert A\Vert \) is the operator norm (the largest singular value), and \(\Vert A\Vert _{\infty }\) is the largest absolute value of the entries:

$$\begin{aligned} \Vert A\Vert _{\infty } = \max _{1\le j \le m_1,1\le k \le m_2}|A_{jk}|, \end{aligned}$$

the norm \(\Vert A\Vert _{2,1}\) is the sum of \(l_2\) norms of the columns of A and \(\Vert A\Vert _{2,\infty }\) is the largest \(l_2\) norm of the columns of A:

$$\begin{aligned} \Vert A\Vert _{2,1} =\sum _{k=1}^{m_2} \Vert A^{k}\Vert _2\quad \text {and}\quad \Vert A\Vert _{2,\infty } =\max _{1\le k\le m_2} \Vert A^{k}\Vert _2. \end{aligned}$$

The inner product of matrices A and B is defined by \(\langle A,B \rangle = {\mathrm {tr}}(AB^\top )\).
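
For quick reference, the norms above can be evaluated numerically as in the following sketch (an illustration only, assuming numpy; the variable names are ours).

```python
import numpy as np

A = np.arange(12, dtype=float).reshape(3, 4)

nuclear  = np.linalg.norm(A, 'nuc')          # ||A||_*      : sum of singular values
operator = np.linalg.norm(A, 2)              # ||A||        : largest singular value
sup_ent  = np.max(np.abs(A))                 # ||A||_inf    : largest absolute entry
two_one  = np.linalg.norm(A, axis=0).sum()   # ||A||_{2,1}  : sum of column l2 norms
two_inf  = np.linalg.norm(A, axis=0).max()   # ||A||_{2,inf}: largest column l2 norm
inner    = np.trace(A @ A.T)                 # <A, A> = tr(A A^T), the squared Frobenius norm
```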

2.1.2 Notation related to corruptions

We first introduce the index sets \(\mathcal {I}\) and \({\tilde{\mathcal {I}}}\). These are subsets of \(\{1,\dots ,m_1\}\times \{1,\dots ,m_2\}\) that are defined differently for the settings with columnwise sparse and entrywise sparse corruption matrix \(S_0\).

For the columnwise sparse matrix \(S_0\), we define

$$\begin{aligned} {\tilde{\mathcal {I}}}=\{1,\dots ,m_1\}\times J \end{aligned}$$
(3)

where \(J\subset \{1,\dots ,m_2\}\) is the set of indices of the non-zero columns of \(S_0\). For the entrywise sparse matrix \(S_0\), we denote by \({\tilde{\mathcal {I}}}\) the set of indices of the non-zero elements of \(S_0\). In both settings, \(\mathcal {I}\) denotes the complement of \({\tilde{\mathcal {I}}}\).

Let \(\mathcal {R}\,:\,{\mathbb {R}}^{m_1\times m_2}\rightarrow {\mathbb {R}}_{+}\) be a norm that will be used as a regularizer relative to the corruption matrix \(S_0\). The associated dual norm is defined by the relation

$$\begin{aligned} \mathcal {R}^{*}(A)=\underset{\mathcal {R}(B)\le 1}{\sup }\langle A,B\rangle . \end{aligned}$$
(4)

Let \(\vert A\vert \) denote the matrix whose entries are the absolute values of the entries of matrix A. The norm \(\mathcal {R}(\cdot )\) is called absolute if it depends only on the absolute values of the entries of A:

$$\begin{aligned} \mathcal {R}(A)=\mathcal {R}(\vert A\vert ). \end{aligned}$$

For instance, the \(l_p\)-norm and the \(\Vert \cdot \Vert _{2,1}\)-norm are absolute. We call \(\mathcal {R}(\cdot )\) monotonic if \(\vert A\vert \le \vert B\vert \) implies \(\mathcal {R}(A)\le \mathcal {R}(B)\). Here and below, the inequalities between matrices are understood as entry-wise inequalities. Any absolute norm is monotonic and vice versa (see, e.g., [3]).

2.1.3 Specific notation

  • We set \(d=m_1+m_2\), \(m=m_1\wedge m_2\), and \(M=m_1\vee m_2\).

  • Let \(\{\epsilon _i\}_{i=1}^{n}\) be a sequence of i.i.d. Rademacher random variables. We define the following random variables called the stochastic terms:

    $$\begin{aligned} \Sigma _R=\dfrac{1}{n} \sum _{i\in \Omega } \epsilon _i X_i, \quad \Sigma =\dfrac{1}{N} \sum _{i\in \Omega } \xi _iX_i,\quad \text {and} \qquad W=\dfrac{1}{N} \sum _{i\in \Omega } X_i. \end{aligned}$$
  • We denote by r the rank of matrix \(L_0\).

  • We denote by N the number of observations, and by \(n=\vert \Omega \vert \) the number of non-corrupted observations. The number of corrupted observations is \(\vert {\tilde{\Omega }}\vert = N-n\). We set \(\ae =N/n\).

  • We use the generic symbol C for positive constants that do not depend on \(n, m_1, m_2,r,s\) and can take different values at different appearances.

2.2 Convex relaxation for robust matrix completion

For the usual matrix completion, i.e., when the corruption matrix \(S_0=\mathbf {0}\), one of the most popular methods of solving the problem is based on constrained nuclear norm minimization. For example, the following constrained matrix Lasso estimator is introduced in [18]:

$$\begin{aligned} {\hat{A}}\in {\underset{ \left\| A\right\| _{\infty }\le {\mathbf {a}}}{\mathop {\hbox {arg min}}}} \left\{ \dfrac{1}{n}\sum \limits ^{n}_{i=1}\left( Y_i-\left\langle X_i,A\right\rangle \right) ^{2}+\lambda \Vert A\Vert _*\right\} , \end{aligned}$$

where \(\lambda >0\) is a regularization parameter and \({\mathbf {a}}\) is an upper bound on \(\left\| L_0\right\| _{\infty }\).

To account for the presence of non-zero corruptions \(S_0\), we introduce an additional norm-based penalty that should be chosen depending on the structure of \(S_0\). We consider the following estimator \(({\hat{L}},{\hat{S}})\) of the pair \((L_0,S_0)\):

$$\begin{aligned} ({\hat{L}},{\hat{S}})\in \underset{^{\left\| L\right\| _{\infty }\le {\mathbf {a}}}_{\left\| S\right\| _{\infty }\le {\mathbf {a}}}}{\mathop {\hbox {arg min}}} \left\{ \dfrac{1}{N}\sum ^{N}_{i=1} \left( Y_i-\left\langle X_i,L+S\right\rangle \right) ^{2}+\lambda _{1} \Vert L\Vert _* +\lambda _{2}{\mathcal {R}}(S)\right\} . \end{aligned}$$
(5)

Here \(\lambda _1>0\) and \(\lambda _2>0\) are regularization parameters and \({\mathbf {a}}\) is an upper bound on \(\left\| L_0\right\| _{\infty }\) and \(\left\| S_0\right\| _{\infty }\). Note that this definition and all the proofs can be easily adapted to the setting with two different upper bounds for \(\left\| L_0\right\| _{\infty }\) and \(\left\| S_0\right\| _{\infty }\), as may be the case in some applications; the results of the paper extend to this setting as well.

For the following two key examples of sparsity structure of \(S_0\), we consider specific regularizers \(\mathcal {R}\).

  • Example 1. Suppose that \(S_0\) is columnwise sparse, that is, it has a small number \(s<m_2\) of non-zero columns. We use the \(\Vert \cdot \Vert _{2,1}\)-norm regularizer for such a sparsity structure: \(\mathcal {R}(S)=\Vert S\Vert _{2,1}.\) The associated dual norm is \(\mathcal {R}^{*}(S)=\Vert S\Vert _{2,\infty }\).

  • Example 2. Suppose now that \(S_0\) is entrywise sparse, that is, that it has \(s\ll m_1m_2\) non-zero entries. The usual choice of regularizer for such a sparsity structure is the \(l_1\) norm: \(\mathcal {R}(S)=\Vert S\Vert _{1}\). The associated dual norm is \(\mathcal {R}^{*}(S)=\Vert S \Vert _{\infty }\).

In these two examples, the regularizer \(\mathcal {R}\) is decomposable with respect to a properly chosen set of indices I. That is, for any matrix \(A\in {\mathbb {R}}^{m_1\times m_2}\) we have

$$\begin{aligned} \mathcal {R}(A)=\mathcal {R}(A_{I})+\mathcal {R}(A_{{\bar{I}}}). \end{aligned}$$
(6)

For instance, the \(\Vert \cdot \Vert _{2,1}\)-norm is decomposable with respect to any set I such that

$$\begin{aligned} I=\{1,\dots ,m_1\}\times J \end{aligned}$$
(7)

where \(J\subset \{1,\dots ,m_2\}\). The usual \(l_1\) norm is decomposable with respect to any subset of indices I.
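
The estimator (5) is a convex program and can be computed by standard first-order methods. The sketch below is a minimal illustration (not the algorithm used in this paper): it performs proximal-gradient steps jointly on (L, S), using singular-value soft-thresholding for the nuclear-norm penalty on L and either columnwise (Example 1) or entrywise (Example 2) soft-thresholding for \(\mathcal {R}(S)\); the step size is ad hoc, and clipping to the \(\Vert \cdot \Vert _{\infty }\)-ball after the proximal step is only an approximation of the exact constrained update.

```python
import numpy as np

def svt(M, tau):
    """Singular value soft-thresholding: prox of tau * nuclear norm."""
    U, sv, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(sv - tau, 0.0)) @ Vt

def prox_l21(M, tau):
    """Columnwise soft-thresholding: prox of tau * ||.||_{2,1} (Example 1)."""
    norms = np.linalg.norm(M, axis=0, keepdims=True)
    return M * np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)

def prox_l1(M, tau):
    """Entrywise soft-thresholding: prox of tau * ||.||_1 (Example 2)."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def robust_completion(pos, Y, m1, m2, lam1, lam2, a, prox_R=prox_l21,
                      n_iter=500, step=None):
    """Projected proximal-gradient sketch for the convex program (5).

    pos : (N, 2) integer array of observed positions, Y : (N,) observations.
    The clipping step only approximates the exact projection of the penalized
    updates onto the ||.||_inf balls.
    """
    N = len(Y)
    if step is None:
        step = N / 4.0                   # ad hoc step size, not tuned
    rows, cols = pos[:, 0], pos[:, 1]
    L = np.zeros((m1, m2))
    S = np.zeros((m1, m2))
    for _ in range(n_iter):
        resid = (L + S)[rows, cols] - Y
        grad = np.zeros((m1, m2))
        np.add.at(grad, (rows, cols), 2.0 * resid / N)   # gradient of the data-fit term
        L = np.clip(svt(L - step * grad, step * lam1), -a, a)
        S = np.clip(prox_R(S - step * grad, step * lam2), -a, a)
    return L, S
```

For the columnwise setting of Sect. 4 one would pass prox_l21 as prox_R, and prox_l1 for the entrywise setting of Sect. 5.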

2.3 Assumptions on the sampling scheme and on the noise

In the literature on the usual matrix completion (\(S_0=\mathbf {0}\)), it is commonly assumed that the observations \(X_i\) are i.i.d. For robust matrix completion, it is more realistic to assume the presence of two subsets in the observed \(X_i\). The first subset \(\{X_{i},\,i\in \Omega \}\) is a collection of i.i.d. random matrices with some unknown distribution on

$$\begin{aligned} {\mathcal {X}}' = \{e_j(m_1)e_k^{T}(m_2),(j,k)\in {\mathcal {I}}\}. \end{aligned}$$
(8)

These \(X_i\)’s are of the same type as in the usual matrix completion. They are the X-components of the non-corrupted observations (recall that the entries of \(S_0\) corresponding to indices in \({\mathcal {I}}\) are equal to zero). On this non-corrupted part of the observations, we require some assumptions on the sampling distribution (see Assumptions 1, 2, 5, and 9 below).

The second subset \(\{X_{i},\;i\in {\tilde{\Omega }}\}\) is a collection of matrices with values in

$$\begin{aligned} {\mathcal {X}}'' = \{e_j(m_1)e_k^{T}(m_2),(j,k)\in {\tilde{\mathcal {I}}}\}. \end{aligned}$$

These are the X-components of corrupted observations. Importantly, we make no assumptions on how they are sampled. Thus, for any \(i\in {\tilde{\Omega }}\), we have that the index of the corresponding entry belongs to \({\tilde{\mathcal {I}}}\) and we make no further assumption. If we take the example of recommendation systems, this partition into \(\{X_{i},\,i\in \Omega \}\) and \(\{X_{i},\,i\in {\tilde{\Omega }}\}\) accounts for the difference in behavior of normal and malicious users.

As there is no hope of recovering the unobserved entries of \(S_0\), one should consider only the estimation of the restriction of \(S_0\) to \({\tilde{\Omega }}\). This is equivalent to assuming that we estimate the whole matrix \(S_0\) while all unobserved entries of \(S_0\) are equal to zero, cf. [9]. This assumption is made throughout the paper.

For \(i\in \Omega \), we suppose that the \(X_i\) are i.i.d. realizations of a random matrix X having distribution \(\Pi \) on the set \({\mathcal {X}}'\). Let \(\pi _{jk}={\mathbb {P}}(X=e_j(m_1)e_k^{T}(m_2))\) be the probability to observe the (j,k)th entry. A particular setting of this problem is the case where \(\Pi \) is the uniform distribution on \({\mathcal {X}}'\). It was previously considered in the context of noiseless robust matrix completion, see, e.g., [9]. We consider here a more general sampling model. In particular, we suppose that any non-corrupted element is sampled with positive probability:

Assumption 1

There exists a positive constant \(\mu \ge 1\) such that, for any \((j,k)\in {\mathcal {I}}\),

$$\begin{aligned}\pi _{jk}\ge (\mu \vert \mathcal {I}\vert )^{-1}.\end{aligned}$$

If \(\Pi \) is the uniform distribution on \({\mathcal {X}}'\), we have \(\mu =1\). For \(A\in {\mathbb {R}}^{m_1\times m_2}\), set

$$\begin{aligned} \Vert A\Vert _{L_2(\Pi )}^{2}={\mathbb {E}}(\langle A,X\rangle ^{2}). \end{aligned}$$

Assumption 1 implies that

$$\begin{aligned} \Vert A\Vert ^{2} _{L_2(\Pi )}\ge \left( \mu \,\vert \mathcal {I}\vert \right) ^{-1}\Vert A_{{\mathcal {I}}}\Vert ^{2} _{2}. \end{aligned}$$
(9)

Denote by \(\pi _{\cdot k}=\sum _{j=1}^{{m_1}}\pi _{jk}\) the probability to observe an element from the kth column and by \(\pi _{j\cdot }=\sum _{k=1}^{{m_2}}\pi _{jk}\) the probability to observe an element from the jth row. The following assumption requires that no column and no row is sampled with too high probability.

Assumption 2

There exists a positive constant \(L\ge 1\) such that

$$\begin{aligned} \underset{j,k}{\max }(\pi _{\cdot k},\pi _{j\cdot })\le L/m. \end{aligned}$$

This assumption will be used in Theorem 1 below. In Sects. 4 and 5, we apply Theorem 1 to the particular cases of columnwise sparse and entrywise sparse corruptions. There, we will need more restrictive assumptions on the sampling distribution (see Assumptions 5 and 9).

We assume below that the noise variables \(\xi _i\) are sub-gaussian:

Assumption 3

There exist positive constants \(\sigma \) and \(c_1\) such that

$$\begin{aligned} \underset{i=1,\dots ,n}{\max }{\mathbb {E}}\exp (\xi ^{2}_i/\sigma ^{2})< c_1. \end{aligned}$$

3 Upper bounds for general regularizers

In this section, we state our main result, which applies to the general convex program (5), where \(\mathcal {R}\) is an absolute norm and a decomposable regularizer. In the next sections, we consider in detail two particular choices, \(\mathcal {R}(\cdot )=\Vert \cdot \Vert _{2,1}\) (Sect. 4) and \(\mathcal {R}(\cdot )=\Vert \cdot \Vert _{1}\) (Sect. 5). We introduce the following notation:

$$\begin{aligned} \Psi _{1}= & {} \mu ^{2}\,m_1m_2\,r\,(\ae ^{2}\lambda ^{2}_1+{\mathbf {a}}^{2}\left( {\mathbb {E}}\left( \Vert \Sigma _R\Vert \right) \right) ^{2})+{\mathbf {a}}^{2}\,\mu \, \sqrt{\dfrac{\log (d)}{n}},\nonumber \\ \Psi _2= & {} \mu \,{\mathbf {a}}\,\mathcal {R}({\mathbf {Id}}_{\tilde{\Omega }}) \left( \dfrac{\lambda _2\,{\mathbf {a}}}{\lambda _1}{\mathbb {E}}\left( \Vert \Sigma _R\Vert \right) +\ae \lambda _2+{\mathbf {a}}\,{\mathbb {E}}\left( \mathcal {R}^{*}(\Sigma _R)\right) \right) ,\nonumber \\ \Psi _3= & {} \dfrac{\mu \,\vert {\tilde{\Omega }}\vert \left( {\mathbf {a}}^{2}+\sigma ^{2}\log (d)\right) }{N}\left( \dfrac{{\mathbf {a}}\,{\mathbb {E}}\left( \Vert \Sigma _R\Vert \right) }{\lambda _1}+\dfrac{{\mathbf {a}}\,{\mathbb {E}}\left( \mathcal {R}^{*}(\Sigma _R)\right) }{\lambda _2}+\ae \right) +\dfrac{{\mathbf {a}}^{2}\vert {\tilde{\mathcal {I}}}\vert }{m_1m_2},\nonumber \\ \Psi _4= & {} \mu \,{\mathbf {a}}^{2}\,\sqrt{\dfrac{\log (d)}{n}}+\mu \, {\mathbf {a}}\,\mathcal {R}({\mathbf {Id}}_{\tilde{\Omega }})[\ae \,\lambda _{2}+ {\mathbf {a}}\,{\mathbb {E}}( \mathcal {R}^{*}(\Sigma _R))]\nonumber \\&+\left[ \dfrac{{\mathbf {a}}\,{\mathbb {E}}\left( \mathcal {R}^{*}(\Sigma _R)\right) }{\lambda _2}+\ae \right] \frac{\mu \,\vert {\tilde{\Omega }}\vert ({\mathbf {a}}^{2}+\sigma ^{2}\log (d))}{N} \end{aligned}$$
(10)

where \(d=m_1+m_2\).

Theorem 1

Let \(\mathcal {R}\) be an absolute norm and a decomposable regularizer. Assume that \(\left\| L_0\right\| _{\infty }\le {\mathbf {a}}\), \(\left\| S_0\right\| _{\infty }\le {\mathbf {a}}\) for some constant \({\mathbf {a}}\), and let Assumptions 1–3 be satisfied. Let \(\lambda _1>4\left\| \Sigma \right\| \) and \(\lambda _2\ge 4\left( \mathcal {R}^{*}(\Sigma )+2{\mathbf {a}}\mathcal {R}^{*}(W)\right) \). Then, with probability at least \(1-4.5\,d^{-1}\),

$$\begin{aligned} \dfrac{\Vert L_0-{\hat{L}}\Vert _2^{2}}{m_1m_2}+\dfrac{\Vert S_0-{\hat{S}}\Vert _2^{2}}{m_1m_2}\le & {} C\,\left\{ \Psi _1+ \Psi _2+\Psi _3\right\} \end{aligned}$$
(11)

where C is an absolute constant. Moreover, with the same probability,

$$\begin{aligned} \dfrac{\Vert {\hat{S}}_{\mathcal {I}}\Vert ^{2}_2}{\vert \mathcal {I}\vert }\le & {} C\Psi _4. \end{aligned}$$
(12)

The term \(\Psi _{1}\) in (11) corresponds to the estimation error associated with matrix completion of a rank r matrix. The second and third terms account for the error induced by the corruptions. In the next two sections, we apply Theorem 1 to the settings with columnwise sparse and entrywise sparse corruption matrices \(S_0\).

4 Columnwise sparse corruptions

In this section, we assume that \(S_0\) has at most s non-zero columns, and \(s\le m_2/2\). We use here the \(\Vert \cdot \Vert _{2,1}\)-norm regularizer \(\mathcal {R}\). Then, the convex program (5) takes the form

$$\begin{aligned} ({\hat{L}},{\hat{S}})\in \underset{^{\left\| L\right\| _{\infty }\le {\mathbf {a}}}_{\left\| S\right\| _{\infty }\le {\mathbf {a}}} }{\mathop {\hbox {arg min}}}\left\{ \dfrac{1}{N}\sum ^{N}_{i=1} \left( Y_i-\left\langle X_i,L+S\right\rangle \right) ^{2}+\lambda _{1} \Vert L\Vert _* +\lambda _{2}\Vert S\Vert _{2,1}\right\} . \end{aligned}$$
(13)

Since \(S_0\) has at most s non-zero columns, we have \(\vert {\tilde{\mathcal {I}}}\vert =m_1s\). Furthermore, by the Cauchy–Schwarz inequality, \(\Vert {\mathbf {Id}}_{\tilde{\Omega }}\Vert _{2,1}\le \sqrt{s\vert {\tilde{\Omega }}\vert }\) (the at most s non-zero columns of \({\mathbf {Id}}_{\tilde{\Omega }}\) contain at most \(\vert {\tilde{\Omega }}\vert \) non-zero entries in total). Using these remarks, we replace \(\Psi _2\), \(\Psi _3\) and \(\Psi _4\) by the larger quantities

$$\begin{aligned} \Psi '_2= & {} \mu \,{\mathbf {a}}\,\sqrt{s\vert {\tilde{\Omega }}\vert } \left( \dfrac{{\mathbf {a}}\,\lambda _2}{\lambda _1}{\mathbb {E}}\left( \Vert \Sigma _R\Vert \right) +\ae \lambda _2+{\mathbf {a}}\,{\mathbb {E}}\Vert \Sigma _R\Vert _{2,\infty }\right) ,\\ \Psi '_3= & {} \dfrac{\mu \,\vert {\tilde{\Omega }}\vert \left( {\mathbf {a}}^{2}+\sigma ^{2}\log (d)\right) }{N}\left( \dfrac{{\mathbf {a}}{\mathbb {E}}\left( \Vert \Sigma _R\Vert \right) }{\lambda _1}+\dfrac{{\mathbf {a}}{\mathbb {E}}\Vert \Sigma _R\Vert _{2,\infty }}{\lambda _2}+\ae \right) +\dfrac{{\mathbf {a}}^{2}s}{m_2},\\ \Psi '_4= & {} \mu \,{\mathbf {a}}^{2}\,\sqrt{\dfrac{\log (d)}{n}}+\mu \,{\mathbf {a}}\, \sqrt{s\vert {\tilde{\Omega }}\vert }\left[ \ae \,\lambda _{2}+ {\mathbf {a}}\, {\mathbb {E}}\Vert \Sigma _R\Vert _{2,\infty }\right] \\&+\left[ \dfrac{{\mathbf {a}}\,{\mathbb {E}}\Vert \Sigma _R\Vert _{2,\infty }}{\lambda _2}+\ae \right] \frac{\mu \,\vert {\tilde{\Omega }}\vert \left( {\mathbf {a}}^{2}+\sigma ^{2}\log (d)\right) }{N}. \end{aligned}$$

Specializing Theorem 1 to this case yields the following corollary.

Corollary 4

Assume that \(\left\| L_{0}\right\| _{\infty }\le {\mathbf {a}}\) and \(\left\| S_{0}\right\| _{\infty }\le {\mathbf {a}}\). Let the regularization parameters \((\lambda _1,\lambda _2)\) satisfy

$$\begin{aligned} \lambda _1>4\left\| \Sigma \right\| \quad \;\text {and}\quad \;\lambda _2\ge 4\left( \Vert \Sigma \Vert _{2,\infty }+2{\mathbf {a}}\Vert W\Vert _{2,\infty }\right) . \end{aligned}$$

Then, with probability at least \(1-4.5\,d^{-1}\), for any solution \(({\hat{L}},{\hat{S}})\) of the convex program (13) with such regularization parameters \((\lambda _1,\lambda _2)\) we have

$$\begin{aligned} \dfrac{\Vert L_0-{\hat{L}}\Vert _2^{2}}{m_1m_2}+\dfrac{\Vert S_0-{\hat{S}}\Vert _2^{2}}{m_1m_2}\le & {} C\,\left\{ \Psi _1+ \Psi '_2+\Psi '_3\right\} . \end{aligned}$$

where C is an absolute constant. Moreover, with the same probability,

$$\begin{aligned} \dfrac{\Vert {\hat{S}}_{\mathcal {I}}\Vert ^{2}_2}{\vert \mathcal {I}\vert }\le & {} C\Psi '_4. \end{aligned}$$

In order to get a bound in a closed form, we need to obtain suitable upper bounds on the stochastic terms \(\Sigma \), \(\Sigma _{R}\) and W. We derive such bounds under an additional assumption on the column marginal sampling distribution. Set \(\pi _{\cdot ,k}^{(2)} =\sum _{j=1}^{m_1} \pi _{jk}^2\).

Assumption 5

There exists a positive constant \(\gamma \ge 1\) such that

$$\begin{aligned} \underset{k}{\max }\;\pi _{\cdot ,k}^{(2) } \le \frac{\gamma ^{2}}{\vert \mathcal {I}\vert \,m_2}. \end{aligned}$$

This condition prevents the columns from being sampled with too high probability and guarantees that the non-corrupted observations are well spread out among the columns. Assumption 5 is clearly less restrictive than assuming that \(\Pi \) is uniform, as was done in the previous work on noiseless robust matrix completion. In particular, Assumption 5 is satisfied when the distribution \(\Pi \) is approximately uniform, i.e., when \(\pi _{jk} \asymp \frac{1}{m_1(m_2-s)}\). Note that Assumption 5 implies the following milder condition on the marginal sampling distribution:

$$\begin{aligned} \underset{k}{\max }\;\pi _{\cdot k}\le \frac{\sqrt{2}\,\gamma }{m_2}. \end{aligned}$$
(14)

Condition (14) is sufficient to control \(\Vert \Sigma \Vert _{2,\infty }\) and \(\Vert \Sigma _R\Vert _{2,\infty }\), while we need the stronger Assumption 5 to control \(\Vert W\Vert _{2,\infty }\).

The following lemma gives the order of magnitude of the stochastic terms driving the rates of convergence.

Lemma 6

Let the distribution \(\Pi \) on \({\mathcal {X}}'\) satisfy Assumptions 1, 2 and 5. Let also Assumption 3 hold. Assume that \(N\le m_1m_2\), \(n\le \vert \mathcal {I}\vert \), and \(\log m_2 \ge 1\). Then, there exists an absolute constant \(C>0\) such that, for any \(t>0\), the following bounds on the norms of the stochastic terms hold with probability at least \(1-e^{-t}\), as well as the associated bounds in expectation.

  (i)
    $$\begin{aligned}&\left\| \Sigma \right\| \le C\sigma \max \left( \sqrt{\dfrac{L(t+\log d)}{\ae \,Nm}}, \frac{(\log m)(t+\log d)}{N}\right) \quad \text {and}\\&{\mathbb {E}}\left\| \Sigma _{R}\right\| \le C\left( \sqrt{\dfrac{L\log (d)}{nm}}+\frac{\log ^{2} d}{N}\right) ; \end{aligned}$$
  (ii)
    $$\begin{aligned}&\left\| \Sigma \right\| _{2,\infty }\le C\sigma \left( \sqrt{\frac{\gamma (t+\log (d))}{\ae N m_2} }+ \frac{t+\log d}{N}\right) \quad \text {and}\\&{\mathbb {E}}\left\| \Sigma \right\| _{2,\infty }\le C\sigma \,\left( \sqrt{\frac{\gamma \log (d)}{\ae N m_2} }+ \frac{\log d}{N}\right) ; \end{aligned}$$
  (iii)
    $$\begin{aligned}&\left\| \Sigma _R\right\| _{2,\infty }\le C\left( \sqrt{\frac{\gamma (t+\log (d))}{n m_2} }+ \frac{t+\log d}{n}\right) \quad \text {and}\\&{\mathbb {E}}\left\| \Sigma _R\right\| _{2,\infty }\le C \left( \sqrt{\frac{\gamma \log (d)}{n m_2} }+ \frac{\log d}{n}\right) ; \end{aligned}$$
  (iv)
    $$\begin{aligned}&\Vert W\Vert _{2,\infty }\le C \left( \frac{\gamma (t+\log m_2)^{1/4}}{\sqrt{\ae N m_2}}\left( 1+\sqrt{\frac{m_2(t+\log m_2)}{n}}\right) ^{1/2} + \frac{t+\log m_2}{N} \right) \\&{\mathbb {E}}\Vert W\Vert _{2,\infty }\le C \left( \frac{\gamma \log ^{1/4}(d)}{\sqrt{\ae N m_2}}\left( 1+ \sqrt{\frac{m_2\log d}{n}}\right) ^{1/2} + \frac{\log d}{N} \right) . \end{aligned}$$

Let

$$\begin{aligned} n^{*}=2\,\log (d) \left( \dfrac{m_2}{\gamma }\vee \frac{m\,\log ^{2}m}{L}\right) . \end{aligned}$$
(15)

Recall that \(\ae =\frac{N}{n}\ge 1\). If \(n\ge n^{*}\), using the bounds given by Lemma 6, we can choose the regularization parameters \(\lambda _1\) and \(\lambda _2\) in the following way:

$$\begin{aligned} \lambda _1=C\left( \sigma \vee {\mathbf {a}}\right) \sqrt{\dfrac{L\, \log (d)}{Nm}}\quad \text {and} \quad \lambda _2=C\,\gamma \left( \sigma \vee {\mathbf {a}}\right) \sqrt{\dfrac{\log (d)}{Nm_2}}, \end{aligned}$$
(16)

where \(C>0\) is a large enough numerical constant.
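
For illustration, the choice (16) can be computed directly from the problem dimensions. In the sketch below, the numerical constant C, as well as L, \(\gamma \), \(\sigma \) and \({\mathbf {a}}\), are placeholder values: the theory specifies C only as a large enough absolute constant.

```python
import numpy as np

def lambdas_columnwise(N, m1, m2, sigma, a, C=2.0, L=1.0, gamma=1.0):
    """Regularization parameters of (16); C, L and gamma are illustrative placeholders."""
    d, m = m1 + m2, min(m1, m2)
    lam1 = C * max(sigma, a) * np.sqrt(L * np.log(d) / (N * m))
    lam2 = C * gamma * max(sigma, a) * np.sqrt(np.log(d) / (N * m2))
    return lam1, lam2
```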

With this choice of the regularization parameters, Corollary 4 implies the following result.

Corollary 7

Let the distribution \(\Pi \) on \({\mathcal {X}}'\) satisfy Assumptions 1, 2 and 5. Let Assumption 3 hold and \(\left\| L_{0}\right\| _{\infty }\le {\mathbf {a}}\), \(\left\| S_{0}\right\| _{\infty }\le {\mathbf {a}}\). Assume that \(N\le m_1m_2\) and \(n^{*}\le n\). Then, with probability at least \(1-6/d\), for any solution \(({\hat{L}},{\hat{S}})\) of the convex program (13) with the regularization parameters \((\lambda _1,\lambda _2)\) given by (16), we have

$$\begin{aligned} \dfrac{\Vert L_0-{\hat{L}}\Vert _2^{2}}{m_1m_2}+\dfrac{\Vert S_0-{\hat{S}}\Vert _2^{2}}{m_1m_2}\le & {} C_{\mu ,\gamma , L}(\sigma \vee {\mathbf {a}})^{2}\log (d)\,\ae \dfrac{r\,M+\vert {\tilde{\Omega }}\vert }{n}+\dfrac{{\mathbf {a}}^{2}s}{m_2}\qquad \end{aligned}$$
(17)

where \(C_{\mu ,\gamma ,L}>0\) can depend only on \(\mu ,\gamma ,L\). Moreover, with the same probability,

$$\begin{aligned} \dfrac{\Vert {\hat{S}}_{\mathcal {I}}\Vert ^{2}_2}{\vert \mathcal {I}\vert }\le C_{\mu ,\gamma , L} \dfrac{\ae (\sigma \vee {\mathbf {a}})^{2}\,\vert {\tilde{\Omega }}\vert \,\log (d)}{n}+\dfrac{{\mathbf {a}}^{2}s}{m_2}. \end{aligned}$$

Remarks

1. The upper bound (17) can be decomposed into two terms. The first term is proportional to \(rM/n\). It is of the same order as in the case of the usual matrix completion, see [18, 20]. The second term accounts for the corruption. It is proportional to the number of corrupted columns s and to the number of corrupted observations \(\vert {\tilde{\Omega }}\vert \). This term vanishes if there is no corruption, i.e., when \(S_0=\mathbf {0}\).

2. If all entries of \(A_0\) are observed, i.e., the matrix decomposition problem is considered, the bound (17) is analogous to the corresponding bound in [1]. Indeed, then \(\vert {\tilde{\Omega }}\vert =sm_1\), \(N=m_1m_2\), \(\ae \le 2\), and we get

$$\begin{aligned} \dfrac{\Vert L_0-{\hat{L}}\Vert _2^{2}}{m_1m_2}+\dfrac{\Vert S_0-{\hat{S}}\Vert _2^{2}}{m_1m_2}&\lesssim (\sigma \vee {\mathbf {a}})^{2}\left( \dfrac{r\,M}{m_1m_2}+\dfrac{s}{m_2}\right) . \end{aligned}$$

The estimator studied in [1] for the matrix decomposition problem is similar to our program (13). The difference between these estimators is that in (13) the minimization is over \(\Vert \cdot \Vert _{\infty }\)-balls, while the program of [1] uses minimization over \(\Vert \cdot \Vert _{2,\infty }\)-balls and requires the knowledge of a bound on the norm \(\Vert L_0 \Vert _{2,\infty }\) of the unknown matrix \(L_0\).

3. Suppose that the number of corrupted columns is small (\(s\ll m_2\)). Then, Corollary 7 guarantees that the prediction error of our estimator is small whenever the number of non-corrupted observations n satisfies the following condition

$$\begin{aligned} n\gtrsim (m_1\vee m_2)\mathrm{rank}(L_0)+\vert {\tilde{\Omega }}\vert \end{aligned}$$
(18)

where \(\vert {\tilde{\Omega }}\vert \) is the number of corrupted observations. This quantifies the sample size sufficient for successful (robust to corruptions) matrix completion. When the rank r of \(L_0\) is small and \(s\ll m_2\), the right hand side of (18) is considerably smaller than the total number of entries \(m_1m_2\).

4. By changing the numerical constants, one can show that the upper bound (17) holds with probability \(1-6d^{-\alpha }\) for any \(\alpha \ge 1\).

5 Entrywise sparse corruptions

We now assume that \(S_0\) has s non-zero entries, but they do not necessarily lie in a small subset of columns. We will also assume that \(s\le \frac{m_1m_2}{2}\). We now use the \(l_1\)-norm regularizer \(\mathcal {R}\). Then the convex program (5) takes the form

$$\begin{aligned} ({\hat{L}},{\hat{S}})\in \underset{^{\left\| L\right\| _{\infty }\le {\mathbf {a}}}_{\left\| S\right\| _{\infty }\le {\mathbf {a}}} }{\mathop {\hbox {arg min}}}\left\{ \dfrac{1}{N}\sum ^{N}_{i=1} \left( Y_i-\left\langle X_i,L+S\right\rangle \right) ^{2}+\lambda _{1} \Vert L\Vert _{*} +\lambda _{2}\Vert S\Vert _{1}\right\} . \end{aligned}$$
(19)

The support \({\tilde{\mathcal {I}}} = \{ (j,k)\,:\, (S_0)_{jk} \ne 0\}\) of the non-zero entries of \(S_0\) satisfies \(\vert {\tilde{\mathcal {I}}}\vert =s\). Also, \(\Vert {\mathbf {Id}}_{\tilde{\Omega }}\Vert _{1}=\vert {\tilde{\Omega }}\vert \), so that \(\Psi _2\), \(\Psi _3\), and \(\Psi _4\) take the form

$$\begin{aligned} \Psi ''_2= & {} \mu \,{\mathbf {a}}\,\vert {\tilde{\Omega }}\vert \left( \dfrac{{\mathbf {a}}\,\lambda _2}{\lambda _1}{\mathbb {E}}\left( \Vert \Sigma _R\Vert \right) +\ae \lambda _2+{\mathbf {a}}\,{\mathbb {E}}\Vert \Sigma _R\Vert _{\infty }\right) ,\\ \Psi ''_3= & {} \dfrac{\mu \,\vert {\tilde{\Omega }}\vert ({\mathbf {a}}^{2}+\sigma ^{2}\log (d))}{N}\left( \dfrac{{\mathbf {a}}{\mathbb {E}}\left( \Vert \Sigma _R\Vert \right) }{\lambda _1}+\dfrac{{\mathbf {a}}{\mathbb {E}}\Vert \Sigma _R\Vert _{\infty }}{\lambda _2}+\ae \right) +\dfrac{{\mathbf {a}}^{2}s}{m_1m_2},\\ \Psi ''_4= & {} \mu \,{\mathbf {a}}^{2}\,\sqrt{\dfrac{\log (d)}{n}}+\mu \,{\mathbf {a}}\,\vert {\tilde{\Omega }}\vert \left[ \ae \,\lambda _{2}+ {\mathbf {a}}\,{\mathbb {E}}\Vert \Sigma _R\Vert _{\infty }\right] \\&+\left[ \dfrac{{\mathbf {a}}\,{\mathbb {E}}\Vert \Sigma _R\Vert _{\infty }}{\lambda _2}+\ae \right] \frac{\mu \,\vert {\tilde{\Omega }}\vert \left( {\mathbf {a}}^{2}+\sigma ^{2}\log (d)\right) }{N}. \end{aligned}$$

Specializing Theorem 1 to this case yields the following corollary:

Corollary 8

Assume that \(\left\| L_{0}\right\| _{\infty }\le {\mathbf {a}}\) and \(\left\| S_{0}\right\| _{\infty }\le {\mathbf {a}}\). Let the regularization parameters \((\lambda _1,\lambda _2)\) satisfy

$$\begin{aligned} \lambda _1>4\left\| \Sigma \right\| \quad \;\text {and} \; \quad \lambda _2\ge 4\left( \Vert \Sigma \Vert _{\infty }+2{\mathbf {a}}\Vert W\Vert _{\infty }\right) . \end{aligned}$$

Then, with probability at least \(1-4.5\,d^{-1}\), for any solution \(({\hat{L}},{\hat{S}})\) of the convex program (19) with such regularization parameters \((\lambda _1,\lambda _2)\) we have

$$\begin{aligned} \dfrac{\Vert L_0-{\hat{L}}\Vert _2^{2}}{m_1m_2}+\dfrac{\Vert S_0-{\hat{S}}\Vert _2^{2}}{m_1m_2}\le & {} C\,\left\{ \Psi _1+ \Psi ''_2+\Psi ''_3\right\} \end{aligned}$$

where C is an absolute constant. Moreover, with the same probability,

$$\begin{aligned} \dfrac{\Vert {\hat{S}}_{\mathcal {I}}\Vert ^{2}_2}{\vert \mathcal {I}\vert }\le & {} C\Psi ''_4. \end{aligned}$$

In order to get a bound in a closed form, we need to obtain suitable upper bounds on the stochastic terms \(\Sigma \), \(\Sigma _{R}\) and W. We provide such bounds under the following additional assumption on the sampling distribution.

Assumption 9

There exists a constant \(\mu _1\ge 1\) such that

$$\begin{aligned} \underset{j,k}{\max }\;\pi _{jk}\le \frac{\mu _1}{\vert \mathcal {I}\vert }. \end{aligned}$$

This assumption prevents any entry from being sampled too often and guarantees that the observations are well spread out over the non-corrupted entries. Assumptions 1 and 9 imply that the sampling distribution \(\Pi \) is approximately uniform in the sense that \(\pi _{jk} \asymp \frac{1}{\vert \mathcal {I}\vert }\). In particular, since \(\vert \mathcal {I}\vert \ge \frac{m_1m_2}{2}\), Assumption 9 implies Assumption 2 with \(L=2\mu _1\).

Lemma 10

Let the distribution \(\Pi \) on \({\mathcal {X}}'\) satisfy Assumptions 1 and 9. Let also Assumption 3 hold. Then, there exists an absolute constant \(C>0\) such that, for any \(t>0\), the following bounds on the norms of the stochastic terms hold with probability at least \(1-e^{-t}\), as well as the associated bounds in expectation.

  (i)
    $$\begin{aligned} \Vert W\Vert _{\infty }\le & {} C \left( \frac{\mu _1}{\ae m_1m_2} +\sqrt{ \frac{\mu _1 (t+\log d)}{ \ae N m_1m_2}} +\frac{ t+\log d}{ N}\right) \quad \text {and}\\ {\mathbb {E}}\Vert W\Vert _{\infty }\le & {} C\left( \frac{\mu _1}{\ae m_1m_2} + \sqrt{ \frac{\mu _1 \log d}{ \ae N m_1m_2}} + \frac{\log d}{N}\right) ; \end{aligned}$$
  (ii)
    $$\begin{aligned} \left\| \Sigma \right\| _{\infty }\le & {} C \sigma \left( \sqrt{\frac{\mu _1(t+\log d)}{\ae Nm_1m_2}} + \frac{t+\log d}{N} \right) \quad \text {and}\\ {\mathbb {E}}\left\| \Sigma \right\| _{\infty }\le & {} C \sigma \left( \sqrt{\frac{\mu _1 \log d}{\ae N m_1 m_2}} + \frac{ \log d}{N} \right) ; \end{aligned}$$
  (iii)
    $$\begin{aligned} \left\| \Sigma _R\right\| _{\infty }\le & {} C \left( \sqrt{\frac{\mu _1(t+\log d)}{n\,m_1m_2}} + \frac{t+\log d}{n} \right) \quad \text {and}\\ {\mathbb {E}}\left\| \Sigma _R\right\| _{\infty }\le & {} C \left( \sqrt{\frac{\mu _1 \log d}{n\,m_1m_2}} + \frac{\log d}{n} \right) . \end{aligned}$$

Using Lemma 6(i) and Lemma 10, under the condition

$$\begin{aligned} \frac{m_1m_2\log d}{\mu _1}\ge & {} n\ge \frac{2\,m\log (d)\log ^{2}(m)}{L} \end{aligned}$$
(20)

we can choose the regularization parameters \(\lambda _1\) and \(\lambda _2\) in the following way:

$$\begin{aligned} \lambda _1= & {} C(\sigma \vee {\mathbf {a}})\sqrt{\dfrac{\mu _1\, \log (d)}{Nm}}\qquad \text {and} \nonumber \\ \lambda _2= & {} C(\sigma \vee {\mathbf {a}})\dfrac{\log (d)}{N}. \end{aligned}$$
(21)
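
Analogously to (16), the choice (21) can be written as a short helper; again, the constants C and \(\mu _1\) below are illustrative placeholders rather than values prescribed by the theory.

```python
import numpy as np

def lambdas_entrywise(N, m1, m2, sigma, a, C=2.0, mu1=1.0):
    """Regularization parameters of (21); C and mu1 are illustrative placeholders."""
    d, m = m1 + m2, min(m1, m2)
    lam1 = C * max(sigma, a) * np.sqrt(mu1 * np.log(d) / (N * m))
    lam2 = C * max(sigma, a) * np.log(d) / N
    return lam1, lam2
```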

With this choice of the regularization parameters, Corollary 8 and Lemma 10 imply the following result.

Corollary 11

Let the distribution \(\Pi \) on \({\mathcal {X}}'\) satisfy Assumptions 1 and 9. Let Assumption 3 hold and \(\left\| L_{0}\right\| _{\infty }\le {\mathbf {a}}\), \(\left\| S_{0}\right\| _{\infty }\le {\mathbf {a}}\). Assume that \(N\le m_1m_2\) and that condition (20) holds. Then, with probability at least \(1-6/d\), for any solution \(({\hat{L}},{\hat{S}})\) of the convex program (19) with the regularization parameters \((\lambda _1,\lambda _2)\) given by (21), we have

$$\begin{aligned} \dfrac{\Vert L_0-{\hat{L}}\Vert ^{2}_2}{ m_1m_2}+\dfrac{\Vert S_0-{\hat{S}}\Vert ^{2}_2}{m_1m_2}\le & {} C_{\mu ,\mu _1}\ae (\sigma \vee {\mathbf {a}})^{2}\log (d)\dfrac{r\,M+\vert {\tilde{\Omega }}\vert }{n}+\dfrac{{\mathbf {a}}^{2}s}{m_1m_2}\qquad \end{aligned}$$
(22)

where \(C_{\mu ,\mu _1}>0\) can depend only on \(\mu \) and \(\mu _1\). Moreover, with the same probability

$$\begin{aligned} \dfrac{\Vert {\hat{S}}_{\mathcal {I}}\Vert ^{2}_2}{\vert \mathcal {I}\vert }\le C_{\mu ,\mu _1}\dfrac{\ae (\sigma \vee {\mathbf {a}})^{2}\vert {\tilde{\Omega }}\vert \,\log (d)}{n}+\dfrac{{\mathbf {a}}^{2}s}{m_1m_2}. \end{aligned}$$

Remarks

1. As in the columnwise sparsity case, we can recognize two terms in the upper bound (22). The first term is proportional to \(rM/n\). It is of the same order as the rate of convergence for the usual matrix completion, see [18, 20]. The second term accounts for the corruptions and is proportional to the number s of nonzero entries in \(S_0\) and to the number of corrupted observations \(\vert {\tilde{\Omega }}\vert \). We will prove in Sect. 6 below that these error terms are of the correct order up to a logarithmic factor.

2. If \(s\ll n<m_1m_2\), the bound (22) implies that one can estimate a low-rank matrix from a nearly minimal number of observations, even when a part of the observations has been corrupted.

3. If all entries of \(A_0\) are observed, i.e., the matrix decomposition problem is considered, the bound (22) is analogous to the corresponding bound in [1]. Indeed, then \(\vert {\tilde{\Omega }}\vert \le s\), \(N=m_1m_2\), \(\ae \le 2\), and we get

$$\begin{aligned} \dfrac{\Vert L_0-{\hat{L}}\Vert _2^{2}}{m_1m_2}+\dfrac{\Vert S_0-{\hat{S}}\Vert _2^{2}}{m_1m_2}&\lesssim (\sigma \vee {\mathbf {a}})^{2}\left( \dfrac{r\,M}{m_1m_2}+\dfrac{s}{m_1m_2}\right) . \end{aligned}$$

6 Minimax lower bounds

In this section, we prove the minimax lower bounds showing that the rates attained by our estimator are optimal up to a logarithmic factor. We will denote by \(\inf _{({\hat{L}},{\hat{S}})}\) the infimum over all pairs of estimators \(({\hat{L}},{\hat{S}})\) for the components \(L_0\) and \(S_0\) in the decomposition \(A_0 = L_0 + S_0\) where both \({\hat{L}}\) and \({\hat{S}}\) take values in \({\mathbb {R}}^{m_1\times m_2}\). For any \(A_0\in {\mathbb {R}}^{m_1\times m_2}\), let \({\mathbb {P}}_{A_0}\) denote the probability distribution of the observations \((X_1,Y_1,\dots ,X_n,Y_n)\) satisfying (1).

We begin with the case of columnwise sparsity. For any matrix \(S \in {\mathbb {R}}^{m_1\times m_2}\), we denote by \(\Vert S\Vert _{2,0}\) the number of nonzero columns of S. For any integers \(0\le r\le \min (m_1,m_2)\), \(0\le s\le m_2\) and any \({{\mathbf {a}}}>0\), we consider the class of matrices

$$\begin{aligned} {\mathcal {A}}_{GS}(r,s,{{\mathbf {a}}})= & {} \left\{ A_0 = L_0 + S_0\in \,{\mathbb {R}}^{m_1\times m_2}:\, {\mathrm {rank}}(L_0)\le r,\,\Vert L_0\Vert _{\infty }\le {{\mathbf {a}}}, \right. \nonumber \\&\qquad \left. \text {and}\;\Vert S_0\Vert _{2,0}\,\le \, s\,,\Vert S_0\Vert _{\infty }\,\le \, {{\mathbf {a}}}\right\} \end{aligned}$$
(23)

and define

$$\begin{aligned} \psi _{GS}(N,r,s) = \left( \sigma \wedge {{\mathbf {a}}}\right) ^2 \left( \frac{Mr+\vert {\tilde{\Omega }}\vert }{n}+\frac{s}{m_2}\right) . \end{aligned}$$

The following theorem gives a lower bound on the estimation risk in the case of columnwise sparsity.

Theorem 2

Suppose that \(m_1,m_2\ge 2\). Fix \({{\mathbf {a}}}>0\) and integers \(1\le r\le \min (m_1,m_2)\) and \(1\le s\le m_2/2\). Let Assumption 9 be satisfied. Assume that \(Mr\le n\), \(\vert {\tilde{\Omega }}\vert \le sm_1\) and \(\ae \le 1+s/m_2\). Suppose that the variables \(\xi _i\) are i.i.d. Gaussian \({\mathcal {N}}(0,\sigma ^2)\), \(\sigma ^2>0\), for \(i=1,\dots ,n\). Then, there exist absolute constants \(\beta \in (0,1)\) and \(c>0\), such that

$$\begin{aligned} \inf _{({\hat{L}},{\hat{S}})}\sup _{\begin{array}{c} (L_0,S_0)\in \,{\mathcal {A}}_{GS}( r,s,{{\mathbf {a}}}) \end{array}} {\mathbb {P}}_{A_0}\left( \dfrac{\Vert {\hat{L}}-L_0\Vert _2^{2}}{m_1m_2}+\dfrac{\Vert {\hat{S}}-S_0\Vert _2^{2}}{m_1m_2} > c\psi _{GS}(N,r,s) \right) \ \ge \ \beta . \end{aligned}$$

We turn now to the case of entrywise sparsity. For any matrix \(S \in {\mathbb {R}}^{m_1\times m_2}\), we denote by \(\Vert S\Vert _{0}\) the number of nonzero entries of S. For any integers \(0\le r\le \min (m_1,m_2)\), \(0\le s \le m_1m_2/2\) and any \({{\mathbf {a}}}>0\), we consider the class of matrices

$$\begin{aligned}&{\mathcal {A}}_{S}(r,s,{{\mathbf {a}}})= \left\{ A_0 = L_0 + S_0\in \,{\mathbb {R}}^{m_1\times m_2}:\, {\mathrm {rank}}(L_0)\le r,\, \right. \\&\left. \qquad \Vert S_0\Vert _{0} \le s ,\, \Vert L_0\Vert _{\infty }\,\le {{\mathbf {a}}}, \Vert S_0\Vert _{\infty }\,\le \, {{\mathbf {a}}}\right\} \end{aligned}$$

and define

$$\begin{aligned} \psi _{S}(N,r,s) = (\sigma \wedge {{\mathbf {a}}})^2 \left\{ \frac{Mr + \vert {\tilde{\Omega }}\vert }{n}+\dfrac{s}{m_1m_2}\right\} . \end{aligned}$$

We have the following theorem for the lower bound in the case of entrywise sparsity.

Theorem 3

Assume that \(m_1,m_2\ge 2\). Fix \({{\mathbf {a}}}>0\) and integers \(1\le r\le \min (m_1,m_2)\) and \(1\le s \le m_1 m_2/2\). Let Assumption 9 be satisfied. Assume that \(Mr\le n\) and there exists a constant \(\rho >0\) such that \(\vert {\tilde{\Omega }}\vert \le \rho \,r\,M\). Suppose that the variables \(\xi _i\) are i.i.d. Gaussian \({\mathcal {N}}(0,\sigma ^2)\), \(\sigma ^2>0\), for \(i=1,\dots ,n\). Then, there exist absolute constants \(\beta \in (0,1)\) and \(c>0\), such that

$$\begin{aligned} \inf _{({\hat{L}},{\hat{S}})}\sup _{\begin{array}{c} (L_0,S_0)\in \, {\mathcal {A}}_{S}(r,s,{{\mathbf {a}}}) \end{array}} {\mathbb {P}}_{A_0}\bigg (\frac{\Vert {\hat{L}}-L_0\Vert ^2_{2}}{m_1m_2} + \frac{\Vert {\hat{S}}-S_0\Vert ^2_{2}}{m_1m_2} > c\psi _{S}(N,r,s) \bigg )\ \ge \ \beta .\nonumber \\ \end{aligned}$$
(24)