1 Introduction

Let \(\{\xi _i\in {\mathbb {R}}^n,\,\,i=1,\ldots ,m\}\) be a set of samples independently drawn from a Gaussian distribution \({\mathcal {N}}(\mu ,\Sigma )\). It is well known that the entries of the precision matrix (i.e., the inverse covariance matrix \(\Sigma ^{-1}\)) characterize conditional independence between pairs of variables. This is based on the fact, proved in Dempster (1972), that the i-th and j-th components of a random variable \(\xi \sim {\mathcal {N}}(\mu ,\Sigma )\) are conditionally independent given the remaining components if and only if \([\Sigma ^{-1}]_{ij}=0\). In practice, however, the true covariance matrix \(\Sigma \) is rarely known and must be estimated from the observations.

When the samples are fully observed, let \({\hat{\mu }}:=\frac{1}{m}\sum ^m_{i=1}\xi _i\); then the sample covariance matrix \({\hat{s}}=\frac{1}{m}\sum ^m_{i=1}(\xi _i-{\hat{\mu }})(\xi _i-{\hat{\mu }})^T\) is a widely used estimator. If the sample size m is larger than the dimension n, the sample covariance matrix \({\hat{s}}\) is in general positive definite, but the entries of the precision matrix estimator \({\hat{s}}^{-1}\) are typically all nonzero (Yuan and Lin 2007). Moreover, contemporary datasets are often high dimensional (\(n\gtrsim m\)), in which case the sample covariance matrix is rank deficient. Therefore, the sparse Gaussian graphical model (sGGM) was introduced (Yuan and Lin 2007):

$$\begin{aligned} \begin{array}{ll} \min \limits _{x\succ 0} -\mathop {\mathrm {log\,det}}\limits x+\langle {\hat{s}},x\rangle +\lambda \Vert x\Vert _1, \end{array} \end{aligned}$$
(1.1)

where \(\lambda >0\), \({\hat{s}}\) is the sample covariance matrix computed from the fully observed samples, and \(\Vert x\Vert _1 = \sum ^{n}_{i,j=1}|x_{ij}|\). The sparse Gaussian graphical model and its variants have been applied in many fields, such as climate networks (Zerenner et al. 2014), biological networks (Wang et al. 2016; Zhang et al. 2018), and traffic management (Sun et al. 2012). A large body of literature is devoted to algorithm design and analysis: interior point methods (Lu and Toh 2010; Yuan and Lin 2007), the block coordinate descent method (Friedman et al. 2008), the alternating direction method of multipliers (Yuan 2012), and Newton-type methods (Hsieh et al. 2014; Wang et al. 2010), to name just a few.
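As a quick illustration (an editorial sketch with synthetic placeholder data, not taken from the paper), the following NumPy snippet shows the rank deficiency of \({\hat{s}}\) when \(n>m\) and how, in the fully observed case, an \(\ell _1\)-penalized precision matrix estimate in the spirit of (1.1) can be computed with off-the-shelf software:

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)

# High-dimensional regime (n > m): the sample covariance matrix is rank deficient.
m, n = 50, 200
xi = rng.standard_normal((m, n))             # placeholder samples, one row per observation
s_hat = np.cov(xi, rowvar=False, bias=True)  # (1/m) * sum of centered outer products
print(np.linalg.matrix_rank(s_hat))          # at most m - 1 = 49 < n, so s_hat is singular

# With fully observed data (and m >> n), a sparse precision matrix estimate can be
# obtained from solvers such as scikit-learn's GraphicalLasso; note that its penalty
# convention may differ in detail from (1.1).
X = rng.multivariate_normal(mean=np.zeros(20), cov=np.eye(20), size=500)
precision_hat = GraphicalLasso(alpha=0.1).fit(X).precision_
```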

Data with missing values are ubiquitous in practice (Lounici 2014). The covariance matrix is an essential ingredient of many applications and algorithms (Pavez and Ortega 2021), yet only a few works study covariance estimation from data with missing values. Inverse probability weighting (IPW) approaches (Seaman and White 2013) are commonly applied to correct the bias of covariance estimates computed from partially observed data; the resulting estimator is usually called an IPW estimator. Under the assumption that the data are missing completely at random, Kolar and Xing (2012) and Lounici (2014) introduced different IPW estimators as substitutes for the sample covariance matrix. Recently, under more general assumptions on the missingness, new IPW estimators were proposed by Park and Lim (2019), Park et al. (2020), and Pavez and Ortega (2021). Moreover, precision matrix estimation based on IPW estimators has also been studied in Fan et al. (2019) and Kolar and Xing (2012).

It has been proved that problem (1.1) has a unique solution under the assumption that the covariance matrix estimate is positive semidefinite (Hsieh et al. 2014). However, IPW estimators are usually not positive semidefinite. When such an IPW estimator is taken as a surrogate for the sample covariance matrix, the objective function in problem (1.1) can be unbounded from below; hence problem (1.1) may fail to yield a precision matrix estimator when \({\hat{s}}\) is not positive semidefinite. Estimating the covariance matrix from data with missing values thus remains a challenging task.

In this paper, we consider the following Gaussian graphical model:

$$\begin{aligned} \begin{array}{ll} \min \limits _{x\in {\mathcal {C}}} -\mathop {\mathrm {log\,det}}\limits x+\langle {\hat{s}},x\rangle +\lambda \Vert x\Vert _{1,\mathrm{off}}, \end{array} \end{aligned}$$
(1.2)

where \({\hat{s}}\in {\mathbb {S}}^n\) is a given IPW estimator (not necessarily positive semidefinite), \(\Vert x\Vert _{1,\mathrm{off}}=\sum _{i\ne j}|x_{ij}|\) and \({\mathcal {C}}:=\{x\in {\mathbb {S}}^n: \Vert x\Vert \le \alpha \}\) with \(\Vert x\Vert :=(\sum ^n_{i,j}x^2_{ij})^{1/2}\) and \(\alpha >0\). The main motivations to study problem (1.2) are listed below:

  • Lower boundedness: the set \({\mathcal {C}}\), which is called a side constraint in Loh and Wainwright (2015), guarantees the existence of a unique global optimal solution.

  • Robustness: the proposed model (1.2) can be used to hedge against the risk arising from the uncertainty of the covariance matrix estimate. Specifically, there exists a positive scalar \(\mu \) such that problem (1.2) can be equivalently rewritten as the following penalized problem:

    $$\begin{aligned} \min \limits _{x\succ 0} -\mathop {\mathrm {log\,det}}\limits x+\langle {\hat{s}},x\rangle +\lambda \Vert x\Vert _{1,\mathrm{off}}+\mu \Vert x\Vert . \end{aligned}$$
    (1.3)

    Furthermore, by using the identity \(\mu \Vert x\Vert =\max _{\Vert s-{\hat{s}}\Vert \le \mu }\langle s-{\hat{s}},x \rangle \) (a short verification is given after this list), problem (1.3) can be equivalently reformulated as the following robust counterpart of the sparse Gaussian graphical model,

    $$\begin{aligned} \begin{array}{ll} \min \limits _{x\succ 0}\,\max \limits _{s\in {\mathcal {U}}} -\mathop {\mathrm {log\,det}}\limits x+\langle s,x\rangle +\lambda \Vert x\Vert _{1,\mathrm{off}}, \end{array} \end{aligned}$$

    where \({\mathcal {U}}\) is the uncertainty set defined as

    $$\begin{aligned} {\mathcal {U}}:=\left\{ s\in {\mathbb {S}}^n : \,\Vert s-{\hat{s}}\Vert \le \mu \right\} ,\,\,\mu >0. \end{aligned}$$

    Therefore, we refer to problem (1.2) as a robust Gaussian graphical model (rGGM) throughout this paper.
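For completeness, the identity used in the robustness argument above follows from the Cauchy–Schwarz inequality (this one-line verification is an editorial addition):

$$\begin{aligned} \max _{\Vert s-{\hat{s}}\Vert \le \mu }\langle s-{\hat{s}},x\rangle \le \max _{\Vert s-{\hat{s}}\Vert \le \mu }\Vert s-{\hat{s}}\Vert \,\Vert x\Vert =\mu \Vert x\Vert , \end{aligned}$$

with equality attained at \(s={\hat{s}}+\mu x/\Vert x\Vert \) (and at any feasible s when \(x=0\)).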

In addition to lower boundedness and robustness, another motivation for considering the rGGM is to pave the way for solving nonconvex regularized Gaussian graphical models [see e.g., Fan et al. (2019), Loh and Wainwright (2015)]. It is commonly accepted that nonconvex regularizers perform better than convex ones (Fan et al. 2016). Moreover, the widely used smoothly clipped absolute deviation [SCAD, Fan and Li (2001)] and minimax concave penalty [MCP, Zhang (2010)] regularizers can be written as the difference of the \(\ell _1\) norm and a smooth convex function (Ahn et al. 2017; Tang et al. 2020). Note that lower boundedness usually plays an essential role in the theoretical analysis of numerical algorithms. Therefore, a systematic study of the convex regularized Gaussian graphical model with a side constraint may provide a theoretical foundation for designing more efficient numerical algorithms for the nonconvex regularized Gaussian graphical models.

Note that the proposed Gaussian graphical model is a composite convex optimization problem:

$$\begin{aligned} \begin{array}{ll} \min \limits _{x\succ 0} -\mathop {\mathrm {log\,det}}\limits \,x+\left\langle {\hat{s}},x\right\rangle +p(x),\,\,\,p(x):=\lambda \Vert x\Vert _{1,\mathrm{off}}+{\mathbb {I}}_{{\mathcal {C}}}(x). \end{array} \end{aligned}$$
(1.4)

The alternating direction method of multipliers (ADMM) has been extensively used for solving structured composite convex optimization problems [see e.g. Fan et al. (2019), Yuan (2012), Yuan et al. (2020)]. Under certain calmness conditions, Han et al. (2018) show that the globally convergent semiproximal ADMM achieves a linear convergence rate. A systematic study of the linear convergence of various ADMM-type algorithms is given in Yuan et al. (2020). However, due to the side constraint \({\mathcal {C}}\), the linear convergence rate of ADMM-type algorithms for solving problem (1.2) cannot be obtained directly from Yuan et al. (2020, Table 4). Therefore, we investigate the stability of the solution mapping associated with problem (1.2). This facilitates the algorithm design and provides theoretical guarantees for various algorithms, such as the alternating direction method of multipliers and the proximal point algorithm.

The remaining parts of this paper are organized as follows. In Sect. 2, we present some definitions and preliminary results. In particular, we derive an explicit expression for the proximal mapping associated with the convex regularizer p and establish the Lipschitz continuity of the KKT solution mapping associated with problem (1.2). In Sect. 3, we present the alternating direction method of multipliers, its implementation details, and its convergence analysis for solving problem (1.2). We evaluate the numerical performance on synthetic and real data in Sect. 4 and conclude the paper in Sect. 5.

Here, we collect the frequently used notation of this paper. Let \({\mathbb {S}}^n\) (\({\mathbb {S}}^n_{+}\), \({\mathbb {S}}^n_{++}\)) be the space of \(n\times n\) real symmetric (positive semidefinite, positive definite) matrices. For a matrix \(x\in {\mathbb {S}}^n\), we use \(\Vert x\Vert _{1,\mathrm{off}}\) and \(\Vert x\Vert \) to denote its off-diagonal \(\ell _1\) norm and its Frobenius norm, i.e., \(\Vert x\Vert _{1,\mathrm{off}}=\sum _{i\ne j}|x_{ij}|\) and \(\Vert x\Vert =(\sum _{i,j}x^2_{ij})^{1/2}\). Let \({\mathbb {I}}_{C}\) denote the indicator function of a convex set \(C\subseteq {\mathbb {S}}^n\), that is, \({\mathbb {I}}_C(x) =0\) if \(x\in C\) and \(+\infty \) otherwise, and let \(\Pi _{C}(x)\) denote the Euclidean projection of x onto C.

2 Tools and definitions

In this section, we recall some definitions and present the results that will be used in the theoretical analysis and numerical implementation. Let \({\mathbb {X}}\), \({\mathbb {Y}}\) be two finite-dimensional real Hilbert spaces.

2.1 Proximal mapping

The proximal mapping associated with the regularizer p, a sum of two functions, plays a key role in the algorithm design.

Definition 2.1

(Moreau–Yosida regularization; Moreau 1965; Yosida 1964) Let \(f:{\mathbb {X}}\rightarrow \Re \cup \{+\infty \}\) be a proper, lower semicontinuous, convex function. The Moreau–Yosida regularization of f is given by

$$\begin{aligned} \Phi _f(x):=\min \limits _{z\in {\mathbb {X}}}\left\{ f(z)+\frac{1}{2}\Vert z-x\Vert ^2\right\} ,\,\,x\in {\mathbb {X}}. \end{aligned}$$
(2.1)

The unique minimizer in problem (2.1) is called the proximal mapping associated with f, which is defined as

$$\begin{aligned} \mathrm{Prox}_f(x):=\arg \min \limits _{z\in {\mathbb {X}}}\left\{ f(z)+\frac{1}{2}\Vert z-x\Vert ^2\right\} ,\,\,x\in {\mathbb {X}}. \end{aligned}$$

Lemma 2.1

Let \(p:{\mathbb {S}}^n\rightarrow {\mathbb {R}}\cup \{+\infty \}\) be defined by \(p(x)=\lambda \Vert x\Vert _{1,\mathrm{off}}+{\mathbb {I}}_{{\mathcal {C}}}(x)\), then one has

$$\begin{aligned} \mathrm{Prox}_{p}(x) = \Pi _{{\mathcal {C}}}\circ \mathrm{Prox}_{\lambda \Vert \cdot \Vert _{1,\mathrm{off}}}(x),\,\,\,\forall x\in {\mathbb {S}}^n. \end{aligned}$$

Proof

The proof is presented in Appendix 6.1. □
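As a concrete illustration of Lemma 2.1, a minimal NumPy sketch of \(\mathrm{Prox}_p\) (an editorial addition; the function names are ours) soft-thresholds the off-diagonal entries and then projects onto the Frobenius-norm ball \({\mathcal {C}}\):

```python
import numpy as np

def prox_l1_offdiag(x, lam):
    """Soft-threshold the off-diagonal entries of x by lam; the diagonal is left unchanged."""
    y = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
    np.fill_diagonal(y, np.diag(x))
    return y

def project_frobenius_ball(x, alpha):
    """Euclidean projection onto C = {x : ||x||_F <= alpha}."""
    nrm = np.linalg.norm(x)
    return x if nrm <= alpha else (alpha / nrm) * x

def prox_p(x, lam, alpha):
    """Prox of p(x) = lam * ||x||_{1,off} + indicator of C, via Lemma 2.1."""
    return project_frobenius_ball(prox_l1_offdiag(x, lam), alpha)
```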

2.2 Lipschitz-like properties

The Lipschitz-like properties of the solution mapping of a convex optimization problem can be employed to establish convergence rates for various algorithms. First, we consider the following linearly perturbed formulation:

$$\begin{aligned} \begin{array}{cl} \min \limits _{x,y} &{} f(x)+\left\langle {\hat{s}},x\right\rangle +p(y)-\left\langle (u,v),(x,y)\right\rangle \\ \mathrm{s.t.} &{} x-y+w = 0, \end{array} \end{aligned}$$
(2.2)

where \(f(x):=-\log \det x\) and \(p:{\mathbb {S}}^n\rightarrow \Re \cup \{+\infty \}\) is defined by (1.4). Define a mapping:

$$\begin{aligned} {\mathcal {K}}_{pert}\left( (x,y,z);(u,v,w)\right) =\left( \begin{array}{c} x-\mathrm{Prox}_f\left( x+u-z-{\hat{s}}\right) \\ y-\mathrm{Prox}_p\left( y+v+z\right) \\ x-y-w \end{array} \right) , \end{aligned}$$
(2.3)

and a multifunction \(\mathsf{Sol}:{\mathbb {S}}^n\times {\mathbb {S}}^n\times {\mathbb {S}}^n\rightrightarrows {\mathbb {S}}^n\times {\mathbb {S}}^n\times {\mathbb {S}}^n\):

$$\begin{aligned} \begin{array}{ll} \mathsf{Sol}(u,v,w)&{}:=\left\{ (x,y,z):{\mathcal {K}}_{pert}((x,y,z);(u,v,w))=0 \right\} \\ ~ &{}\,=\hbox {set of all } (x,y,z) \hbox { satisfying the KKT condition for problem } (2.2). \end{array} \end{aligned}$$
(2.4)

Proposition 2.1

The multifunction \(\mathsf{Sol}\) defined by (2.4) is Lipschitz continuous near the origin, i.e., there exist a neighborhood \({\mathcal {N}}\) of the origin and a positive scalar \(\kappa \) such that \(\mathsf{Sol}(u,v,w)\ne \emptyset \) for any \((u,v,w)\in {\mathcal {N}}\) and

$$\begin{aligned} \begin{array}{l} \left\| \mathsf{Sol}(u,v,w)-\mathsf{Sol}(u',v',w')\right\| \le \kappa \left\| (u,v,w)-(u',v',w')\right\| ,\,\,\forall \,(u,v,w),(u',v',w')\in {\mathcal {N}}. \end{array} \end{aligned}$$

Proof

The proof is given in Appendix 6.2. □

The proof of Proposition 2.1 follows the line of the proof of Zhang et al. (2020, Theorem 3.1). However, the regularizer p defined in (1.4) is no longer positively homogeneous, so it falls outside the framework presented in Zhang et al. (2020). Based on Proposition 2.1, the proximal point dual Newton algorithm (Zhang et al. 2020) can potentially be applied to solve problem (1.2).

To establish the linear convergence rate of the alternating direction method of multipliers for solving problem (1.2) based on the results presented in Han et al. (2018), we recall the definition of calmness of a set-valued mapping \(F:{\mathbb {X}}\rightrightarrows {\mathbb {Y}}\).

Definition 2.2

A set-valued mapping \(F:{\mathbb {X}}\rightrightarrows {\mathbb {Y}}\) is said to be calm at \(({\bar{x}},{\bar{y}})\in \mathrm{gph}F\) with modulus \(\kappa \ge 0\) if there exist neighborhoods \({\mathcal {N}}_{{{\bar{x}}}}\) of \({\bar{x}}\) and \({\mathcal {N}}_{{{\bar{y}}}}\) of \({\bar{y}}\) such that

$$\begin{aligned} F(x)\cap {\mathcal {N}}_{{{\bar{y}}}}\subseteq F({\bar{x}})+\kappa \left\| x-{\bar{x}}\right\| {\mathbb {B}},\,\,\forall \,x\in {\mathcal {N}}_{\bar{x}}, \end{aligned}$$

where \({\mathbb {B}}\) denotes the closed unit ball in \({\mathbb {Y}}\).

This definition was first introduced as pseudo-upper Lipschitz continuity in Ye and Ye (1997, Definition 2.8) and was coined calmness in Rockafellar and Wets (1998). It is well known that F is calm at \(({\bar{x}},{\bar{y}})\in \mathrm{gph}\,F\) if and only if its inverse set-valued mapping \(F^{-1}\) is metrically subregular [cf. Dontchev and Rockafellar (2004, Definition 3.1)] at \(({\bar{y}},{\bar{x}})\in {\mathrm{gph}}\,F^{-1}\).

3 ADMM algorithm

In this section, we present the alternating direction method of multipliers (ADMM) for solving problem (1.2) and prove that the proposed method is globally linearly convergent. By introducing an auxiliary variable, problem (1.2) can be reformulated as

$$\begin{aligned} \begin{array}{ll} \min \limits _{x,y} &{} f(x)+\left\langle {\hat{s}},x\right\rangle +p(y)\\ \mathrm{s.t.} &{} x-y=0. \end{array} \end{aligned}$$
(3.1)

For a given scalar \(\sigma >0\), the augmented Lagrangian function associated with problem (3.1) is defined by

$$\begin{aligned} {\mathbb {L}}_{\sigma }(x,y,z) =f(x)+\left\langle {\hat{s}}, x\right\rangle +p(y)+\langle x-y,z\rangle +\frac{\sigma }{2}\Vert x-y\Vert ^2. \end{aligned}$$

The Karush–Kuhn–Tucker (KKT) condition of problem (3.1) can be described as follows:

$$\begin{aligned} \nabla f(x)+z+ {\hat{s}} =0, \,\,\,0\in \partial p(y)-z,\,\,\,\hbox {and}\,\,\,x-y=0. \end{aligned}$$
(3.2)

The ADMM for problem (3.1) can be characterized as follows:

Algorithm 1 ADMM for solving problem (3.1)

3.1 Convergence analysis

The ADMM algorithm has been widely applied to convex regularized Gaussian graphical models, and its global convergence properties are well established. However, to the best of our knowledge, there is a lack of systematic analysis of the convergence rate of ADMM for solving the rGGM. In this part, we show that ADMM for solving problem (3.1) is globally and linearly convergent.

Define a KKT mapping \(R:{\mathbb {S}}^n\times {\mathbb {S}}^n\times {\mathbb {S}}^n\rightarrow {\mathbb {S}}^n\times {\mathbb {S}}^n\times {\mathbb {S}}^n\):

$$\begin{aligned} R(x,y,z)=\left( \begin{array}{c} x-\mathop {\mathrm {Prox}}\limits _f\left( x-z-{\hat{s}}\right) \\ y-\mathop {\mathrm {Prox}}\limits _p(y+z)\\ x-y \end{array} \right) . \end{aligned}$$
(3.3)

Then, it holds that

$$\begin{aligned} R(x,y,z)=0 \Longleftrightarrow (x,y,z) \hbox { satisfies the KKT condition } (3.2). \end{aligned}$$

That is, \(R^{-1}(0)\) is the set of KKT points and \(R^{-1}(0)=\mathsf{Sol}(0,0,0)\).

The following result plays a key role in establishing the linear convergence rate of ADMM. Its proof relies on several auxiliary results; the details are given in Appendix 6.2.

Proposition 3.1

Let R be the KKT mapping defined by (3.3); then \(R^{-1}\) is calm at the origin for the KKT point \(({\bar{x}},{\bar{y}},{\bar{z}})\) of problem (3.1).

Proof

From Definition 2.2, it is sufficient to show that there exist neighborhoods \({\mathcal {W}}\) of \(({\bar{x}},{\bar{y}},{\bar{z}})\) and \({\mathcal {V}}\) of the origin, and a positive scalar \(\kappa _0\), such that

$$\begin{aligned} R^{-1}(u,v,w)\cap {\mathcal {W}}\subseteq R^{-1}(0)+\kappa _0 \Vert (u,v,w)\Vert {\mathbb {B}},\,\,\,\forall (u,v,w)\in {\mathcal {V}}. \end{aligned}$$
(3.4)

Since the function f is strongly convex on any compact subset of \({\mathbb {S}}^n_{++}\), the KKT point of problem (3.1) is unique. The global Lipschitz continuity of the proximal mappings \(\mathrm{Prox}_f\) and \(\mathrm{Prox}_p\) implies that the mapping R is globally Lipschitz continuous. Therefore, there exists a neighborhood \({\mathcal {V}}_0\) of the origin such that for any \((u,v,w)\in {\mathcal {V}}_0\) there exists \((x,y,z)\in R^{-1}(u,v,w)\), that is,

$$\begin{aligned} \left( \begin{array}{c} x-u-\mathop {\mathrm {Prox}}\limits _f\left( x-z-{\hat{s}}\right) \\ y-v-\mathop {\mathrm {Prox}}\limits _p(y+z)\\ x-y-w \end{array} \right) =0. \end{aligned}$$

This, together with the definition of \({\mathcal {K}}_{pert}\), implies that

$$\begin{aligned} {\mathcal {K}}_{pert}\left( (x-u,y-v,z);(u,v,w+u-v)\right) =0. \end{aligned}$$

Consequently, it holds that \(\mathsf{Sol}(u,v,w+u-v)=\{(x-u,y-v,z)\}\). Let \(({\bar{x}},{\bar{y}},{\bar{z}})\) be the KKT point of problem (3.1), then \(R({\bar{x}},{\bar{y}},{\bar{z}})=0\) and \(\mathsf{Sol}(0,0,0)=\{({\bar{x}},{\bar{y}},{\bar{z}})\}\).

Set \({\mathcal {V}}={\mathcal {V}}_0\cap \{(u,v,w): (u,v,w+u-v)\in {\mathcal {N}}\}\ne \emptyset \) with \({\mathcal {N}}\) being described in Proposition 2.1. Then, it follows from Proposition 2.1 that

$$\begin{aligned} \left\| \mathsf{Sol}(u,v,w+u-v)-\mathsf{Sol}(0,0,0)\right\| = \left\| (x-u,y-v,z)-({\bar{x}},{\bar{y}},{\bar{z}})\right\| \le \kappa \left\| (u,v,w+u-v)\right\| . \end{aligned}$$

Therefore, for any \((u,v,w)\in {\mathcal {V}}\), one has

$$\begin{aligned} \begin{array}{rl} \left\| (x,y,z)-({\bar{x}},{\bar{y}},{\bar{z}})\right\| &{} \le \kappa \left\| (u,v,w+u-v)\right\| +\left\| (u,v,0)\right\| \\ ~ &{} \le (2\kappa +1)\left\| (u,v,w)\right\| . \end{array} \end{aligned}$$

This implies (3.4) holds. The proof is completed. □

Theorem 3.1

The infinite sequence \(\{(x^k,y^k,z^k)\}\) generated by Algorithm 1 converges globally and linearly to a KKT point of problem (3.1).

Proof

Since Algorithm 1 is essentially a special case of the semi-proximal ADMM studied in Han et al. (2018), we can obtain the conclusion directly from Proposition 3.1 and Han et al. (2018, Theorem 2). □

3.2 Implementation details

To implement the algorithm efficiently, we give the following details (a Python sketch assembling these steps is provided after the list):

  • Closed-form expression of \(y^{k+1}\): from the definition of the augmented Lagrangian function, one has

    $$\begin{aligned} y^{k+1}=\mathop {\mathrm {arg\,min}}\limits _y~{\mathbb {L}}_{\sigma }\left( x^k,y,z^k\right) = \mathop {\mathrm {arg\,min}}\limits _y~p(y)+\frac{\sigma }{2}\left\| y-x^k-z^k/\sigma \right\| ^2. \end{aligned}$$

    Specifically, we have

    (a)

      if \(p(x)=\lambda \Vert x\Vert _{1,\mathrm{off}}\), then \(y^{k+1}= \mathrm{Prox}_{\sigma ^{-1}\lambda \Vert \cdot \Vert _{1,\mathrm{off}}}(x^k+z^k/\sigma )\);

    (b)

      if \(p(x)=\lambda \Vert x\Vert _{1,\mathrm{off}}+{\mathbb {I}}_{{\mathcal {C}}}(x)\), then by some elementary calculations and Lemma 2.1, one has

    $$\begin{aligned} y^{k+1} = \sigma ^{-1}p_{\sigma }\left( \sigma x^k+z^k \right) ,\,\,\,p_{\sigma }(x):= \Pi _{{\mathcal {C}}_{\sigma }}\circ \mathrm{Prox}_{\lambda \Vert \cdot \Vert _{1,\mathrm{off}}}(x), \end{aligned}$$

    where \({\mathcal {C}}_{\sigma }:=\{x\in {\mathbb {S}}^n:\Vert x\Vert \le \sigma \alpha \}\).

  • Closed-form expression of \(x^{k+1}\): from the definition of the augmented Lagrangian function, one has

    $$\begin{aligned} x^{k+1} = \mathop {\mathrm {arg\,min}}\limits _{x\succ 0} f(x) + \frac{\sigma }{2}\left\| x-y^{k+1}+(z^{k}+{\hat{s}})/\sigma \right\| ^2 = \mathrm{Prox}_{\sigma ^{-1}f}\left( y^{k+1}-(z^{k}+{\hat{s}})/\sigma \right) . \end{aligned}$$

    Specifically, let \({\tilde{x}}:=y^{k+1}-(z^{k}+{\hat{s}})/\sigma \) have the eigenvalue decomposition \({\tilde{x}}=PDP^T\) with \(D=\mathrm{diag}(d)\); then, from Wang et al. (2010, Lemma 3.1), one has

    $$\begin{aligned} x^{k+1} = P\mathrm{diag}\left( \phi ^+_{\gamma }(d)\right) P^T,\,\,\, \gamma :=\sigma ^{-1}, \end{aligned}$$

    where \([\phi ^+_{\gamma }(d)]_i = \big (d_i+\sqrt{d_i^2+4\gamma }\big )/2,\,\,i=1,\ldots ,n\).

  • Stopping criteria: based on the KKT condition for problem (3.1) and Proposition 3.1, we terminate the algorithm when the relative KKT residual is less than \(10^{-5}\) or the maximum number of iterations, 20,000, is reached. Here, the relative KKT residual is given by

    $$\begin{aligned} \eta =\max \{\eta _p, \eta _d, \eta _c\}, \end{aligned}$$

    where \(\eta _p= \frac{\Vert x-y\Vert }{1+\Vert x\Vert },\,\,\eta _d = \frac{\Vert x-\mathrm{Prox}_f(x-z-{\hat{s}})\Vert }{1+\Vert x\Vert }\), and \(\eta _c = \frac{\Vert y-\mathrm{Prox}_p(y+z)\Vert }{1+\Vert y\Vert }\).
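Assembling the updates above, a minimal NumPy sketch of Algorithm 1 could read as follows (an illustrative implementation with our own naming; it reuses prox_l1_offdiag, project_frobenius_ball, and prox_p from the sketch after Lemma 2.1, adopts the plain dual update \(z^{k+1}=z^k+\sigma (x^{k+1}-y^{k+1})\), and omits refinements such as a dual step length or an adaptive \(\sigma \)):

```python
import numpy as np

def prox_logdet(a, gamma):
    """Prox of gamma * (-log det) at the symmetric matrix a [cf. Wang et al. (2010, Lemma 3.1)]."""
    d, P = np.linalg.eigh(a)
    phi = (d + np.sqrt(d ** 2 + 4.0 * gamma)) / 2.0   # positive eigenvalues of the minimizer
    return (P * phi) @ P.T

def admm_rggm(s_hat, lam, alpha, sigma=1.0, tol=1e-5, max_iter=20000):
    """ADMM for problem (3.1): min -log det x + <s_hat, x> + lam*||y||_{1,off} + I_C(y), s.t. x = y."""
    n = s_hat.shape[0]
    x, y, z = np.eye(n), np.eye(n), np.zeros((n, n))
    for _ in range(max_iter):
        # y-update: prox of p/sigma at x^k + z^k/sigma (Lemma 2.1 with lam/sigma)
        y = project_frobenius_ball(prox_l1_offdiag(x + z / sigma, lam / sigma), alpha)
        # x-update: prox of f/sigma at y^{k+1} - (z^k + s_hat)/sigma
        x = prox_logdet(y - (z + s_hat) / sigma, 1.0 / sigma)
        # dual update
        z = z + sigma * (x - y)
        # relative KKT residual eta = max{eta_p, eta_d, eta_c}
        eta_p = np.linalg.norm(x - y) / (1.0 + np.linalg.norm(x))
        eta_d = np.linalg.norm(x - prox_logdet(x - z - s_hat, 1.0)) / (1.0 + np.linalg.norm(x))
        eta_c = np.linalg.norm(y - prox_p(y + z, lam, alpha)) / (1.0 + np.linalg.norm(y))
        if max(eta_p, eta_d, eta_c) < tol:
            break
    return x, y, z
```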

4 Numerical experiments

In this section, we evaluate the performance of the proposed model (1.2) by comparing it with the sparse Gaussian graphical model (sGGM) with a positive semidefinite covariance estimate and with the sGGM with the IPW estimator. All three models are solved by ADMM. Specifically, we consider the following problems:

$$\begin{aligned} \begin{array}{ll} \min \limits _{x\succ 0} f(x)+\langle {\hat{s}},x\rangle +p(x), \end{array} \end{aligned}$$
(4.1)

where the covariance estimate \({\hat{s}}\) and the regularizer \(p:{\mathbb {S}}^n\rightarrow \Re \cup \{+\infty \}\) take, respectively, the following forms: given an IPW estimator \(s\in {\mathbb {S}}^n\),

  • for sGGM with a positive semidefinite covariance estimate (sGGM in short): \(p(x):=\lambda \Vert x\Vert _{1,\mathrm{off}}\) and \({\hat{s}} = \Pi _{{\mathbb {S}}^n_+}(s)\). More specifically, let the given estimator s admit the eigenvalue decomposition \(s=U\mathrm{Diag}(\mu _1,\ldots ,\mu _n)U^T\) with U an orthogonal matrix; then \({\hat{s}}=U\mathrm{Diag}({\hat{\mu }}_1,\ldots ,{\hat{\mu }}_n)U^T\) with \({\hat{\mu }}_i=\max \{\mu _i,0\},\,i=1, \ldots ,n\). Here, for a given vector \(\mu \), \(\mathrm{Diag}(\mu )\) is the diagonal matrix whose diagonal elements are the components of \(\mu \).

  • for sGGM with IPW estimator (rGGM0 in short): \(p(x):=\lambda \Vert x\Vert _{1,\mathrm{off}}\) and \({\hat{s}}=s\);

  • for rGGM: \(p(x):=\lambda \Vert x\Vert _{1,\mathrm{off}}+{\mathbb {I}}_{{\mathcal {C}}}(x)\) and \({\hat{s}}=s\).

In this part, the IPW estimator \(s\in {\mathbb {S}}^n\) is taken from Kolar and Xing (2012). Given a set of partially observed samples \(\{\xi _i\in {\mathbb {R}}^n,\,\,i=1,\ldots ,m\}\), let \( \delta \in \Re ^{m\times n}\) be the matrix with elements given by

$$\begin{aligned} \delta _{ij}=\left\{ \begin{array}{ll} 1, &{} \hbox {if } \xi _{ij} \hbox { is observed};\\ 0, &{} \hbox {otherwise}. \end{array} \right. \end{aligned}$$

Then, the (ij)-th element of the IPW estimator s is taken as

$$\begin{aligned} s_{ij} = \frac{\sum ^m_{k=1}\delta _{ki}\delta _{kj}\left( \xi _{ki}-{\hat{\mu }}_i\right) \left( \xi _{kj}-{\hat{\mu }}_j\right) }{\sum ^{m}_{k=1}\delta _{ki}\delta _{kj}},\,\,\, i,j=1,\ldots ,n, \end{aligned}$$

where \({\hat{\mu }}_i=(\sum ^m_{k=1}\delta _{ki})^{-1}\sum ^m_{k=1}\delta _{ki}\xi _{ki}\).
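A minimal NumPy sketch of this estimator (an editorial addition with our own naming; variable pairs with no jointly observed samples would need special handling, which is ignored here):

```python
import numpy as np

def ipw_covariance(xi, delta):
    """Pairwise available-case (IPW) covariance estimator.

    xi    : (m, n) data matrix (arbitrary values at unobserved positions)
    delta : (m, n) 0/1 mask with delta[k, i] = 1 iff xi[k, i] is observed
    """
    counts = delta.sum(axis=0)                    # observations per variable
    mu_hat = (delta * xi).sum(axis=0) / counts    # available-case means
    centered = (xi - mu_hat) * delta              # zero out unobserved entries
    pair_counts = delta.T @ delta                 # jointly observed samples per pair (i, j)
    return (centered.T @ centered) / pair_counts  # elementwise division
```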

The performance of the precision matrix estimates obtained by the different methods is evaluated by the F1-Score:

$$\begin{aligned} \mathrm{F1} = \frac{2\times \mathrm{precision}\times \mathrm{recall}}{\mathrm{precision}+\mathrm{recall}}, \end{aligned}$$

where

$$\begin{aligned} \hbox {precision} = \frac{|{\hat{x}}\cap x^*|}{\hbox {number of nonzeros in } {\hat{x}}},\,\,\,\hbox {recall}= \frac{|{\hat{x}}\cap x^*|}{\hbox {number of nonzeros in } {x}^*}. \end{aligned}$$

Here, \(|{\hat{x}}\cap x^*|\) is the number of nonzeros in \(x^*\) that are estimated as nonzeros in \({\hat{x}}\), and \({\hat{x}}\) and \(x^*\) are the precision matrix estimated from problem (4.1) and the true precision matrix, respectively. The regularization parameters \(\lambda \) and \(\alpha \) are chosen by using the modified BIC criterion [see e.g. Städler and Bühlmann (2012)]:

$$\begin{aligned} \left( \lambda ^*,\alpha ^*\right) :=\mathop {\mathrm {arg\,min}}\limits _{\lambda ,\alpha }\, m\left( -\mathop {\mathrm {log\,det}}\limits \,{\hat{x}}(\lambda ,\alpha )+\left\langle {\hat{s}},{\hat{x}}(\lambda ,\alpha )\right\rangle \right) +\log (m) \sum _{i\le j} \mathbf{I}_{\left\{ {\hat{x}}(\lambda ,\alpha )_{ij}\ne 0\right\} }, \end{aligned}$$
(4.2)

where \({\hat{x}}(\lambda ,\alpha )\) is the optimal solution of problem (4.1), and \(\mathbf{I}_{\{{\hat{x}}(\lambda ,\alpha )_{ij}\ne 0\}}=1\) if \({\hat{x}}(\lambda ,\alpha )_{ij}\ne 0\), and 0 otherwise.
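For reference, the F1-Score of an estimated sparsity pattern against the true one can be computed as in the following sketch (an editorial addition; the function name is ours):

```python
import numpy as np

def f1_score_support(x_hat, x_true, tol=1e-8):
    """F1-Score of the support of x_hat with respect to the support of x_true."""
    est = np.abs(x_hat) > tol
    true = np.abs(x_true) > tol
    hits = np.logical_and(est, true).sum()   # nonzeros of x_true recovered in x_hat
    precision = hits / max(est.sum(), 1)
    recall = hits / max(true.sum(), 1)
    return 0.0 if precision + recall == 0 else 2 * precision * recall / (precision + recall)
```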

4.1 Simulation 1

The data generating processes in this part are taken from Kolar and Xing (2012), Städler and Bühlmann (2012), and Rothman et al. (2008). Assume that \(\xi _1,\ldots ,\xi _m\) are i.i.d. \(\sim {\mathcal {N}}(0,s)\), where the true covariance (or precision) matrix is specified by one of the following models (a sketch generating them follows their descriptions):

Model 1: \(\sigma _{ij}=0.7^{|i-j|},\,\,i,j=1,\ldots ,n\);

Model 2: for \(i,j=1,\ldots ,n\), the (ij)-th element of covariance matrix \(\Sigma \):

$$\begin{aligned} \sigma _{ij} = \mathbf{I}_{\{i=j\}}+0.4\mathbf{I}_{\{|i-j|=1\}}+0.2\mathbf{I}_{\{|i-j|=2\}}+0.2\mathbf{I}_{\{|i-j|=3\}}+0.1\mathbf{I}_{\{|i-j|=4\}}, \end{aligned}$$

where \(\mathbf{I}_{\{i=j\}}\) represents a function which is 1 if \(i=j\) and 0 otherwise;

Model 3: \(\Omega =B+\gamma I\), where each off-diagonal entry of B is generated independently and equals 0.5 with probability \(\zeta =0.1\) and 0 with probability \(1-\zeta \), the diagonal entries of B are zero, and \(\gamma \) is chosen such that the condition number of \(\Omega =\Sigma ^{-1}\) equals n.
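The three models can be generated along the following lines (a sketch reflecting our reading of the descriptions above; in particular, B in Model 3 is symmetrized and \(\gamma \) is computed so that the condition number of \(\Omega \) equals n):

```python
import numpy as np

def model1_cov(n):
    """Model 1: sigma_ij = 0.7 ** |i - j|."""
    idx = np.arange(n)
    return 0.7 ** np.abs(idx[:, None] - idx[None, :])

def model2_cov(n):
    """Model 2: banded covariance with weights 1, 0.4, 0.2, 0.2, 0.1 for |i - j| = 0, ..., 4."""
    idx = np.arange(n)
    dist = np.abs(idx[:, None] - idx[None, :])
    sigma = np.zeros((n, n))
    for d, w in {0: 1.0, 1: 0.4, 2: 0.2, 3: 0.2, 4: 0.1}.items():
        sigma[dist == d] = w
    return sigma

def model3_precision(n, zeta=0.1, seed=None):
    """Model 3: Omega = B + gamma * I with zero-diagonal B and cond(Omega) = n."""
    rng = np.random.default_rng(seed)
    upper = 0.5 * (rng.random((n, n)) < zeta)
    B = np.triu(upper, k=1)
    B = B + B.T                               # symmetric, zero diagonal
    lam = np.linalg.eigvalsh(B)
    gamma = (lam[-1] - n * lam[0]) / (n - 1)  # so that (lam_max + gamma)/(lam_min + gamma) = n
    return B + gamma * np.eye(n)
```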

For each model, we obtain the training datasets by deleting \(r\%\) (\(r=10,20,30\)) of the entries of the synthetic data completely at random. It has been pointed out in Rothman et al. (2008) that in Model 1 and Model 2 the number of nonzeros in \(s^{-1}\) is proportional to n, whereas in Model 3 it is proportional to \(n^2\). That is, the precision matrices generated by Model 1 and Model 2 are strongly sparse, while the one generated by Model 3 is only weakly sparse.

We evaluate the performance of rGGM (1.2) by comparing it with that of sGGM and rGGM0. The parameters used to obtain the precision matrix estimates are selected by (4.2). Specifically, the optimal \(\alpha \) is selected from the set \(\{10, 9, 8,\ldots , 1\}\), and the optimal regularization parameters \(\lambda \) for Model 1, Model 2, and Model 3 are selected from {0.5, 0.45, ..., 0.35}, {0.2, 0.15, ..., 0.05}, and {0.075, 0.0725, ..., 0.06}, respectively. The average performance over 50 simulation runs is presented in Table 1. We also record the computation time of the ADMM used in rGGM: the instances with \(m=100,n=100\) take less than 0.15 s, those with \(m=150, n=200\) take no more than 0.4 s, and those with \(m=200, n=500\) take less than 4 s. From the table, we can see that the F1-Scores of all three methods tend to decrease as the missing rate increases in most cases.

For Model 1, which has the strongest sparsity among the three data generating processes, the F1-Score obtained by rGGM is significantly better than those of the other two methods. The main reason is that the number of nonzeros in the precision matrix obtained by rGGM agrees well with that of the true precision matrix, which can be observed from the precision and recall values of each estimation method. As the sparsity decreases, the advantage of rGGM in F1-Score diminishes. However, for the highest dimensional cases of all three data generating processes, the F1-Score obtained by rGGM is still better than those of the other two methods. For better illustration, we also give boxplots (Fig. 1) for the data generated by the three models with \(m=200, n=500, r =30\).

Fig. 1

Comparison of F1-Score for sGGM, rGGM0, and rGGM with data generated by Model 1, Model 2, and Model 3 based on 50 simulation runs (\(m=200, n=500, r =30\))

Table 1 Comparison of average precision, recall, and F1-Score for sGGM, rGGM0, and rGGM with data generated by Model 1, Model 2, and Model 3 based on 50 simulation runs

4.2 Simulation 2: protein

In this part, we generate data based on the real dataset protein (\(m=6621,\,n=357\)), which can be obtained from the LIBSVM library (Chang and Lin 2011). The dataset protein contains three classes, with sample sizes 3112, 2323, and 1186, respectively.

The datasets used in this part are generated by the following steps. For each class, we first compute the sample covariance matrix \(s_i\in {\mathbb {S}}^{357}_+\) (\(i=1,2,3\)) based on the complete data of the dataset protein and select edges by using the sparse Gaussian graphical model (1.1); the selected edges are taken as the “true” edges. Then, we randomly generate 50 datasets with samples from the multivariate Gaussian distribution \({\mathcal {N}}(0,s_i)\). For each synthetic dataset, we further delete \(r\%\) of the entries completely at random, \(r=10,20,30\). Finally, we evaluate the performance of sGGM, rGGM0, and rGGM in terms of the F1-Score based on the 50 synthetic datasets.

The numerical results are visualized by box plots; see Fig. 2. We can see from the figure that the F1-Score obtained by rGGM is higher than those of the other two methods in 6 out of 9 cases. For the other 3 cases, the F1-Score of rGGM is still comparable with those of the other two methods.

Fig. 2

Comparison of F1-Score for sGGM, rGGM0, and rGGM with 3 class datasets generated from protein. Here, the sample sizes corresponding to each class are 3112, 2323, and 1186, respectively

Fig. 3

Comparison of F1-Score for sGGM, rGGM0, and rGGM with class student generated from the data set University Webpages

4.3 Simulation 3: university webpages

For illustration, we also explore the performance of rGGM on the dataset University Webpages, which was originally collected by the World Wide Knowledge Base (WebKB) project of the CMU text learning group. In this part, we use the pre-processed dataset webkb-test-stemmed, which contains four different classes: student, staff, course, and project. More specifically, we select the class student, which has the largest sample size \(m=544\). The pre-processing details can be found in Cardoso-Cachopo (2007, Section 2.8). We further process the dataset and obtain the term-document matrix \(X\in \Re ^{544\times n}\) by the log-entropy weighting method [see e.g. Dumais (1991), Guo et al. (2011)]. Here, we take \(n=300\).

Since the term-document matrix \(X\in \Re ^{544\times 300}\) is very sparse (the percentage of nonzeros is \(10.1\%\)), we delete \(r\%\) of the nonzero entries completely at random (\(r=10,20,30\)) and evaluate the performance of sGGM, rGGM0, and rGGM in terms of the F1-Score based on 50 synthetic datasets. The results are visualized in Fig. 3. As shown in the figure, the F1-Scores of all methods tend to decrease as the missing rate increases. Nevertheless, rGGM still achieves a better F1-Score on the class student of the dataset University Webpages.

5 Conclusion

In this paper, we proposed an estimator of a sparse precision matrix based on partially observed data. The estimator is obtained from a robust convex optimization problem that can be solved by a linearly convergent first-order method. The numerical results showed that the resulting estimators usually have satisfactory F1-Scores. Since it is commonly accepted that nonconvex regularizers perform better than convex ones, in future work we will design efficient numerical algorithms for the robust Gaussian graphical model with nonconvex regularizers, building on the theoretical analysis of its convex counterpart.