Abstract
The alternating direction method of multipliers (ADMM) has proved effective for solving separable convex optimization problems subject to linear constraints. In this paper, we propose a generalized symmetric ADMM (GS-ADMM), which updates the Lagrange multiplier twice with suitable stepsizes, for solving multi-block separable convex programming. GS-ADMM partitions the variables into two groups, one consisting of p block variables and the other of q block variables, where \(p \ge 1\) and \(q \ge 1\) are integers. The two groups are updated in a Gauss–Seidel scheme, while the variables within each group are updated in a Jacobi scheme, which makes the method attractive in a big data setting. By adding proper proximal terms to the subproblems, we specify a domain of the stepsizes that guarantees GS-ADMM is globally convergent with a worst-case \({\mathcal {O}}(1/t)\) ergodic convergence rate. This convergence domain of the stepsizes is significantly larger than those in the literature, so GS-ADMM is more flexible in choosing and using larger stepsizes for the dual variable. Besides, two special cases of GS-ADMM, which allow zero proximal terms, are also discussed and analyzed. Preliminary numerical experiments on a sparse matrix minimization problem from statistical learning show that, compared with several state-of-the-art methods, the proposed method is effective and promising.
1 Introduction
We consider the following grouped multi-block separable convex programming problem
where \(f_i(x_i):\mathcal {R}^{m_i}\rightarrow \mathcal {R},\ g_j(y_j):\mathcal {R}^{d_j}\rightarrow \mathcal {R}\) are closed proper convex functions (possibly nonsmooth); \(A_i\in \mathcal {R}^{n\times m_i}, B_j\in \mathcal {R}^{n\times d_j}\) and \( c\in \mathcal {R}^{n}\) are given matrices and vectors, respectively; \(\mathcal {X}_i\subset \mathcal {R}^{m_i}\) and \(\mathcal {Y}_j\subset \mathcal {R}^{d_j} \) are closed convex sets; and \(p \ge 1\) and \(q \ge 1\) are two integers. Throughout this paper, we assume that the solution set of the problem (1) is nonempty and that all the matrices \(A_i\), \(i=1,\ldots , p\), and \(B_j\), \(j=1,\ldots ,q\), have full column rank. In the following, we denote \(\mathcal {A}=\left( A_1, \ldots ,A_p\right) ,\ \mathcal {B}=\left( B_1, \ldots ,B_q\right) \), \({\mathbf{x}} = (x_1 ^\mathsf{T}, \ldots , x_p ^\mathsf{T}) ^\mathsf{T}\), \({\mathbf{y}} = (y_1 ^\mathsf{T}, \ldots , y_q ^\mathsf{T}) ^\mathsf{T}\), \({\mathcal {X}} = {\mathcal {X}}_1 \times {\mathcal {X}}_2 \times \cdots \times {\mathcal {X}}_p \), \({\mathcal {Y}} = {\mathcal {Y}}_1 \times {\mathcal {Y}}_2 \times \cdots \times {\mathcal {Y}}_q \) and \(\mathcal {M}=\mathcal {X} \times \mathcal {Y} \times \mathcal {R}^n\).
In the last few years, the problem (1) has been extensively investigated due to its wide applications in different fields, such as the sparse inverse covariance estimation problem [21] in finance and statistics, the model updating problem [4] in the design of vibrating structural dynamic systems and bridges, the low-rank and sparse representation problem [19] in image processing, and so forth. One standard way to solve the problem (1) is the classical Augmented Lagrangian Method (ALM) [10], which minimizes the following augmented Lagrangian function
where \(\beta >0\) is a penalty parameter for the equality constraint and
is the Lagrangian function of the problem (1) with the Lagrange multiplier \(\lambda \in \mathcal {R}^{n}\). Then, the ALM procedure for solving (1) can be described as follows:
However, ALM does not make full use of the separable structure of the objective function of (1) and hence cannot take advantage of the special properties of the component objective functions \(f_i\) and \(g_j\). As a result, in many recent applications involving big data, solving the subproblems of ALM becomes very expensive.
One effective approach to overcome this difficulty is the Alternating Direction Method of Multipliers (ADMM), which was originally proposed in [8] and can be regarded as a splitting version of ALM. At each iteration, ADMM first sequentially minimizes over each block variable while fixing all the others, and then updates the Lagrange multiplier. A natural extension of ADMM for solving the multi-block problem (1) takes the following iterations:
Obviously, the scheme (3) is a serial algorithm which uses the newest information on the variables at each iteration. Although this scheme was proved to be convergent for the two-block case, i.e., \(p=q=1\) (see [11]), it was shown in [3] that the direct extension (3) of ADMM to the multi-block case, i.e., \(p+q \ge 3\), without proper modifications is not necessarily convergent. Another natural extension of ADMM is the Jacobi fashion, where the variables are updated simultaneously after each iteration, that is,
As shown in [13], however, the Jacobi scheme (4) is not necessarily convergent either. To ensure convergence, He et al. [14] proposed a novel ADMM-type splitting method which, by adding certain proximal terms, allows some of the subproblems to be solved in parallel, i.e., in a Jacobi fashion. In [14], some sparse low-rank models and image inpainting problems were tested to verify the efficiency of their method.
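To make the two-block iteration concrete before turning to symmetric variants, the following is a minimal numerical sketch (our own toy example, not from the paper) of the classical ADMM, i.e., the scheme (3) with \(p=q=1\), applied to \(\min \tfrac{1}{2}\Vert x-a\Vert ^2 + \tfrac{1}{2}\Vert y-b\Vert ^2\) subject to \(x+y=c\), where both subproblems have closed-form solutions:

```python
import numpy as np

def two_block_admm(a, b, c, beta=1.0, iters=100):
    """Classical two-block ADMM on
        min 0.5*||x-a||^2 + 0.5*||y-b||^2  s.t.  x + y = c.
    Both subproblems are unconstrained quadratics, hence closed-form."""
    x = np.zeros_like(a)
    y = np.zeros_like(a)
    lam = np.zeros_like(a)
    for _ in range(iters):
        # x-subproblem: minimize the augmented Lagrangian over x (y, lam fixed)
        x = (a - lam - beta * (y - c)) / (1.0 + beta)
        # y-subproblem: uses the newest x (Gauss-Seidel order)
        y = (b - lam - beta * (x - c)) / (1.0 + beta)
        # multiplier update with unit stepsize
        lam = lam + beta * (x + y - c)
    return x, y, lam
```

For instance, with \(a=1,\ b=2,\ c=1\) the KKT solution is \(x=0,\ y=1,\ \lambda =1\), which the iteration reaches quickly.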
More recently, a Symmetric ADMM (S-ADMM) was proposed by He et al. [16] for solving two-block separable convex minimization, which performs the following updating scheme:
and the stepsizes \((\tau ,s)\) were restricted into the domain
in order to ensure its global convergence. The main improvement of [16] is that the scheme (5) largely extends the domain of the stepsizes \((\tau ,s)\) used by other ADMM-type methods [12]. Moreover, the numerical performance of S-ADMM on the widely used basis pursuit model and the total-variation image deblurring model significantly outperforms the original ADMM in both CPU time and number of iterations. Besides, Gu et al. [9] studied a semi-proximal strictly contractive Peaceman–Rachford splitting method, that is, (5) with two additional proximal penalty terms for the x- and y-updates. However, their method has a nonsymmetric convergence domain for the stepsizes and still focuses on the two-block problem, which limits its applications to large-scale problems with multiple block variables.
Mainly motivated by the work of [9, 14, 16], we generalize S-ADMM, with a much wider convergence domain of the stepsizes, to tackle the multi-block separable convex programming model (1), which frequently appears in recent applications involving big data [2, 20]. Our algorithm framework can be described as follows:
In the above generalized symmetric ADMM (GS-ADMM), \(\tau \) and s are two stepsize parameters satisfying
and \(\sigma _1\in (p-1,+\infty ),\sigma _2\in (q-1,+\infty )\) are two proximal parameters for the regularization terms \(P_i^k(\cdot )\) and \(Q_j^k(\cdot )\). He and Yuan [15] also investigated the above GS-ADMM (7), but restricted the stepsizes to \(\tau =s\in (0,1)\), which does not exploit the advantage of the flexible stepsizes given in (8) for improving convergence.
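To fix ideas, the structure of one GS-ADMM iteration can be sketched as follows, with the proximal subproblem solvers abstracted into callbacks since their form depends on \(f_i\), \(g_j\) and the proximal terms (all function and variable names are ours). The two multiplier updates follow \(\lambda ^{k+\frac{1}{2}}=\lambda ^k-\tau \beta (\mathcal {A}{\mathbf{x}}^{k+1}+\mathcal {B}{\mathbf{y}}^{k}-c)\) and \(\lambda ^{k+1}=\lambda ^{k+\frac{1}{2}}-s\beta (\mathcal {A}{\mathbf{x}}^{k+1}+\mathcal {B}{\mathbf{y}}^{k+1}-c)\), as used in the analysis below:

```python
import numpy as np

def gs_admm_step(x, y, lam, A, B, c, beta, tau, s, solve_x, solve_y):
    """One iteration of the GS-ADMM skeleton (7): Jacobi updates within
    each group, Gauss-Seidel between the two groups, and two Lagrange
    multiplier updates with stepsizes tau and s."""
    p, q = len(x), len(y)
    # x-group: each block is solved from the same old iterates (Jacobi),
    # so the p subproblems could run in parallel.
    x_new = [solve_x[i](i, x, y, lam) for i in range(p)]
    r_half = sum(A[i] @ x_new[i] for i in range(p)) \
           + sum(B[j] @ y[j] for j in range(q)) - c
    lam_half = lam - tau * beta * r_half          # first multiplier update
    # y-group: uses the new x-group and lam_half (Gauss-Seidel between groups),
    # but only old y-blocks within the group (Jacobi).
    y_new = [solve_y[j](j, x_new, y, lam_half) for j in range(q)]
    r_full = sum(A[i] @ x_new[i] for i in range(p)) \
           + sum(B[j] @ y_new[j] for j in range(q)) - c
    lam_new = lam_half - s * beta * r_full        # second multiplier update
    return x_new, y_new, lam_new
```

With trivial solvers that leave the blocks unchanged, the net multiplier change over one iteration is \(-(\tau +s)\beta \) times the residual, which makes the role of the two stepsizes transparent.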
Major contributions of this paper can be summarized in the following four aspects:
-
Firstly, the new GS-ADMM can deal with the multi-block separable convex programming problem (1), while the original S-ADMM in [16] only works for the two-block case and may not be convenient for solving large-scale problems. In addition, the convergence domain \(\mathcal {K}\) for the stepsizes \((\tau , s)\) in (8), shown in Fig. 1, is significantly larger than the domain \(\mathcal {H}\) given in (6) and the convergence domains in [9, 15]. For example, the stepsize s can be arbitrarily close to 5/3 when the stepsize \(\tau \) is close to \(-1/3\). Moreover, the domain in (8) is later enlarged to a symmetric domain \({\mathcal {G}}\) defined in (73), shown in Fig. 2. Numerical experiments in Sect. 5.2.1 also validate that using more flexible and relatively larger stepsizes \((\tau ,s)\) can often improve the convergence speed of GS-ADMM. On the other hand, when \(\tau =0\), the stepsize s can be chosen in the interval \((0, (\sqrt{5}+1)/2)\), which was first suggested by Fortin and Glowinski in [5, 7].
-
Secondly, the global convergence of GS-ADMM as well as its worst-case \({\mathcal {O}}(1/t)\) ergodic convergence rate is established. Moreover, the \(p+q\) block variables are partitioned into two groups. While a Gauss–Seidel fashion is taken between the two groups, the block variables within each group are updated in a Jacobi scheme. Hence, parallel computing can be used for updating the variables within each group, which could be critical in some scenarios for problems involving big data.
-
Thirdly, we discuss two special cases of GS-ADMM, namely (7) with \(p\ge 1, q=1\) and \(\sigma _2=0\), or with \(p=1,q\ge 1\) and \(\sigma _1=0\). These two special cases were not discussed in [15] and, to the best of our knowledge, have not been studied in the literature. We show that the convergence domain of the stepsizes \((\tau , s)\) for these two cases is still the set \({\mathcal {K}}\) defined in (8), which is larger than \(\mathcal {H}\).
-
Finally, numerical experiments are performed on a sparse matrix optimization problem arising from statistical learning. We investigate the effects of the stepsizes \((\tau ,s)\) and the penalty parameter \(\beta \) on the performance of GS-ADMM. Our numerical experiments demonstrate that, by properly choosing the parameters, GS-ADMM can perform significantly better than several recent popular methods developed in [1, 14, 17, 23].
The paper is organized as follows. In Sect. 2, some preliminaries are given to reformulate the problem (1) as a variational inequality and to interpret the GS-ADMM (7) as a prediction–correction procedure. Section 3 investigates some properties of \(\Vert {\mathbf{w}}^k-{\mathbf{w}}^*\Vert _H^2\) and provides a lower bound of \(\Vert {\mathbf{w}}^{k}-\widetilde{{\mathbf{w}}}^k\Vert _G^2\), where H and G are particular symmetric matrices. Then, we establish the global convergence of GS-ADMM and show its convergence rate in an ergodic sense. In Sect. 4, we discuss two special cases of GS-ADMM, in which either of the proximal parameters \(\sigma _1\) or \(\sigma _2\) is allowed to be zero. Some preliminary numerical experiments are reported in Sect. 5. We finally draw some conclusions in Sect. 6.
1.1 Notation
Denote by \(\mathcal {R}, \mathcal {R}^n, \mathcal {R}^{m\times n}\) the set of real numbers, the set of n-dimensional real column vectors and the set of \(m\times n\) real matrices, respectively. For any \(x, y\in \mathcal {R}^n\), \(\langle x, y\rangle =x ^\mathsf{T}y\) denotes their inner product and \(\Vert x\Vert =\sqrt{\langle x, x\rangle }\) denotes the Euclidean norm of x, where the superscript \(^\mathsf{T}\) denotes the transpose. Given a symmetric matrix G, we define \(\Vert x\Vert _G^2 = x ^\mathsf{T}G x\). Note that, with this convention, \(\Vert x\Vert _G^2\) is not necessarily nonnegative unless G is a positive definite matrix (\(G \succ \mathbf 0 \)). For convenience, we use I and \(\mathbf{0}\) to stand for the identity matrix and the zero matrix of proper dimensions, respectively, throughout the context.
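Since the analysis below works with G-weighted "norms" for matrices G that need not be positive definite, a one-line numerical illustration of this convention may be helpful (our toy example):

```python
import numpy as np

# With an indefinite symmetric G, ||x||_G^2 = x^T G x can be negative.
G = np.array([[1.0,  0.0],
              [0.0, -2.0]])   # symmetric, eigenvalues 1 and -2
x = np.array([1.0, 1.0])
norm_G_sq = x @ G @ x          # 1*1 + (-2)*1 = -1
```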
2 Preliminaries
In this section, we first use a variational inequality to characterize the solution set of the problem (1). Then, we show that GS-ADMM (7) can be interpreted as a prediction–correction procedure involving a prediction step and a correction step.
2.1 Variational reformulation of (1)
We begin with the following standard lemma whose proof can be found in [16, 18].
Lemma 1
Let \(f:\mathcal {R}^m\longrightarrow \mathcal {R}\) and \(h:\mathcal {R}^m\longrightarrow \mathcal {R}\) be two convex functions defined on a closed convex set \(\varOmega \subset \mathcal {R}^m\), where h is differentiable. Suppose that the solution set \(\varOmega ^* = \arg \min \limits _{x\in \varOmega }\{f(x)+h(x) \}\) is nonempty. Then, we have
It is well known in optimization that a triple \(({\mathbf{x}}^*, {\mathbf{y}}^*, \lambda ^*) \in {\mathcal {M}}\) is called a saddle point of the Lagrangian function (2) if it satisfies
which can be also characterized as
Then, by Lemma 1, the above saddle-point equations can be equivalently expressed as
Rewriting (9) in a more compact variational inequality (VI) form, we have
where
and
Noticing that the affine mapping \({\mathcal {J}}\) is skew-symmetric, we immediately get
Hence, (10) can be also rewritten as
Because of the nonemptiness assumption on the solution set of (1), the solution set \(\mathcal {M}^*\) of the variational inequality \(\text {VI}(h, \mathcal {J}, \mathcal {M})\) is also nonempty and convex; see, e.g., Theorem 2.3.5 of [6] for more details. The following theorem, from Theorem 2.1 of [11], provides a concrete way to characterize the set \(\mathcal {M}^*\).
Theorem 1
The solution set of the variational inequality \(\text {VI}(h, \mathcal {J}, \mathcal {M})\) is convex and can be expressed as
2.2 A prediction–correction interpretation of GS-ADMM
Following a similar approach in [16], we next interpret GS-ADMM as a prediction–correction procedure. First, let
and
Then, by using the above notations, we derive the following basic lemma.
Lemma 2
For the iterates \(\widetilde{{\mathbf{u}}}^k, \widetilde{{\mathbf{w}}}^k\) defined in (15), we have \(\widetilde{{\mathbf{w}}}^k\in \mathcal {M}\) and
where
with
Proof
Omitting some constants, it is easy to verify that the \(x_i\)-subproblem \((i=1,2,\ldots ,p)\) of GS-ADMM can be written as
where \(c_{x,i} = c - \sum \limits _{l=1,l\ne i}^{p}A_lx_l^k - \mathcal {B}{\mathbf{y}}^k\). Hence, by Lemma 1, we have \(x_i^{k+1}\in \mathcal {X}_i\) and
for any \( x_i\in \mathcal {X}_i\). So, by the definitions in (13) and (14), we get
for any \( x_i\in \mathcal {X}_i\). By the way of generating \(\lambda ^{k+\frac{1}{2}}\) in (7) and the definition of (14), the following relation holds
Similarly, the \(y_j\)-subproblem (\(j=1,\ldots ,q\)) of GS-ADMM can be written as
where \(c_{y,j} = c - \mathcal {A}{\mathbf{x}}^{k+1} - \sum \limits _{l=1,l\ne j}^{q}B_ly_l^k\). Hence, by Lemma 1, we have \(y_j^{k+1}\in \mathcal {Y}_j\) and
for any \(y_j \in {\mathcal {Y}}_j\). This inequality, by using (21) and the definitions in (13) and (14), can be rewritten as
for any \(y_j \in {\mathcal {Y}}_j\). Besides, the equality (14) can be rewritten as
which is equivalent to
Then, (16) follows from (20), (22) and (23). \(\square \)
Lemma 3
For the sequences \(\{{\mathbf{w}}^k\}\) and \(\{\widetilde{{\mathbf{w}}}^k\}\) generated by GS-ADMM, the following equality holds
where
Proof
It follows from the way of generating \(\lambda ^{k+1}\) in the algorithm and (21) that
The above equality, together with \(x_i^{k+1}=\widetilde{x}_i^k\), for \(i=1,\ldots ,p\), and \(y_j^{k+1}=\widetilde{y}_j^k\), for \(j=1,\ldots ,q\), implies
which immediately gives (24). \(\square \)
Lemmas 2 and 3 show that our GS-ADMM can be interpreted as a prediction–correction framework, where \(\widetilde{{\mathbf{w}}}^k\) and \({\mathbf{w}}^{k+1}\) are normally called the predictive variable and the correcting variable, respectively.
3 Convergence analysis of GS-ADMM
Comparing (16) with (12), the key to proving the convergence of GS-ADMM is to verify that the extra term in (16) converges to zero, that is,
In this section, we first investigate some properties of the sequence \(\{\Vert {\mathbf{w}}^k-{\mathbf{w}}^*\Vert _H^2\}\). Then, we provide a lower bound of \(\Vert {\mathbf{w}}^k-\widetilde{{\mathbf{w}}}^k\Vert _G^2\). Based on these properties, the global convergence and worst-case \(\mathcal {O}(1/t)\) convergence rate of GS-ADMM are established in the end.
3.1 Properties of \(\left\{ \left\| {\mathbf{w}}^k-{\mathbf{w}}^*\right\| _H^2\right\} \)
It follows from (11) and (16) that \( \widetilde{{\mathbf{w}}}^k\in \mathcal {M}\) and
Suppose \(\tau + s >0\). Then, the matrix M defined in (25) is nonsingular. So, by (24) and a direct calculation, the right-hand term of (26) can be rewritten as
where
with \(H_x\) defined in (18) and
The following lemma shows that H is a positive definite matrix for proper choice of the parameters \((\sigma _1,\sigma _2, \tau , s)\).
Lemma 4
The matrix H defined in (28) is symmetric positive definite if
Proof
By the block structure of H, we only need to show that the blocks \(H_\mathbf {x}\) and \(\widetilde{H}\) in (28) are positive definite if the parameters \((\sigma _1,\sigma _2, \tau , s)\) satisfy (30). Note that
where
If \(\sigma _1\in (p-1,+\infty )\), \(H_{\mathbf {x},0}\) is positive definite. Then, it follows from (31) that \(H_\mathbf {x}\) is positive definite if \(\sigma _1\in (p-1,+\infty )\) and all \(A_i\), \(i=1, \ldots , p\), have full column rank.
Now, note that the matrix \(\widetilde{H}\) can be decomposed as
where
and
According to the fact that
where
we have \(\widetilde{H}_0\) is positive definite if and only if \(H_{\mathbf {y},0}+(1-\tau )EE^\mathsf{T}\) is positive definite and \(\tau + s >0\). Note that \(H_{\mathbf {y},0}\) is positive definite if \(\sigma _2\in (q-1,+\infty )\), and \((1-\tau )EE^\mathsf{T}\) is positive semidefinite if \(\tau \le 1\). So, \(\widetilde{H}_0\) is positive definite if \(\sigma _2\in (q-1,+\infty )\), \( \tau \le 1\) and \(\tau + s >0\). Then, it follows from (33) that \(\widetilde{H}\) is positive definite, if \(\sigma _2\in (q-1,+\infty )\), \( \tau \le 1\), \(\tau + s >0\) and all the matrices \(B_j\), \(j=1, \ldots , q\), have full column rank.
Summarizing the above discussions, the matrix H is positive definite if the parameters \((\sigma _1,\sigma _2, \tau , s)\) satisfy (30). \(\square \)
Theorem 2
The sequences \(\{{\mathbf{w}}^k\}\) and \(\{\widetilde{{\mathbf{w}}}^k\}\) generated by GS-ADMM satisfy
where
Proof
By substituting
into the following identity
we have
Now, notice that
where the second equality holds by (24) and the fourth equality follows from (28). Then, (36) follows from (26)–(27), (38)–(39) and the definition of G in (37). \(\square \)
Theorem 3
The sequences \(\{{\mathbf{w}}^k\}\) and \(\{\widetilde{{\mathbf{w}}}^k\}\) generated by GS-ADMM satisfy
Proof
Setting \(\mathbf {w}={\mathbf{w}}^* \in \mathcal {M}^*\) in (36) we have
The above inequality together with (10) yields the inequality (40). \(\square \)
It can be observed that if \(\left\| {\mathbf{w}}^k-\widetilde{{\mathbf{w}}}^k\right\| _G^2\) is positive, then the inequality (40) implies the contractiveness of the sequence \(\{{\mathbf{w}}^k-{\mathbf{w}}^*\}\) under the H-weighted norm. However, the matrix G defined in (37) is not necessarily positive definite when \(\sigma _1\in (p-1,+\infty )\), \(\sigma _2\in (q-1,+\infty )\) and \((\tau , s) \in {\mathcal {K}}\). Therefore, it is necessary to estimate the lower bound of \(\left\| {\mathbf{w}}^k-\widetilde{{\mathbf{w}}}^k\right\| _G^2\) for the sake of the convergence analysis.
3.2 Lower bound of \(\left\| {\mathbf{w}}^k-\widetilde{{\mathbf{w}}}^k\right\| _G^2\)
This subsection provides a lower bound of \(\left\| {\mathbf{w}}^k-\widetilde{{\mathbf{w}}}^k\right\| _G^2\), for \(\sigma _1\in (p-1,+\infty )\), \(\sigma _2\in (q-1,+\infty )\) and \((\tau , s) \in {\mathcal {K}}\), where \({\mathcal {K}}\) is defined in (8).
By simple calculations, the G given in (37) can be explicitly written as
where \(H_\mathbf {x}\) is defined in (18) and
In addition, we have
where \(\widetilde{D}\) is defined in (34) and
Now, we present the following lemma which provides a lower bound of \(\Vert {\mathbf{w}}^k-\widetilde{{\mathbf{w}}}^k\Vert _G\).
Lemma 5
Suppose \(\sigma _1\in (p-1,+\infty )\) and \(\sigma _2\in (q-1,+\infty )\). For the sequences \(\{{\mathbf{w}}^k\}\) and \(\{ \widetilde{{\mathbf{w}}}^k \}\) generated by GS-ADMM, there exists \(\xi _1>0\) such that
Proof
First, it is easy to derive that
where
with \(H_{\mathbf {x},0}\) and \(\widetilde{G}_0\) defined in (32) and (42), respectively, and
In the above matrix \(\overline{G}_0\), E and \(H_{\mathbf {y},0}\) are defined in (35). Hence, we have
Since \(\sigma _1\in (p-1,+\infty )\) and \(\sigma _2\in (q-1,+\infty )\), \(H_{\mathbf {x},0}\) and \(H_{\mathbf {y},0}\) are positive definite. So, there exists a \(\xi _1 >0\) such that
In view of the definition of E in (35), we have
Then, the inequality (43) follows from (44) and (45). \(\square \)
Lemma 6
Suppose \( \tau >-1\). Then, the sequence \(\{{\mathbf{w}}^k\}\) generated by GS-ADMM satisfies
Proof
It follows from the optimality condition of \(y_l^{k+1}\)-subproblem that \(y_l^{k+1}\in \mathcal {Y}_l\) and for any \(y_l \in {\mathcal {Y}}_l\), we have
with \(c_{y,l} = c - {\mathcal {A}}{\mathbf{x}}^{k+1} - \sum \limits _{j=1,j\ne l}^{q}B_jy_j^k\), which implies
For \(l=1,2,\ldots ,q\), letting \(y_l=y_l^k\) in the above inequality and summing them together, we can deduce that
where
and \(H_{\mathbf {y},0}\) is defined in (35). Similarly, it follows from the optimality condition of \(y_l^k\)-subproblem that
For \(l=1,2,\ldots ,q\), letting \(y_l=y_l^{k+1}\) in the above inequality and summing them together, we obtain
Since \(\sigma _2 \in (q-1, \infty )\) and all \(B_j\), \(j=1,\ldots , q\), have full column rank, we have from (48) that \(H_\mathbf {y}\) is positive definite. Meanwhile, by the Cauchy–Schwarz inequality, we get
By adding (47) to (49) and using (50), we achieve
From the update of \(\lambda ^{k+\frac{1}{2}}\), i.e., \(\lambda ^{k+\frac{1}{2}}=\lambda ^k-\tau \beta \left( \mathcal {A}{\mathbf{x}}^{k+1}+\mathcal {B}{\mathbf{y}}^{k}-c\right) \) and the update of \(\lambda ^k\), i.e., \(\lambda ^k=\lambda ^{k-\frac{1}{2}}-s\beta \left( \mathcal {A}{\mathbf{x}}^{k}+\mathcal {B}{\mathbf{y}}^{k}-c\right) ,\) we have
Substituting the above inequality into the left-hand term of (51), the proof is completed. \(\square \)
Theorem 4
Suppose \(\sigma _1\in (p-1,+\infty )\), \(\sigma _2\in (q-1,+\infty )\) and \(\tau > -1\). For the sequences \(\{{\mathbf{w}}^k\}\) and \(\{ \widetilde{{\mathbf{w}}}^k \}\) generated by GS-ADMM , there exists \(\xi _1>0\) such that
Proof
The inequality (52) is directly obtained from (43) and (46). \(\square \)
The following theorem gives another variant of the lower bound of \(\Vert {\mathbf{w}}^k-\widetilde{{\mathbf{w}}}^k\Vert _G^2\), which plays a key role in showing the convergence of GS-ADMM.
Theorem 5
Let the sequences \(\{{\mathbf{w}}^k\}\) and \(\{ \widetilde{{\mathbf{w}}}^k \}\) be generated by GS-ADMM. Then, for any
where \({\mathcal {K}}\) is defined in (8), there exist constants \(\xi _i(i=1,2)>0\) and \(\xi _j(j=3,4)\ge 0\), such that
Proof
By the Cauchy–Schwarz inequality, we have
Since
we have \(\tau > -1\) when \((\tau , s) \in {\mathcal {K}}\). Then, combining (55) with Theorem 4, we deduce
where \(\xi _1 >0\) is the constant in Theorem 4. Since \( -1 < \tau \le 1 \) and \( \beta > 0 \), we have
In addition, when \((\tau , s) \in {\mathcal {K}} \), there is
which, by \(\tau > -1\) and \(\beta >0\), implies
Hence, the proof is completed. \(\square \)
3.3 Global convergence
In this subsection, we show the global convergence and the worst-case \({\mathcal {O}}(1/t)\) convergence rate of GS-ADMM. The following corollary is obtained directly from Theorems 2–3 and 5.
Corollary 1
Let the sequences \(\{{\mathbf{w}}^k\}\) and \(\{ \widetilde{{\mathbf{w}}}^k \}\) be generated by GS-ADMM. For any \((\sigma _1, \sigma _2, \tau , s)\) satisfying (53), there exist constants \(\xi _i(i=1,2)>0\) and \(\xi _j(j=3,4)\ge 0\) such that
and
Theorem 6
Let the sequences \(\{{\mathbf{w}}^k\}\) and \(\{ \widetilde{{\mathbf{w}}}^k \}\) be generated by GS-ADMM. Then, for any \((\sigma _1, \sigma _2, \tau , s)\) satisfying (53), we have
and there exists a \({\mathbf{w}}^\infty \in \mathcal {M}^*\) such that
Proof
Summing the inequality (59) over \(k=1,2,\ldots ,\infty \), we have
which implies that (61) and (62) hold since \(\xi _1 >0\) and \(\xi _2 >0\).
Because \((\sigma _1, \sigma _2, \tau , s)\) satisfies (53), we have by Lemma 4 that H is positive definite. So, it follows from (59) that the sequence \(\{{\mathbf{w}}^{k}\}\) is uniformly bounded. Therefore, there exists a subsequence \(\{{\mathbf{w}}^{k_j}\}\) converging to a point \({\mathbf{w}}^\infty = ({\mathbf{x}}^\infty , {\mathbf{y}}^\infty , \lambda ^\infty ) \in {\mathcal {M}}\). In addition, by the definitions of \(\widetilde{{\mathbf{x}}}^k\), \(\widetilde{{\mathbf{y}}}^k\) and \(\widetilde{\lambda }^k\) in (13) and (14), it follows from (61), (62) and the full column rank assumption on the matrices \(A_i\) and \(B_j\) that
for all \(i=1, \ldots , p\) and \(j=1, \ldots , q\). So, we have \(\lim \limits _{k \rightarrow \infty } {\mathbf{w}}^k - \widetilde{{\mathbf{w}}}^k = \mathbf{0}\). Thus, for any fixed \(\mathbf {w} \in {\mathcal {M}}\), taking \(\widetilde{{\mathbf{w}}}^{k_j}\) in (16) and letting j go to \(\infty \), we obtain
Hence, \({\mathbf{w}}^\infty \in {\mathcal {M}}^*\) is a solution point of \(\text {VI}(h, \mathcal {J}, \mathcal {M})\) defined in (12).
Since (59) holds for any \({\mathbf{w}}^* \in {\mathcal {M}}^*\), by (59) and \({\mathbf{w}}^\infty \in {\mathcal {M}}^*\), for all \(l \ge k_j\), we have
This, together with (62), (64), \(\lim \limits _{j \rightarrow \infty } {\mathbf{w}}^{k_j} = {\mathbf{w}}^\infty \) and the positive definiteness of H, implies \(\lim \limits _{l \rightarrow \infty } {\mathbf{w}}^l = {\mathbf{w}}^\infty \). Therefore, the whole sequence \(\{{\mathbf{w}}^k\}\) converges to the solution \({\mathbf{w}}^\infty \in {\mathcal {M}}^*\). This completes the proof. \(\square \)
The above Theorem 6 shows the global convergence of our GS-ADMM. Next, we show the \({\mathcal {O}}(1/t)\) convergence rate for the ergodic iterates
Theorem 7
Let the sequences \(\{{\mathbf{w}}^k\}\) and \(\{ \widetilde{{\mathbf{w}}}^k \}\) be generated by GS-ADMM. Then, for any \((\sigma _1, \sigma _2, \tau , s)\) satisfying (53), there exist \(\xi _j(j =3,4) \ge 0\) such that
Proof
For \(k=1, \ldots , t\), summing the inequality (60), we have
Since \((\sigma _1, \sigma _2, \tau , s)\) satisfy (53), \(H_\mathbf {y}\) is positive definite. And by Lemma 4, H is also positive definite. So, it follows from (68) that
By the convexity of h and (66), we have
Then, (67) follows from (69). \(\square \)
Remark 1
In the above Theorems 6 and 7, we assume the parameters \((\sigma _1, \sigma _2, \tau , s)\) satisfy (53). However, because of the symmetric roles played by the x and y iterates in GS-ADMM, by replacing the index \(k+1\) by k for the x and \(\lambda \) iterates, the GS-ADMM algorithm (7) can be equivalently written as
So, by applying Theorems 6 and 7 on the algorithm (70), it also converges and has the \({\mathcal {O}}(1/t)\) convergence rate when \((\sigma _1, \sigma _2, \tau , s)\) satisfy
where
Hence, the convergence domain \({\mathcal {K}}\) in Theorems 6 and 7 can be easily enlarged to the symmetric domain, shown in Fig. 2,
Remark 2
Theorem 6 implies that we can apply the following easily usable stopping criterion for GS-ADMM:
where \(tol >0\) is a given tolerance. On the other hand, Theorem 7 tells us that for any given compact set \(\mathcal {K}\subset \mathcal {M}\), if
we have
which shows that GS-ADMM has a worst-case \(\mathcal {O}(1/t)\) convergence rate in an ergodic sense.
4 Two special cases of GS-ADMM
Note that in GS-ADMM (7), the two proximal parameters \(\sigma _1\) and \(\sigma _2\) are required to be strictly positive for the general \(p+q\) block separable convex programming. However, when taking \(\sigma _1=\sigma _2=0\) for the two-block problem, i.e., \(p=q=1\), GS-ADMM reduces to the scheme (5), which is globally convergent [16]. These observations motivate us to further investigate the following two special cases:
4.1 Convergence for the case (a)
The corresponding GS-ADMM for the first case (a) can be simplified as follows:
The corresponding matrices Q, M, H and G in this case are the following:
where \(H_\mathbf {x}\) is defined in (18) and
It can be verified that H in (78) is positive definite as long as
Analogous to (44), we have
for some \(\xi _1 >0\), since \(H_{\mathbf {x},0}\) defined in (32) is positive definite. When \(\sigma _2 = 0\), by a slight modification of the proof of Lemma 6, we have the following lemma.
Lemma 7
Suppose \(\tau >-1\). The sequence \(\{{\mathbf{w}}^k\}\) generated by the algorithm (74) satisfies
Then, the following two theorems are similar to Theorems 5 and 6.
Theorem 8
Let the sequences \(\{{\mathbf{w}}^k\}\) and \(\{ \widetilde{{\mathbf{w}}}^k \}\) be generated by the algorithm (74). For any
where \({\mathcal {K}}\) is given in (8), there exist constants \(\xi _i(i=1,2)>0\) and \(\xi _3\ge 0\), such that
Proof
For any \((\tau , s) \in {\mathcal {K}}\), we have \(\tau > -1\). Then, by Lemma 7, the inequality (80) and the Cauchy–Schwarz inequality (55), we can deduce that (81) holds with \(\xi _1\) given in (80), and \(\xi _2\) and \(\xi _3\) given in (58) and (57), respectively. \(\square \)
Theorem 9
Let the sequences \(\{{\mathbf{w}}^k\}\) and \(\{ \widetilde{{\mathbf{w}}}^k \}\) be generated by the algorithm (74). For any
where
we have
and there exists a \({\mathbf{w}}^\infty \in \mathcal {M}^*\) such that \(\lim \limits _{k\rightarrow \infty } \widetilde{{\mathbf{w}}}^{k} = {\mathbf{w}}^\infty . \)
Proof
First, it is clear that Theorem 3 still holds, which, combining with Theorem 8, gives
Hence, (83) holds. For \((\sigma _1, \tau , s)\) satisfying (82), H in (78) is positive definite. So, by (84), \(\{{\mathbf{w}}^k\}\) is uniformly bounded and therefore, there exists a subsequence \(\{{\mathbf{w}}^{k_j}\}\) converging to a point \({\mathbf{w}}^\infty = ({\mathbf{x}}^\infty , y^\infty , \lambda ^\infty ) \in {\mathcal {M}}\). So, it follows from (83) and the full column rank assumption on the matrices \(A_i\) that
for all \(i=1, \ldots , p\). Hence, by \(\lim \limits _{j \rightarrow \infty } {\mathbf{w}}^{k_j} = {\mathbf{w}}^\infty \) and (83), we have
and therefore, by the full column rank assumption of B and (83),
Hence, by (85), we have \( \lim \limits _{j \rightarrow \infty } {\mathbf{w}}^{k_j} - \widetilde{{\mathbf{w}}}^{k_j} = {\mathbf{0}}\). Thus, by taking \(\widetilde{{\mathbf{w}}}^{k_j}\) in (16) and letting j go to \(\infty \), the inequality (65) still holds. The rest of the proof follows from that of Theorem 6. \(\square \)
Based on Theorem 8 and by a proof similar to that of Theorem 7, we can also show that the algorithm (74) has a worst-case \({\mathcal {O}}(1/t)\) convergence rate, which is omitted here for conciseness.
4.2 Convergence for the case (b)
The corresponding GS-ADMM for the case (b) can be simplified as follows
In this case, the corresponding matrices Q, M, H and G become \(\widetilde{Q},\widetilde{M}, \widetilde{H}\) and \(\widetilde{G}\), which are defined in (19), the lower-right block of M in (25), (29) and (41), respectively.
In what follows, let us define
Then, by the proof of Theorem 5, we can deduce the following theorem.
Theorem 10
Let the sequences \(\{\mathbf {v}^k\}\) and \(\{ \widetilde{\mathbf {v}}^k\}\) be generated by the algorithm (86). For any
where \({\mathcal {K}}\) is defined in (8), there exist constants \(\xi _i(i=1,2)>0\) and \(\xi _j(j=3,4)\ge 0\) such that
By slight modifications of the proof of Theorems 6 and 9, we have the following global convergence theorem.
Theorem 11
Let the sequences \(\{{\mathbf{w}}^k\}\) and \(\{ \widetilde{{\mathbf{w}}}^k \}\) be generated by the algorithm (86). Then, for any
where \({\mathcal {K}}\) is defined in (8), we have
and there exists a \({\mathbf{w}}^\infty \in \mathcal {M}^*\) such that \(\lim _{k\rightarrow \infty } \widetilde{{\mathbf{w}}}^{k} = {\mathbf{w}}^\infty . \)
By a proof similar to that of Theorem 7, the algorithm (86) also has a worst-case \({\mathcal {O}}(1/t)\) convergence rate.
Remark 3
Again, substituting the index \(k+1\) by k for the x and \(\lambda \) iterates, the algorithm (74) can also be written as
So, by applying Theorem 11 to the above algorithm, we know that the algorithm (74) also converges globally when \((\sigma _1, \tau , s)\) satisfies
where \(\overline{{\mathcal {K}}}\) is given in (72). Hence, the convergence domain \({\mathcal {K}}_1\) in Theorem 9 can be enlarged to the symmetric domain \({\mathcal {G}} = {\mathcal {K}}_1 \cup \overline{{\mathcal {K}}}\) given in (73). By a similar reason, the convergence domain \({\mathcal {K}}\) in Theorem 11 can be enlarged to \({\mathcal {G}}\) as well.
5 Numerical experiments
In this section, we investigate the performance of the proposed GS-ADMM for solving a class of sparse matrix minimization problems. All the algorithms are implemented in MATLAB 7.10 (R2010a) and run on a PC with an Intel Core i5 processor (3.3 GHz) and 4 GB of memory.
5.1 Test problem
Consider the following Latent Variable Gaussian Graphical Model Selection (LVGGMS) problem arising in the statistical learning [2, 20]:
where \(C\in \mathcal {R}^{n\times n}\) is the covariance matrix obtained from observations, \(\nu \) and \(\mu \) are two given positive weight parameters, \(\text {tr}(L)\) stands for the trace of the matrix L, and \(\Vert S\Vert _1=\sum _{ij}|S_{ij}|\). Clearly, two different ways of partitioning the variables of (87) lead the GS-ADMM (7) to the following two algorithms:
Note that all the subproblems in (88) and (89) have closed-form solutions. Next, we take the scheme (88) as an example to show how to obtain the explicit solutions of the subproblems. By the first-order optimality condition of the X-subproblem in (88), we derive
which is equivalent to
Then, from the eigenvalue decomposition
where \(\text {Diag}(\rho )\) is a diagonal matrix with \(\rho _i, i=1, \ldots , n\), on the diagonal, we obtain that the solution of the X-subproblem in (88) is
where \(\text {Diag}(\gamma )\) is the diagonal matrix with diagonal elements
On the other hand, the S-subproblem in (88) is equivalent to
where \(\text {Shrink}(\cdot ,\cdot )\) is the soft shrinkage operator (see, e.g., [22]). Meanwhile, it is easy to verify that the L-subproblem is equivalent to
where \(\max \{\rho , {\mathbf{0}}\}\) is taken component-wise and \(V\text {Diag}(\rho )V^\mathsf{T}\) is the eigenvalue decomposition of the matrix
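To make the closed-form updates above concrete, the following NumPy sketch implements the three spectral operators: the soft shrinkage for the S-subproblem, the eigenvalue truncation \(V\text {Diag}(\max \{\rho , {\mathbf{0}}\})V^\mathsf{T}\) for the L-subproblem, and the X-update \(V\text {Diag}(\gamma )V^\mathsf{T}\). Since the explicit formula for \(\gamma \) appears in an equation not reproduced here, the expression below (each \(\gamma _i\) solving \(\beta \gamma ^2-\rho _i\gamma -1=0\), the standard form when \(-\log \det X\) is coupled with a quadratic proximal term) is our assumption, as are the function names.

```python
import numpy as np

def shrink(A, kappa):
    """Soft shrinkage operator: elementwise sign(A) * max(|A| - kappa, 0)."""
    return np.sign(A) * np.maximum(np.abs(A) - kappa, 0.0)

def psd_project(M):
    """L-update: project a symmetric matrix onto the PSD cone by
    truncating negative eigenvalues, i.e. V Diag(max(rho, 0)) V^T."""
    rho, V = np.linalg.eigh(M)
    return V @ np.diag(np.maximum(rho, 0.0)) @ V.T

def x_update(M, beta):
    """Assumed X-update: solve beta*X - X^{-1} = M for symmetric M via the
    eigendecomposition M = V Diag(rho) V^T; each gamma_i is the positive
    root of beta*g^2 - rho_i*g - 1 = 0, so the solution is V Diag(gamma) V^T."""
    rho, V = np.linalg.eigh(M)
    gamma = (rho + np.sqrt(rho**2 + 4.0 * beta)) / (2.0 * beta)
    return V @ np.diag(gamma) @ V.T
```

The positivity of \(\gamma _i\) guarantees that the X-iterate stays positive definite, which keeps the \(-\log \det \) term finite.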
5.2 Numerical results
In the following, we investigate the performance of several algorithms for solving the LVGGMS problem, where all the corresponding subproblems can be solved in a similar way as shown in the above analysis. For all algorithms, the maximal number of iterations is set to 1000, the starting values are set to \((X^0,S^0, L^0,\varLambda ^0)=(\mathbf I ,\mathbf 2I ,\mathbf I ,\mathbf 0 )\), and, motivated by Remark 2, the following stopping conditions are used
together with \(\text {CER(k)}:=\left\| X^{k}-S^{k}+L^{k}\right\| _F\le 10^{-4}\), where \(F^*\) is the approximate optimal objective function value obtained by running GS-ADMM (88) for 1000 iterations. In (87), we set \((\nu ,\mu )=(0.005,0.05)\), and the data C is randomly generated with \(m=100\) by MATLAB code downloaded from S. Boyd's homepage:
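As a small illustration (the relative-error tests TOL and Tol appear in equations not reproduced here, so only the constraint violation is shown), the feasibility measure \(\text {CER(k)}\) can be computed as follows; the function name is ours.

```python
import numpy as np

def constraint_error(X, S, L):
    """CER(k) = ||X - S + L||_F, the violation of the constraint X - S + L = 0."""
    return np.linalg.norm(X - S + L, ord='fro')

# With the starting values (X^0, S^0, L^0) = (I, 2I, I) used in the experiments,
# the initial iterate is exactly feasible: I - 2I + I = 0.
n = 5
X0, S0, L0 = np.eye(n), 2.0 * np.eye(n), np.eye(n)
feasible = constraint_error(X0, S0, L0) <= 1e-4
```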
5.2.1 Performance of different versions of GS-ADMM
In the following, we denote
First, we would like to investigate the performance of the above different versions of GS-ADMM for solving the LVGGMS problem as the penalty parameter \(\beta \) varies. The results are reported in Table 1 with \(\text {TOL}=\text {Tol}=1.0\times 10^{-7}\) and \((\tau ,s)=(0.8,1.17)\); for GS-ADMM-I and GS-ADMM-II, \((\sigma _1,\sigma _2)=(2,3)\). Here, “Iter” and “CPU” denote the iteration number and the CPU time in seconds, and bold entries indicate the best result of each algorithm. From Table 1, we can observe that:
-
Both the iteration number and the CPU time of all the algorithms follow a similar pattern, first decreasing and then increasing as the value of \(\beta \) decreases.
-
For the same value of \(\beta \), the results of GS-ADMM-III are slightly better than those of the other algorithms in terms of the iteration number, the CPU time, and the feasibility errors CER, IER and OER.
-
GS-ADMM-III with \(\beta =0.5\) terminates after 579 iterations upon reaching the tolerance \(10^{-7}\), while the other algorithms with \(\beta =0.5\) fail to achieve this tolerance within the given number of iterations.
In general, the algorithm (88) with \(\beta = 0.06\) performs better than the other cases. Hence, in the following experiments, we adopt GS-ADMM-III with the default \(\beta = 0.06\). Also note that \(\sigma _2 =0\) here, which is not allowed by the algorithms discussed in [15].
Second, we investigate how different values of the stepsizes \((\tau ,s)\in {\mathcal {G}}\) affect the performance of GS-ADMM-III. Table 2 reports the comparison results for varying \((\tau ,s)\) with \(\text {TOL}=\text {Tol}=1.0\times 10^{-5}\). One obvious observation from Table 2 is that both the iteration number and the CPU time decrease as s (or \(\tau \)) increases for a fixed value of \(\tau \) (or s), which indicates that the stepsizes \((\tau ,s)\in {\mathcal {G}}\) can influence the performance of GS-ADMM significantly. In addition, the results in Table 2 indicate that using relatively larger stepsizes \(\tau \) and s for the dual variable often gives the best convergence speed. Comparing all the reported results in Table 2, GS-ADMM-III with \((\tau ,s)=(0.9,1.09)\) gives the best overall performance for solving the problem (87).
5.2.2 Comparison of GS-ADMM with other state-of-the-art algorithms
In this subsection, we compare GS-ADMM-III with the following four methods for solving the problem (87):
-
The Proximal Jacobian Decomposition of ALM [17] (denoted by “PJALM”);
-
The splitting method in [14] (denoted by “HTY”);
-
The generalized parameterized proximal point algorithm [1] (denoted by “GR-PPA”);
-
The twisted version of the proximal ADMM [23] (denoted by “T-ADMM”).
We set \((\tau ,s)=(0.9,1.09)\) for GS-ADMM-III and the parameter \(\beta = 0.05\) for all the comparison algorithms. The default parameters \(\mu =2.01\) and \(H = \beta {\mathbf{I}}\) are used for HTY [14]. As suggested by the theory and numerical experiments in [17], the proximal parameter is set as 2 for PJALM. As shown in [1], the relaxation factor of GR-PPA is set as 1.8, and the other default parameters are chosen as
For T-ADMM, the symmetric matrices therein are chosen as \(M_1=M_2=v\mathbf I \) with \(v=\beta \), and the correction factor is set as \(a=1.6\) [23]. The results obtained by the above algorithms under different tolerances are reported in Table 3. With fixed tolerances \(\text {TOL}=10^{-9}\) and \(\text {Tol}=10^{-15}\), the convergence behavior of the error measurements IER(k) and OER(k) for the five algorithms using different starting points is shown in Figs. 3, 4 and 5. From Table 3 and Figs. 3, 4 and 5, we have the following observations:
-
Under all the different tolerances, GS-ADMM-III performs significantly better than the other four algorithms in both the number of iterations and the CPU time.
-
GR-PPA is slightly better than PJALM and HTY, and T-ADMM outperforms PJALM, HTY and GR-PPA.
-
The convergence curves in Figs. 3, 4 and 5 illustrate that, for different starting points, GS-ADMM-III also converges fastest among the compared methods.
All these numerical results demonstrate the effectiveness and robustness of GS-ADMM-III, which is perhaps due to the symmetric updating of the Lagrange multipliers and the proper choice of the stepsizes.
6 Conclusion
Since the direct extension of ADMM in a Gauss–Seidel fashion for solving the three-block separable convex optimization problem is not necessarily convergent, as analyzed by Chen et al. [3], there has been a constantly increasing interest in developing and improving the theory of ADMM for solving multi-block separable convex optimization. In this paper, we propose an algorithm, called GS-ADMM, which solves the general model (1) by taking advantage of its multi-block structure. In our GS-ADMM, a Gauss–Seidel scheme is used for updating the two grouped variables, while the block variables within each group are updated in a Jacobi scheme, which makes the algorithm attractive and effective for solving large-scale problems. We provide a new convergence domain for the stepsizes of the dual variables, which is significantly larger than the convergence domains given in the literature. Global convergence as well as the \({\mathcal {O}}(1/t)\) ergodic convergence rate of GS-ADMM is established. In addition, two special cases of GS-ADMM, which allow one of the proximal parameters to be zero, are also discussed.
This paper simplifies the analysis in [16] and provides an easy way to analyze the convergence of the symmetric ADMM. Our preliminary numerical experiments show that, with a proper choice of parameters, the performance of GS-ADMM can be very promising. Besides, from the presented convergence analysis, we can see that the theory in this paper extends naturally to more general proximal terms, such as letting \(P_i^k(x_i) := \frac{\beta }{2}\Vert x_i - x_i^k\Vert _{{\mathbf{P}}_i}\) and \(Q_j^k(y_j) := \frac{ \beta }{2}\Vert y_j - y_j^k\Vert _{{\mathbf{Q}}_j}\) in (7), where \({\mathbf{P}}_i\) and \({\mathbf{Q}}_j\) are matrices such that \({\mathbf{P}}_i \succ (p-1) A_i ^\mathsf{T}A_i\) and \({\mathbf{Q}}_j \succ (q-1) B_j ^\mathsf{T}B_j\) for all \(i=1, \ldots , p\) and \(j=1, \ldots , q\). Finally, the different ways of partitioning the variables of the problem also give GS-ADMM extra flexibility.
References
Bai, J.C., Li, J.C., Li, J.F.: A novel parameterized proximal point algorithm with applications in statistical learning. Optimization Online, March (2017) http://www.optimization-online.org/DB_HTML/2017/03/5901.html
Chandrasekaran, V., Parrilo, P.A., Willsky, A.S.: Latent variable graphical model selection via convex optimization. Ann. Stat. 40, 1935–1967 (2012)
Chen, C.H., He, B.S., Ye, Y.Y., Yuan, X.M.: The direct extension of ADMM for multi-block minimization problems is not necessarily convergent. Math. Program. 155, 57–79 (2016)
Dong, B., Yu, Y., Tian, D.D.: Alternating projection method for sparse model updating problems. J. Eng. Math. 93, 159–173 (2015)
Fortin, M., Glowinski, R.: Augmented Lagrangian Methods: Applications to the Numerical Solution of Boundary-Value Problems, pp. 299–331. North-Holland, Amsterdam (1983)
Facchinei, F., Pang, J.S.: Finite-Dimensional Variational Inequalities and Complementarity Problems. Springer Ser. Oper. Res. 1, Springer, New York (2003)
Fortin, M.: Numerical Methods for Nonlinear Variational Problems. Springer, New York (1984)
Glowinski, R., Marrocco, A.: Sur l'approximation, par éléments finis d'ordre un, et la résolution, par pénalisation-dualité, d'une classe de problèmes de Dirichlet non linéaires. Rev. Fr. Autom. Inform. Rech. Opér. Anal. Numér. 2, 41–76 (1975)
Gu, Y., Jiang, B., Han, D.: A semi-proximal-based strictly contractive Peaceman-Rachford splitting method. arXiv:1506.02221 (2015)
Hestenes, M.R.: Multiplier and gradient methods. J. Optim. Theory Appl. 4, 303–320 (1969)
He, B.S., Yuan, X.M.: On the \(\cal{O}(1/n)\) convergence rate of the Douglas–Rachford alternating direction method. SIAM J. Numer. Anal. 50, 700–709 (2012)
He, B.S., Liu, H., Wang, Z.R., Yuan, X.M.: A strictly contractive Peaceman–Rachford splitting method for convex programming. SIAM J. Optim. 24, 1011–1040 (2014)
He, B.S., Hou, L.S., Yuan, X.M.: On full Jacobian decomposition of the augmented Lagrangian method for separable convex programming. SIAM J. Optim. 25, 2274–2312 (2015)
He, B.S., Tao, M., Yuan, X.M.: A splitting method for separable convex programming. IMA J. Numer. Anal. 31, 394–426 (2015)
He, B.S., Yuan, X.M.: Block-wise alternating direction method of multipliers for multiple-block convex programming and beyond. SIAM J. Comput. Math. 1, 145–174 (2015)
He, B.S., Ma, F., Yuan, X.M.: Convergence study on the symmetric version of ADMM with larger step sizes. SIAM J. Imaging Sci. 9, 1467–1501 (2016)
He, B.S., Xu, H.K., Yuan, X.M.: On the proximal Jacobian decomposition of ALM for multiple-block separable convex minimization problems and its relationship to ADMM. J. Sci. Comput. 66, 1204–1217 (2016)
He, B.S.: On the convergence properties of alternating direction method of multipliers. Numer. Math. J. Chin. Univ. (Chin. Ser.) 39, 81–96 (2017)
Liu, Z.S., Li, J.C., Li, G., Bai, J.C., Liu, X.N.: A new model for sparse and low-rank matrix decomposition. J. Appl. Anal. Comput. 7, 600–616 (2017)
Ma, S.Q.: Alternating proximal gradient method for convex minimization. J. Sci. Comput. 68, 546–572 (2016)
Rothman, A.J., Bickel, P.J., Levina, E., Zhu, J.: Sparse permutation invariant covariance estimation. Electron. J. Stat. 2, 494–515 (2008)
Tao, M., Yuan, X.M.: Recovering low-rank and sparse components of matrices from incomplete and noisy observations. SIAM J. Optim. 21, 57–81 (2011)
Wang, J.J., Song, W.: An algorithm twisted from generalized ADMM for multi-block separable convex minimization models. J. Comput. Appl. Math. 309, 342–358 (2017)
Acknowledgements
The authors would like to thank the anonymous referees for providing very constructive comments. Jianchao Bai also wishes to thank Prof. Defeng Sun at the National University of Singapore for his valuable discussions on ADMM and Prof. Pingfan Dai at Xi'an Jiaotong University for discussions on an early version of the paper.
The work was supported by the National Science Foundation of China under Grants 11671318 and 11571178, the Science Foundation of Fujian province of China under Grant 2016J01028, and the National Science Foundation of USA under Grant 1522654.
Bai, J., Li, J., Xu, F. et al. Generalized symmetric ADMM for separable convex optimization. Comput Optim Appl 70, 129–170 (2018). https://doi.org/10.1007/s10589-017-9971-0
Keywords
- Separable convex programming
- Multiple blocks
- Parameter convergence domain
- Alternating direction method of multipliers
- Global convergence
- Complexity
- Statistical learning