1 Introduction

We consider the following grouped multi-block separable convex programming problem

$$\begin{aligned} \begin{array}{lll} \min &{} \sum \limits _{i=1}^{p}f_i(x_i)+ \sum \limits _{j=1}^{q}g_j(y_j) \\ \text {s.t. } &{} \sum \limits _{i=1}^{p}A_i x_i+\sum \limits _{j=1}^{q}B_jy_j=c,\\ &{} x_i\in \mathcal {X}_i, \; i=1, \ldots , p, \\ &{} y_j\in \mathcal {Y}_j, \; j =1, \ldots , q, \\ \end{array} \end{aligned}$$
(1)

where \(f_i(x_i):\mathcal {R}^{m_i}\rightarrow \mathcal {R},\ g_j(y_j):\mathcal {R}^{d_j}\rightarrow \mathcal {R}\) are closed and proper convex functions (possibly nonsmooth); \(A_i\in \mathcal {R}^{n\times m_i}, B_j\in \mathcal {R}^{n\times d_j}\) and \( c\in \mathcal {R}^{n}\) are given matrices and a given vector, respectively; \(\mathcal {X}_i\subset \mathcal {R}^{m_i}\) and \(\mathcal {Y}_j\subset \mathcal {R}^{d_j} \) are closed convex sets; \(p \ge 1\) and \(q \ge 1\) are two integers. Throughout this paper, we assume that the solution set of the problem (1) is nonempty and that all the matrices \(A_i\), \(i=1,\ldots , p\), and \(B_j\), \(j=1,\ldots ,q\), have full column rank. In the following, we denote \(\mathcal {A}=\left( A_1, \ldots ,A_p\right) ,\mathcal {B}=\left( B_1, \ldots ,B_q\right) \), \({\mathbf{x}} = (x_1 ^\mathsf{T}, \ldots , x_p ^\mathsf{T}) ^\mathsf{T}\), \({\mathbf{y}} = (y_1 ^\mathsf{T}, \ldots , y_q ^\mathsf{T}) ^\mathsf{T}\), \({\mathcal {X}} = {\mathcal {X}}_1 \times {\mathcal {X}}_2 \times \cdots \times {\mathcal {X}}_p \), \({\mathcal {Y}} = {\mathcal {Y}}_1 \times {\mathcal {Y}}_2 \times \cdots \times {\mathcal {Y}}_q \) and \(\mathcal {M}=\mathcal {X} \times \mathcal {Y} \times \mathcal {R}^n\).

In the last few years, the problem (1) has been extensively investigated due to its wide applications in different fields, such as the sparse inverse covariance estimation problem [21] in finance and statistics, the model updating problem [4] in the design of vibration structural dynamic systems and bridges, the low-rank and sparse representation problem [19] in image processing, and so forth. One standard way to solve the problem (1) is the classical Augmented Lagrangian Method (ALM) [10], which minimizes the following augmented Lagrangian function

$$\begin{aligned} \mathcal {L}_\beta \left( {\mathbf{x}}, {\mathbf{y}},\lambda \right) =L\left( {\mathbf{x}}, {\mathbf{y}},\lambda \right) +\frac{\beta }{2} \Vert {\mathcal {A}} {\mathbf{x}} + {\mathcal {B}} {\mathbf{y}} -c\Vert ^2, \end{aligned}$$

where \(\beta >0\) is a penalty parameter for the equality constraint and

$$\begin{aligned} L\left( {\mathbf{x}}, {\mathbf{y}},\lambda \right) =\sum \limits _{i=1}^{p}f_i(x_i)+ \sum \limits _{j=1}^{q}g_j(y_j) - \langle \lambda , {\mathcal {A}} {\mathbf{x}} + {\mathcal {B}} {\mathbf{y}} -c \rangle \end{aligned}$$
(2)

is the Lagrangian function of the problem (1) with the Lagrange multiplier \(\lambda \in \mathcal {R}^{n}\). Then, the ALM procedure for solving (1) can be described as follows:

$$\begin{aligned} \left\{ \begin{array}{l} \left( {\mathbf{x}}^{k+1},{\mathbf{y}}^{k+1}\right) =\arg \min \left\{ \mathcal {L}_\beta \left( {\mathbf{x}},{\mathbf{y}}, \lambda ^k\right) |\ {\mathbf{x}} \in \mathcal {X}, {\mathbf{y}} \in \mathcal {Y}\right\} ,\\ \lambda ^{k+1}=\lambda ^{k}-\beta ({\mathcal {A}}{\mathbf{x}}^{k+1} + {\mathcal {B}}{\mathbf{y}}^{k+1} -c). \end{array}\right. \end{aligned}$$
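
For concreteness, the ALM loop can be sketched in a few lines of Python; here `argmin_xy` is a hypothetical user-supplied routine returning a joint minimizer of \(\mathcal {L}_\beta (\cdot , \cdot , \lambda ^k)\) over \(\mathcal {X}\times \mathcal {Y}\), and `A`, `B` are callables returning the products \(\mathcal {A}{\mathbf{x}}\) and \(\mathcal {B}{\mathbf{y}}\). This is only an illustration of the iteration, not a complete solver.

```python
import numpy as np

def alm(argmin_xy, A, B, c, beta, lam0, max_iter=100, tol=1e-8):
    """Minimal sketch of the classical ALM iteration for problem (1)."""
    lam = lam0
    x = y = None
    for _ in range(max_iter):
        # joint (x, y)-subproblem: minimize L_beta(x, y, lam) over X x Y
        x, y = argmin_xy(lam)
        r = A(x) + B(y) - c          # residual of the linear constraint
        lam = lam - beta * r         # multiplier update
        if np.linalg.norm(r) < tol:
            break
    return x, y, lam
```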

However, ALM does not make full use of the separable structure of the objective function of (1) and hence cannot take advantage of the special properties of the component objective functions \(f_i\) and \(g_j\) in (1). As a result, in many recent applications involving big data, solving the subproblems of ALM becomes very expensive.

One effective approach to overcome this difficulty is the Alternating Direction Method of Multipliers (ADMM), which was originally proposed in [8] and can be regarded as a splitting version of ALM. At each iteration, ADMM first minimizes over each block variable sequentially while fixing all the other block variables, and then updates the Lagrange multiplier. A natural extension of ADMM for solving the multi-block problem (1) takes the following iterations:

$$\begin{aligned} \left\{ \begin{array}{lll} \text {For}\ i=1,2,\ldots ,p,\\ \qquad x_{i}^{k+1}=\arg \min \limits _{x_{i}\in \mathcal {X}_{i}} \mathcal {L}_\beta \left( x_1^{k+1},\ldots ,x_{i-1}^{k+1}, x_i,x_{i+1}^k,\ldots , x_{p}^k,{\mathbf{y}}^k,\lambda ^k\right) ,\\ \text {For}\ j=1,2,\ldots ,q,\\ \qquad y_{j}^{k+1}=\arg \min \limits _{y_{j}\in \mathcal {Y}_{j}} \mathcal {L}_\beta \left( {\mathbf{x}}^{k+1},y_1^{k+1},\ldots ,y_{j-1}^{k+1}, y_j,y_{j+1}^k,\ldots , y_{q}^k,\lambda ^k\right) ,\\ \lambda ^{k+1}=\lambda ^k-\beta \left( \mathcal {A}{\mathbf{x}}^{k+1}+\mathcal {B}{\mathbf{y}}^{k+1}-c\right) . \end{array}\right. \end{aligned}$$
(3)

Obviously, the scheme (3) is a serial algorithm which uses the newest information of the variables at each iteration. Although the above scheme was proved to be convergent for the two-block, i.e., \(p=q=1\), separable convex minimization (see [11]), as shown in [3], the direct extension of ADMM (3) for the multi-block case, i.e., \(p+q \ge 3\), without proper modifications is not necessarily convergent. Another natural extension of ADMM is to use the Jacobian fashion, where the variables are updated simultaneously after each iteration, that is,

$$\begin{aligned} \left\{ \begin{array}{lll} \text {For}\ i=1,2,\ldots ,p,\\ \qquad x_{i}^{k+1}=\arg \min \limits _{x_{i}\in \mathcal {X}_{i}} \mathcal {L}_\beta \left( x_1^k,\ldots ,x_{i-1}^k, x_i,x_{i+1}^k,\ldots , x_{p}^k, {\mathbf{y}}^k,\lambda ^k\right) ,\\ \text {For}\ j=1,2,\ldots ,q,\\ \qquad y_{j}^{k+1}=\arg \min \limits _{y_{j}\in \mathcal {Y}_{j}} \mathcal {L}_\beta \left( {\mathbf{x}}^k,y_1^k,\ldots ,y_{j-1}^k, y_j,y_{j+1}^k,\ldots , y_{q}^k,\lambda ^k\right) ,\\ \lambda ^{k+1}=\lambda ^k-\beta \left( \mathcal {A}{\mathbf{x}}^{k+1}+\mathcal {B}{\mathbf{y}}^{k+1}-c\right) . \end{array}\right. \end{aligned}$$
(4)
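
The only difference between (3) and (4) is which iterates each block subproblem sees. The schematic below (a sketch only; `argmin_xi` stands for a hypothetical solver of the \(i\)-th \(x\)-subproblem, and the \(y\)-updates are analogous) makes the contrast explicit.

```python
# x-update of the Gauss-Seidel scheme (3): block i already uses the new x_1,...,x_{i-1}
def gauss_seidel_x_update(x, y, lam, argmin_xi):
    x = list(x)                          # copy of x^k, overwritten block by block
    for i in range(len(x)):
        x[i] = argmin_xi(i, x, y, lam)   # sees the freshest blocks
    return x

# x-update of the Jacobi scheme (4): every block only sees the old iterate x^k
def jacobi_x_update(x, y, lam, argmin_xi):
    x_old = list(x)                      # frozen copy of x^k
    return [argmin_xi(i, x_old, y, lam) for i in range(len(x_old))]
```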

As shown in [13], however, the Jacobian scheme (4) is not necessarily convergent either. To ensure convergence, He et al. [14] proposed a novel ADMM-type splitting method that, by adding certain proximal terms, allows some of the subproblems to be solved in parallel, i.e., in a Jacobian fashion. In [14], some sparse low-rank models and image inpainting problems were tested to verify the efficiency of their method.

More recently, a Symmetric ADMM (S-ADMM) was proposed by He et al. [16] for solving the two-block separable convex minimization, where the algorithm performs the following updating scheme:

$$\begin{aligned} \left\{ \begin{array}{lll} {\mathbf{x}}^{k+1}=\arg \min \left\{ \mathcal {L}_\beta \left( {\mathbf{x}},{\mathbf{y}}^k,\lambda ^k\right) \ | \ {\mathbf{x}} \in \mathcal {X} \right\} ,\\ \lambda ^{k+\frac{1}{2}}=\lambda ^k- \tau \beta \left( {\mathcal {A}} {\mathbf{x}}^{k+1}+ {\mathcal {B}} {\mathbf{y}} ^{k}-c\right) ,\\ {\mathbf{y}}^{k+1}=\arg \min \left\{ \mathcal {L}_\beta ( {\mathbf{x}}^{k+1},{\mathbf{y}},\lambda ^{k+\frac{1}{2}}) \ | \ {\mathbf{y}} \in \mathcal {Y}\right\} ,\\ \lambda ^{k+1}=\lambda ^{k+\frac{1}{2}}-s\beta \left( {\mathcal {A}} {\mathbf{x}}^{k+1}+{\mathcal {B}} {\mathbf{y}}^{k+1}-c\right) , \end{array}\right. \end{aligned}$$
(5)

and the stepsizes \((\tau ,s)\) were restricted to the domain

$$\begin{aligned} \mathcal {H}=\left\{ (\tau ,s) \ | \ s \in (0, (\sqrt{5} + 1)/2 ), \ \tau +s >0, \ \tau \in (-1,1), \ |\tau |<1+s-s^2 \right\} \end{aligned}$$
(6)

in order to ensure its global convergence. The main improvement of [16] is that the scheme (5) largely extends the domain of the stepsizes \((\tau ,s)\) used in other ADMM-type methods [12]. What’s more, the numerical performance of S-ADMM on the widely used basis pursuit model and the total-variation image deblurring model significantly outperforms the original ADMM in both CPU time and number of iterations. Besides, Gu et al. [9] studied a semi-proximal-based strictly contractive Peaceman-Rachford splitting method, that is, (5) with two additional proximal penalty terms for the x- and y-updates. However, their method has a nonsymmetric convergence domain for the stepsizes and still focuses on the two-block problem, which limits its applications to large-scale problems with multiple block variables.

Mainly motivated by the work of [9, 14, 16], we would like to generalize S-ADMM, with a wider convergence domain of the stepsizes, to tackle the multi-block separable convex programming model (1), which appears frequently in recent applications involving big data [2, 20]. Our algorithm framework can be described as follows:

$$\begin{aligned} (\mathbf GS-ADMM )\ \left\{ \begin{array}{lll} \text {For}\ i=1,2,\ldots ,p,\\ \quad x_{i}^{k+1}=\arg \min \limits _{x_{i}\in \mathcal {X}_{i}} \mathcal {L}_\beta (x_1^k,\ldots , x_i,\ldots ,x_p^k,{\mathbf{y}}^k,\lambda ^k) +P_i^k(x_i), \\ \quad \text {where } P_i^k(x_i) = \frac{\sigma _1\beta }{2}\left\| A_{i}(x_{i}-x_{i}^k)\right\| ^2,\\ \lambda ^{k+\frac{1}{2}}=\lambda ^k-\tau \beta (\mathcal {A} {\mathbf{x}}^{k+1}+\mathcal {B}{\mathbf{y}}^{k}-c),\\ \text {For}\ j=1,2,\ldots ,q,\\ \quad y_{j}^{k+1}=\arg \min \limits _{y_{j}\in \mathcal {Y}_{j}} \mathcal {L}_\beta ({\mathbf{x}}^{k+1},y_1^k,\ldots , y_j,\ldots ,y_q^k,\lambda ^{k+\frac{1}{2}}) + Q_j^k(y_j), \\ \quad \text {where } Q_j^k(y_j) = \frac{\sigma _2\beta }{2}\left\| B_{j}(y_{j}-y_{j}^k)\right\| ^2,\\ \lambda ^{k+1}=\lambda ^{k+\frac{1}{2}}-s\beta (\mathcal {A}{\mathbf{x}}^{k+1}+\mathcal {B}{\mathbf{y}}^{k+1}-c). \end{array}\right. \end{aligned}$$
(7)

In the above generalized symmetric ADMM (GS-ADMM), \(\tau \) and s are two stepsize parameters satisfying

$$\begin{aligned} (\tau , s) \in {\mathcal {K}} = \left\{ (\tau , s) \ | \ \tau + s>0, \ \tau \le 1,\ -\tau ^2 - s^2 -\tau s + \tau + s + 1 >0 \right\} , \end{aligned}$$
(8)

and \(\sigma _1\in (p-1,+\infty ),\sigma _2\in (q-1,+\infty )\) are two proximal parameters for the regularization terms \(P_i^k(\cdot )\) and \(Q_j^k(\cdot )\). He and Yuan [15] also investigated the above GS-ADMM (7) but restricted the stepsizes to \(\tau =s\in (0,1)\), which does not exploit the advantages of using the flexible stepsizes given in (8) to improve its convergence.
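
For concreteness, one iteration of GS-ADMM (7) can be sketched as follows, where `argmin_xi` and `argmin_yj` are hypothetical solvers of the proximal \(x_i\)- and \(y_j\)-subproblems in (7) and `A`, `B` are callables returning \(\mathcal {A}{\mathbf{x}}\) and \(\mathcal {B}{\mathbf{y}}\). Since the blocks within each group only use information from the previous iterate (respectively, from \({\mathbf{x}}^{k+1}\) and \(\lambda ^{k+\frac{1}{2}}\)), the two list comprehensions may be evaluated in parallel.

```python
def gs_admm_step(x, y, lam, A, B, c, beta, tau, s, argmin_xi, argmin_yj):
    """Sketch of one GS-ADMM iteration (7); argmin_xi / argmin_yj are
    hypothetical solvers of the proximal x_i- and y_j-subproblems."""
    # x-group: Jacobi updates, all blocks use only (x^k, y^k, lambda^k)
    x_new = [argmin_xi(i, x, y, lam) for i in range(len(x))]
    # intermediate multiplier update with stepsize tau
    lam_half = lam - tau * beta * (A(x_new) + B(y) - c)
    # y-group: Jacobi updates based on (x^{k+1}, y^k, lambda^{k+1/2})
    y_new = [argmin_yj(j, x_new, y, lam_half) for j in range(len(y))]
    # final multiplier update with stepsize s
    lam_new = lam_half - s * beta * (A(x_new) + B(y_new) - c)
    return x_new, y_new, lam_new
```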

Fig. 1 Stepsize region \(\mathcal {K}\) of GS-ADMM

Fig. 2 Stepsize region \(\mathcal {G}\) of GS-ADMM
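
The two stepsize regions can be probed numerically with the membership tests below, a small sketch of the defining inequalities (6) and (8) (the enlarged region \(\mathcal {G}\) of Fig. 2 is only defined later, in (73), and is omitted here). For instance, the point \((\tau ,s)=(-1/3,1.6)\) belongs to \(\mathcal {K}\) but not to \(\mathcal {H}\).

```python
import numpy as np

def in_H(tau, s):
    """Membership test for the region H in (6) used by S-ADMM."""
    return (0 < s < (np.sqrt(5) + 1) / 2 and tau + s > 0
            and -1 < tau < 1 and abs(tau) < 1 + s - s ** 2)

def in_K(tau, s):
    """Membership test for the region K in (8) used by GS-ADMM."""
    return tau + s > 0 and tau <= 1 and -tau ** 2 - s ** 2 - tau * s + tau + s + 1 > 0

print(in_K(-1 / 3, 1.6), in_H(-1 / 3, 1.6))   # True False
```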

Major contributions of this paper can be summarized as the following four aspects:

  • Firstly, the new GS-ADMM can deal with the multi-block separable convex programming problem (1), while the original S-ADMM in [16] only works for the two-block case and may not be convenient for solving large-scale problems. In addition, the convergence domain \(\mathcal {K}\) for the stepsizes \((\tau , s)\) in (8), shown in Fig. 1, is significantly larger than the domain \(\mathcal {H}\) given in (6) and the convergence domains in [9, 15]. For example, the stepsize s can be arbitrarily close to 5/3 when the stepsize \(\tau \) is close to \(-1/3\). Moreover, the domain in (8) is later enlarged to a symmetric domain \({\mathcal {G}}\) defined in (73), shown in Fig. 2. Numerical experiments in Sect. 5.2.1 also validate that using more flexible and relatively larger stepsizes \((\tau ,s)\) can often improve the convergence speed of GS-ADMM. On the other hand, we can see that when \(\tau =0\), the stepsize s can be chosen in the interval \((0, (\sqrt{5}+1)/2)\), which was first suggested by Fortin and Glowinski in [5, 7].

  • Secondly, the global convergence of GS-ADMM as well as its worst-case \({\mathcal {O}}(1/t)\) ergodic convergence rate are established. What’s more, the total \(p+q\) block variables are partitioned into two grouped variables. While a Gauss–Seidel fashion is taken between the two grouped variables, the block variables within each group are updated in a Jacobi scheme. Hence, parallel computing can be implemented for updating the variables within each group, which could be critical in some scenarios for problems involving big data.

  • Thirdly, we discuss two special cases of GS-ADMM, namely (7) with \(p\ge 1, q=1\) and \(\sigma _2=0\), or with \(p=1,q\ge 1\) and \(\sigma _1=0\). These two special cases of GS-ADMM were not discussed in [15] and in fact, to the best of our knowledge, they have not been studied in the literature. We show that the convergence domain of the stepsizes \((\tau , s)\) for these two cases is still the region \({\mathcal {K}}\) defined in (8), which is larger than \(\mathcal {H}\).

  • Finally, numerical experiments are performed on a sparse matrix optimization problem arising from statistical learning. We investigate the effects of the stepsizes \((\tau ,s)\) and the penalty parameter \(\beta \) on the performance of GS-ADMM. Our numerical experiments demonstrate that, by properly choosing the parameters, GS-ADMM can perform significantly better than other recently popular methods developed in [1, 14, 17, 23].

The paper is organized as follows. In Sect. 2, some preliminaries are given to reformulate the problem (1) into a variational inequality and to interpret the GS-ADMM (7) as a prediction–correction procedure. Section 3 investigates some properties of \(\Vert {\mathbf{w}}^k-{\mathbf{w}}^*\Vert _H^2\) and provides a lower bound of \(\Vert {\mathbf{w}}^{k}-\widetilde{{\mathbf{w}}}^k\Vert _G^2\), where H and G are particular symmetric matrices. Then, we establish the global convergence of GS-ADMM and show its convergence rate in an ergodic sense. In Sect. 4, we discuss two special cases of GS-ADMM, in which either of the proximal parameters \(\sigma _1\) and \(\sigma _2\) is allowed to be zero. Some preliminary numerical experiments are reported in Sect. 5. We finally draw some conclusions in Sect. 6.

1.1 Notation

Denote by \(\mathcal {R}, \mathcal {R}^n, \mathcal {R}^{m\times n}\) the set of real numbers, the set of n-dimensional real column vectors and the set of \(m\times n\) real matrices, respectively. For any \(x, y\in \mathcal {R}^n\), \(\langle x, y\rangle =x ^\mathsf{T}y\) denotes their inner product and \(\Vert x\Vert =\sqrt{\langle x, x\rangle }\) denotes the Euclidean norm of x, where the superscript \(^\mathsf{T}\) denotes the transpose. Given a symmetric matrix G, we define \(\Vert x\Vert _G^2 = x ^\mathsf{T}G x\). Note that with this convention, \(\Vert x\Vert _G^2\) is not necessarily nonnegative unless G is positive semidefinite (\(G \succeq \mathbf 0 \)). For convenience, we use I and \(\mathbf{0}\) to stand, respectively, for the identity matrix and the zero matrix of proper dimensions throughout the paper.
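
A two-line numerical illustration of this convention (a sketch with an arbitrarily chosen indefinite G) is the following.

```python
import numpy as np

G = np.array([[1.0, 0.0], [0.0, -2.0]])   # symmetric but indefinite
x = np.array([1.0, 1.0])
print(x @ G @ x)                          # -1.0, so ||x||_G^2 can indeed be negative
```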

2 Preliminaries

In this section, we first use a variational inequality to characterize the solution set of the problem (1). Then, we show that GS-ADMM (7) can be treated as a prediction–correction procedure involving a prediction step and a correction step.

2.1 Variational reformulation of (1)

We begin with the following standard lemma whose proof can be found in [16, 18].

Lemma 1

Let \(f:\mathcal {R}^m\longrightarrow \mathcal {R}\) and \(h:\mathcal {R}^m\longrightarrow \mathcal {R}\) be two convex functions defined on a closed convex set \(\varOmega \subset \mathcal {R}^m\), where h is differentiable. Suppose that the solution set \(\varOmega ^* = \arg \min \limits _{x\in \varOmega }\{f(x)+h(x) \}\) is nonempty. Then, we have

$$\begin{aligned} x^* \in \varOmega ^* \ \text{ if } \text{ and } \text{ only } \text{ if } \ x^*\in \varOmega , \ f(x)-f(x^*) +\left\langle x-x^*, \nabla h(x^*)\right\rangle \ge 0, \forall x\in \varOmega . \end{aligned}$$

It is well-known in optimization that a triple point \(({\mathbf{x}}^*, {\mathbf{y}}^*, \lambda ^*) \in {\mathcal {M}}\) is called the saddle-point of the Lagrangian function (2) if it satisfies

$$\begin{aligned} L\left( {\mathbf{x}}^*,{\mathbf{y}}^*,\lambda \right) \le L\left( {\mathbf{x}}^*, {\mathbf{y}}^*, \lambda ^*\right) \le L\left( {\mathbf{x}}, {\mathbf{y}}, \lambda ^*\right) , \quad \forall ({\mathbf{x}}, {\mathbf{y}}, \lambda ) \in \mathcal {M} \end{aligned}$$

which can be also characterized as

$$\begin{aligned} \left\{ \begin{array}{lll} \text {For}\ i=1,2,\ldots ,p,\\ \qquad x_{i}^* = \arg \min \limits _{x_{i}\in \mathcal {X}_{i}} \mathcal {L}_\beta \left( x_1^*,\ldots ,x_{i-1}^*, x_i,x_{i+1}^*,\ldots , x_{p}^*, {\mathbf{y}}^*,\lambda ^*\right) ,\\ \text {For}\ j=1,2,\ldots ,q,\\ \qquad y_{j}^* = \arg \min \limits _{y_{j}\in \mathcal {Y}_{j}} \mathcal {L}_\beta \left( {\mathbf{x}}^*,y_1^*,\ldots ,y_{j-1}^*, y_j,y_{j+1}^*,\ldots , y_{q}^*,\lambda ^*\right) ,\\ \lambda ^* = \arg \max \limits _{ \lambda \in \mathcal {R}^n} \mathcal {L}_\beta \left( {\mathbf{x}}^*,{\mathbf{y}}^*,\lambda \right) . \end{array}\right. \end{aligned}$$

Then, by Lemma 1, the above saddle-point equations can be equivalently expressed as

$$\begin{aligned} \left\{ \begin{array}{lll} \text {For}\ i=1,2,\ldots ,p,\\ \qquad x_i^*\in \mathcal {X}_i, \quad f_i(x_i)-f_i(x_i^*)+\langle x_i-x_i^*, -A_i ^\mathsf{T}\lambda ^*\rangle \ge 0, \forall x_i\in \mathcal {X}_i,\\ \text {For}\ j=1,2,\ldots ,q,\\ \qquad y_j^*\in \mathcal {Y}_j, \quad g_j(y_j)-g_j(y_j^*)+\langle y_j-y_j^*, -B_j ^\mathsf{T}\lambda ^*\rangle \ge 0, \forall y_j\in \mathcal {Y}_j,\\ \lambda ^*\in \mathcal {R}^n, \quad \left\langle \lambda -\lambda ^*, {\mathcal {A}} {\mathbf{x}}^* + {\mathcal {B}} {\mathbf{y}}^* - c \right\rangle \ge 0, \forall \lambda \in \mathcal {R}^n. \end{array}\right. \end{aligned}$$
(9)

Rewriting (9) in a more compact variational inequality (VI) form, we have

$$\begin{aligned} h({\mathbf{u}})- h({\mathbf{u}}^*) + \left\langle {\mathbf{w}}-{\mathbf{w}}^*, \mathcal {J}({\mathbf{w}}^*)\right\rangle \ge 0,\quad \forall {\mathbf{w}}\in \mathcal {M}, \end{aligned}$$
(10)

where

$$\begin{aligned} h({\mathbf{u}})=\sum _{i=1}^{p}f_i (x_i) +\sum _{j=1}^{q}g_j (y_j) \end{aligned}$$

and

$$\begin{aligned} {\mathbf{u}}=\left( \begin{array}{c} {\mathbf{x}}\\ {\mathbf{y}}\\ \end{array}\right) ,\quad {\mathbf{w}}=\left( \begin{array}{c} {\mathbf{x}}\\ {\mathbf{y}}\\ \lambda \end{array}\right) , \quad \mathcal {J}({\mathbf{w}})=\left( \begin{array}{c} -{\mathcal {A}}^\mathsf{T}\lambda \\ -{\mathcal {B}}^\mathsf{T}\lambda \\ {\mathcal {A}} {\mathbf{x}} + {\mathcal {B}} {\mathbf{y}} - c \end{array}\right) . \end{aligned}$$

Noticing that the affine mapping \({\mathcal {J}}\) is skew-symmetric, we immediately get

$$\begin{aligned} \left\langle {\mathbf{w}}-\widehat{{\mathbf{w}}}, \mathcal {J}({\mathbf{w}})-\mathcal {J}(\widehat{{\mathbf{w}}})\right\rangle = 0, \quad \forall \ {\mathbf{w}}, \widehat{{\mathbf{w}}}\in \mathcal {M}. \end{aligned}$$
(11)
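
Identity (11) is easy to confirm numerically; the sketch below builds \(\mathcal {J}\) for random data with arbitrarily chosen small dimensions (one x-block and one y-block) and checks that the inner product vanishes up to rounding.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, d = 4, 3, 2                      # arbitrary small dimensions
A = rng.standard_normal((n, m))
B = rng.standard_normal((n, d))
c = rng.standard_normal(n)

def J(w):
    """The affine mapping of (10) for a single x-block and a single y-block."""
    x, y, lam = w[:m], w[m:m + d], w[m + d:]
    return np.concatenate([-A.T @ lam, -B.T @ lam, A @ x + B @ y - c])

w, w_hat = rng.standard_normal(m + d + n), rng.standard_normal(m + d + n)
print(np.allclose((w - w_hat) @ (J(w) - J(w_hat)), 0.0))   # True, as in (11)
```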

Hence, (10) can be also rewritten as

$$\begin{aligned} \text {VI}({\mathbf{h}}, \mathcal {J}, \mathcal {M}):\quad h({\mathbf{u}})- h({\mathbf{u}}^*) + \left\langle {\mathbf{w}}-{\mathbf{w}}^*, \mathcal {J}({\mathbf{w}})\right\rangle \ge 0,\quad \forall {\mathbf{w}} \in \mathcal {M}. \end{aligned}$$
(12)

Because of the nonemptiness assumption on the solution set of (1), the solution set \(\mathcal {M}^*\) of the variational inequality \(\text {VI}(h, \mathcal {J}, \mathcal {M})\) is also nonempty and convex; see, e.g., Theorem 2.3.5 in [6] for more details. The following theorem, taken from Theorem 2.1 in [11], provides a concrete way to characterize the set \(\mathcal {M}^*\).

Theorem 1

The solution set of the variational inequality \(\text {VI}(h, \mathcal {J}, \mathcal {M})\) is convex and can be expressed as

$$\begin{aligned} \mathcal {M}^*=\bigcap _{{\mathbf{w}}\in \mathcal {M}}\left\{ \widehat{{\mathbf{w}}}\in \mathcal {M}|\ h({\mathbf{u}})- h(\widehat{{\mathbf{u}}})+ \left\langle {\mathbf{w}}-\widehat{{\mathbf{w}}}, \mathcal {J}({\mathbf{w}})\right\rangle \ge 0\right\} . \end{aligned}$$

2.2 A prediction–correction interpretation of GS-ADMM

Following a similar approach in [16], we next interpret GS-ADMM as a prediction–correction procedure. First, let

$$\begin{aligned} \widetilde{{\mathbf{x}}}^k= & {} \left( \begin{array}{c} \widetilde{x}_1^k\\ \widetilde{x}_2^k\\ \vdots \\ \widetilde{x}_p^k \end{array}\right) =\left( \begin{array}{c} x_1^{k+1}\\ x_2^{k+1}\\ \vdots \\ x_p^{k+1} \end{array}\right) , \quad \widetilde{{\mathbf{y}}}^k=\left( \begin{array}{c} \widetilde{y}_1^k\\ \widetilde{y}_2^k\\ \vdots \\ \widetilde{y}_q^k \end{array}\right) =\left( \begin{array}{c} y_1^{k+1}\\ y_2^{k+1}\\ \vdots \\ y_q^{k+1} \end{array}\right) , \end{aligned}$$
(13)
$$\begin{aligned} \widetilde{\lambda }^k= & {} \lambda ^k-\beta \left( \mathcal {A}{\mathbf{x}}^{k+1}+\mathcal {B}{\mathbf{y}}^{k}-c\right) , \end{aligned}$$
(14)

and

$$\begin{aligned} \widetilde{{\mathbf{u}}}^k=\left( \begin{array}{c} \widetilde{{\mathbf{x}}}^k\\ \widetilde{{\mathbf{y}}}^k\\ \end{array}\right) ,\quad \widetilde{{\mathbf{w}}}^{k}=\left( \begin{array}{c} \widetilde{{\mathbf{x}}}^k\\ \widetilde{{\mathbf{y}}}^k\\ \widetilde{\lambda }^k \end{array}\right) = \left( \begin{array}{c} {\mathbf{x}}^{k+1}\\ {\mathbf{y}}^{k+1}\\ \widetilde{\lambda }^{k} \end{array}\right) . \end{aligned}$$
(15)

Then, by using the above notations, we derive the following basic lemma.

Lemma 2

For the iterates \(\widetilde{{\mathbf{u}}}^k, \widetilde{{\mathbf{w}}}^k\) defined in (15), we have \(\widetilde{{\mathbf{w}}}^k\in \mathcal {M}\) and

$$\begin{aligned} h({\mathbf{u}})-h(\widetilde{{\mathbf{u}}}^k)+ \left\langle {\mathbf{w}}-\widetilde{{\mathbf{w}}}^k, \mathcal {J}(\widetilde{{\mathbf{w}}}^k)+Q(\widetilde{{\mathbf{w}}}^k-{\mathbf{w}}^k)\right\rangle \ge 0, \quad \forall {\mathbf{w}}\in \mathcal {M}, \end{aligned}$$
(16)

where

$$\begin{aligned} Q=\left[ \begin{array}{cc} H_{\mathbf {x}} &{} \mathbf{0}\\ \mathbf{0} &{} \widetilde{Q} \end{array}\right] \end{aligned}$$
(17)

with

$$\begin{aligned} H_{\mathbf {x}}=\beta \,\left[ \begin{array}{ccccccccccccc} \sigma _1 A_1^\mathsf{T}A_1 &{}&{}&{}&{} - A_1^\mathsf{T}A_2 &{}&{}&{}&{} \cdots &{}&{}&{}&{}- A_1^\mathsf{T}A_{p} \\ \\ - A_2^\mathsf{T}A_1 &{}&{}&{}&{} \sigma _1 A_2^\mathsf{T}A_2 &{}&{}&{}&{} \cdots &{}&{}&{}&{}- A_2^\mathsf{T}A_{p} \\ \\ \vdots &{}&{}&{}&{} \vdots &{}&{}&{}&{}\ddots &{}&{}&{}&{}\vdots \\ \\ - A_p^\mathsf{T}A_1 &{}&{}&{}&{} - A_p^\mathsf{T}A_2 &{}&{}&{}&{}\cdots &{}&{}&{}&{}\sigma _1 A_p^\mathsf{T}A_p \end{array}\right] , \end{aligned}$$
(18)
and

$$\begin{aligned} \widetilde{Q}=\left[ \begin{array}{cccc} (\sigma _2+1)\beta B_1^\mathsf{T}B_1 &{} &{} &{} -\tau B_1^\mathsf{T} \\ &{} \ddots &{} &{} \vdots \\ &{} &{} (\sigma _2+1)\beta B_q^\mathsf{T}B_q &{} -\tau B_q^\mathsf{T} \\ -B_1 &{} \cdots &{} -B_q &{} \frac{1}{\beta }I \end{array}\right] . \end{aligned}$$
(19)

Proof

Omitting some constants, it is easy to verify that the \(x_i\)-subproblem \((i=1,2,\ldots ,p)\) of GS-ADMM can be written as

$$\begin{aligned} x_i^{k+1}=\arg \min \limits _{x_i\in \mathcal {X}_i}\left\{ f_i(x_i)\!-\langle \lambda ^k, A_ix_i \rangle +\frac{\beta }{2}\left\| A_ix_i - c_{x,i}\right\| ^2+\frac{\sigma _1\beta }{2}\left\| A_i(x_i-x_i^k)\right\| ^2 \right\} , \end{aligned}$$

where \(c_{x,i} = c - \sum \limits _{l=1,l\ne i}^{p}A_lx_l^k - \mathcal {B}{\mathbf{y}}^k\). Hence, by Lemma 1, we have \(x_i^{k+1}\in \mathcal {X}_i\) and

$$\begin{aligned}&f_i(x_i)-f_i(x_i^{k+1}) + \left\langle A_i (x_i-x_i^{k+1}), - \lambda ^k + \beta (A_ix_i^{k+1}-c_{x,i}) \right. \\&\quad \left. +\,\,\sigma _1\beta A_i (x_i^{k+1}-x_i^k ) \right\rangle \ge 0 \end{aligned}$$

for any \( x_i\in \mathcal {X}_i\). So, by the definition of (13) and (14), we get

$$\begin{aligned}&f_i(x_i)-f_i(\widetilde{x}_i^k) + \left\langle A_i (x_i-\widetilde{x}_i^k), - \widetilde{\lambda }^k -\beta \sum _{l=1,l\ne i}^p A_l (\widetilde{x}_l^k-x_l^k ) \right. \nonumber \\&\quad \left. +\,\,\sigma _1\beta A_i (\widetilde{x}_i^k-x_i^k ) \right\rangle \ge 0 \end{aligned}$$
(20)

for any \( x_i\in \mathcal {X}_i\). By the way of generating \(\lambda ^{k+\frac{1}{2}}\) in (7) and the definition of (14), the following relation holds

$$\begin{aligned} \lambda ^{k+\frac{1}{2}}=\lambda ^k-\tau (\lambda ^k- \widetilde{\lambda }^k\ ). \end{aligned}$$
(21)

Similarly, the \(y_j\)-subproblem (\(j=1,\ldots ,q\)) of GS-ADMM can be written as

$$\begin{aligned} y_j^{k+1}=\arg \min \limits _{y_j\in \mathcal {Y}_j}\left\{ g_j(y_j)\,{-}\,\left\langle \lambda ^{k+\frac{1}{2}}, B_jy_j\right\rangle {+}\,\frac{\beta }{2}\left\| B_jy_j - c_{y,j}\right\| ^2+\frac{\sigma _2\beta }{2}\left\| B_j(y_j-y_j^k)\right\| ^2 \right\} , \end{aligned}$$

where \(c_{y,j} = c - \mathcal {A}{\mathbf{x}}^{k+1} - \sum \limits _{l=1,l\ne j}^{q}B_ly_l^k\). Hence, by Lemma 1, we have \(y_j^{k+1}\in \mathcal {Y}_j\) and

$$\begin{aligned}&g_j(y_j)-g_j(y_j^{k+1})+ \left\langle B_j (y_j-y_j^{k+1}), - \lambda ^{k+\frac{1}{2}} + \beta (B_jy_j^{k+1} - c_{y,j})\right. \\&\quad \left. +\,\,\sigma _2\beta B_j(y_j^{k+1}-y_j^k) \right\rangle \ge 0 \end{aligned}$$

for any \(y_j \in {\mathcal {Y}}_j\). This inequality, by using (21) and the definition of (13) and (14), can be rewritten as

$$\begin{aligned} g_j(y_j)-g_j(\widetilde{y}_j^{k}) + \left\langle B_j (y_j-\widetilde{y}_j^{k}), - \widetilde{\lambda }^k+(\sigma _2+1) \beta B_j(\widetilde{y}_j^{k}-y_j^k)-\tau (\widetilde{\lambda }^k-\lambda ^k)\right\rangle \ge 0 \end{aligned}$$
(22)

for any \(y_j \in {\mathcal {Y}}_j\). Besides, the equality (14) can be rewritten as

$$\begin{aligned} \left( \mathcal {A}\widetilde{{\mathbf{x}}}^{k}+\mathcal {B}\widetilde{{\mathbf{y}}}^{k}-c\right) -\mathcal {B}\left( \widetilde{{\mathbf{y}}}^{k}-{\mathbf{y}}^k\right) +\frac{1}{\beta } (\widetilde{\lambda }^k-\lambda ^k)=0, \end{aligned}$$

which is equivalent to

$$\begin{aligned} \left\langle \lambda -\widetilde{\lambda }^k, (\mathcal {A}\widetilde{{\mathbf{x}}}^{k}+\mathcal {B}\widetilde{{\mathbf{y}}}^{k}-c)-\mathcal {B} (\widetilde{{\mathbf{y}}}^{k}-{\mathbf{y}}^k) +\frac{1}{\beta } (\widetilde{\lambda }^k-\lambda ^k)\right\rangle \ge 0,\ \forall \lambda \in \mathcal {R}^n. \end{aligned}$$
(23)

Then, (16) follows from (20), (22) and (23). \(\square \)

Lemma 3

For the sequences \(\{{\mathbf{w}}^k\}\) and \(\{\widetilde{{\mathbf{w}}}^k\}\) generated by GS-ADMM, the following equality holds

$$\begin{aligned} {\mathbf{w}}^{k+1}={\mathbf{w}}^k-M({\mathbf{w}}^k-\widetilde{{\mathbf{w}}}^k), \end{aligned}$$
(24)

where

$$\begin{aligned} M=\left[ \begin{array}{ccc} I &{} \mathbf{0} &{} \mathbf{0}\\ \mathbf{0} &{} I &{} \mathbf{0}\\ \mathbf{0} &{} -s\beta \mathcal {B} &{} (\tau +s)I \end{array}\right] . \end{aligned}$$
(25)

Proof

It follows from the way of generating \(\lambda ^{k+1}\) in the algorithm and (21) that

$$\begin{aligned} \begin{array}{lll} \lambda ^{k+1} &{}=&{} \lambda ^{k+\frac{1}{2}}-s\beta (\mathcal {A}{\mathbf{x}}^{k+1}+\mathcal {B}{\mathbf{y}}^{k+1}-c) \\ &{}=&{} \lambda ^{k+\frac{1}{2}}-s\beta (\mathcal {A}{\mathbf{x}}^{k+1}+\mathcal {B}{\mathbf{y}}^{k}-c)+s\beta \mathcal {B}({\mathbf{y}}^{k}-{\mathbf{y}}^{k+1}) \\ &{}=&{} \lambda ^k-\tau (\lambda ^k- \widetilde{\lambda }^k)-s(\lambda ^k- \widetilde{\lambda }^k)+s\beta \mathcal {B}({\mathbf{y}}^{k}-\widetilde{{\mathbf{y}}}^{k}) \\ &{}=&{} \lambda ^k-\left[ -s\beta \mathcal {B}({\mathbf{y}}^{k}-\widetilde{{\mathbf{y}}}^{k})+(\tau +s)(\lambda ^k- \widetilde{\lambda }^k) \right] . \end{array} \end{aligned}$$

The above equality, together with \(x_i^{k+1}=\widetilde{x}_i^k\) for \(i=1,\ldots ,p\) and \(y_j^{k+1}=\widetilde{y}_j^k\) for \(j=1,\ldots ,q\), immediately gives (24) with M defined in (25). \(\square \)

Lemmas 2 and 3 show that our GS-ADMM can be interpreted as a prediction–correction framework, where \(\widetilde{{\mathbf{w}}}^k\) and \({\mathbf{w}}^{k+1}\) are usually called the predictor and the corrector, respectively.
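
The correction step is also easy to check numerically: the sketch below verifies on random data that the two-stage multiplier update of (7) coincides with the \(\lambda \)-component of the correction (24), written out in the last line of the proof of Lemma 3 (all dimensions and parameter values are chosen arbitrarily).

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, d = 4, 3, 2
beta, tau, s = 1.3, 0.4, 0.9
A, B, c = rng.standard_normal((n, m)), rng.standard_normal((n, d)), rng.standard_normal(n)
x_new = rng.standard_normal(m)                  # plays the role of x^{k+1} = tilde{x}^k
y_old, y_new = rng.standard_normal(d), rng.standard_normal(d)
lam = rng.standard_normal(n)

# multiplier updates exactly as written in GS-ADMM (7)
lam_half = lam - tau * beta * (A @ x_new + B @ y_old - c)
lam_direct = lam_half - s * beta * (A @ x_new + B @ y_new - c)

# the same update written as the lambda-row of the correction step (24)
lam_tilde = lam - beta * (A @ x_new + B @ y_old - c)     # prediction (14)
lam_corr = lam - (-s * beta * B @ (y_old - y_new) + (tau + s) * (lam - lam_tilde))

print(np.allclose(lam_direct, lam_corr))        # True
```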

3 Convergence analysis of GS-ADMM

Comparing (16) with (12), the key to proving the convergence of GS-ADMM is to verify that the extra term in (16) vanishes in the limit, that is,

$$\begin{aligned} \lim _{k\rightarrow \infty } \left\langle {\mathbf{w}}-\widetilde{{\mathbf{w}}}^k,Q ({\mathbf{w}}^k-\widetilde{{\mathbf{w}}}^k) \right\rangle =0,\quad \forall {\mathbf{w}}\in \mathcal {M}. \end{aligned}$$

In this section, we first investigate some properties of the sequence \(\{\Vert {\mathbf{w}}^k-{\mathbf{w}}^*\Vert _H^2\}\). Then, we provide a lower bound of \(\Vert {\mathbf{w}}^k-\widetilde{{\mathbf{w}}}^k\Vert _G^2\). Based on these properties, the global convergence and worst-case \(\mathcal {O}(1/t)\) convergence rate of GS-ADMM are established in the end.

3.1 Properties of \(\left\{ \left\| {\mathbf{w}}^k-{\mathbf{w}}^*\right\| _H^2\right\} \)

It follows from (11) and (16) that \( \widetilde{{\mathbf{w}}}^k\in \mathcal {M}\) and

$$\begin{aligned} h({\mathbf{u}})-h(\widetilde{{\mathbf{u}}}^k)+ \left\langle {\mathbf{w}}-\widetilde{{\mathbf{w}}}^k, \mathcal {J}({\mathbf{w}})\right\rangle \ge \left\langle {\mathbf{w}}-\widetilde{{\mathbf{w}}}^k, Q({\mathbf{w}}^k-\widetilde{{\mathbf{w}}}^k)\right\rangle , \quad \forall {\mathbf{w}}\in \mathcal {M}. \end{aligned}$$
(26)

Suppose \(\tau + s >0\). Then, the matrix M defined in (25) is nonsingular. So, by (24) and a direct calculation, the right-hand side of (26) can be rewritten as

$$\begin{aligned} \left\langle {\mathbf{w}}-\widetilde{{\mathbf{w}}}^k, Q({\mathbf{w}}^k-\widetilde{{\mathbf{w}}}^k)\right\rangle =\left\langle {\mathbf{w}}-\widetilde{{\mathbf{w}}}^k, H({\mathbf{w}}^k-{\mathbf{w}}^{k+1})\right\rangle \end{aligned}$$
(27)

where

$$\begin{aligned} H=\left[ \begin{array}{cc} H_{\mathbf {x}} &{} \mathbf{0}\\ \mathbf{0} &{} \widetilde{H} \end{array}\right] \end{aligned}$$
(28)

with \(H_x\) defined in (18) and

(29)

The following lemma shows that H is a positive definite matrix for proper choice of the parameters \((\sigma _1,\sigma _2, \tau , s)\).

Lemma 4

The matrix H defined in (28) is symmetric positive definite if

$$\begin{aligned} \sigma _1\in (p-1,+\infty ),\quad \sigma _2\in (q-1,+\infty ),\quad \tau \le 1 \quad \text{ and } \quad \tau +s >0. \end{aligned}$$
(30)

Proof

By the block structure of H, we only need to show that the blocks \(H_\mathbf {x}\) and \(\widetilde{H}\) in (28) are positive definite if the parameters \((\sigma _1,\sigma _2, \tau , s)\) satisfy (30). Note that

$$\begin{aligned} H_\mathbf {x}=\beta \,\left[ \begin{array}{cccc} A_1 &{} &{} &{} \\ &{} A_2&{} &{} \\ &{} &{}\ddots &{} \\ &{} &{} &{}A_p \end{array}\right] ^\mathsf{T}H_{\mathbf {x},0} \left[ \begin{array}{cccc} A_1 &{} &{} &{} \\ &{} A_2&{} &{} \\ &{} &{}\ddots &{} \\ &{} &{} &{}A_p \end{array}\right] , \end{aligned}$$
(31)

where

$$\begin{aligned} H_{\mathbf {x},0}=\left[ \begin{array}{cccc} \sigma _1I &{}-I &{} \cdots &{}-I\\ -I &{} \sigma _1I&{}\cdots &{}-I \\ \vdots &{}\vdots &{}\ddots &{}\vdots \\ -I&{} -I&{}\cdots &{}\sigma _1I \end{array}\right] _{p \times p}. \end{aligned}$$
(32)

If \(\sigma _1\in (p-1,+\infty )\), \(H_{\mathbf {x},0}\) is positive definite, since its eigenvalues are \(\sigma _1+1\) and \(\sigma _1+1-p\). Then, it follows from (31) that \(H_\mathbf {x}\) is positive definite if \(\sigma _1\in (p-1,+\infty )\) and all \(A_i\), \(i=1, \ldots , p\), have full column rank.
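
This eigenvalue argument can be confirmed numerically; the sketch below builds the scalar-block version of \(H_{\mathbf {x},0}\) in (32) for arbitrarily chosen p and \(\sigma _1\).

```python
import numpy as np

p, sigma1 = 4, 3.5                                   # any sigma1 > p - 1
H_x0 = (sigma1 + 1) * np.eye(p) - np.ones((p, p))    # scalar-block version of (32)
print(np.linalg.eigvalsh(H_x0))                      # [0.5, 4.5, 4.5, 4.5], i.e. sigma1+1-p and sigma1+1
```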

Now, note that the matrix \(\widetilde{H}\) can be decomposed as

$$\begin{aligned} \widetilde{H}=\widetilde{D}^\mathsf{T}\widetilde{H}_0\widetilde{D}, \end{aligned}$$
(33)

where

(34)

and

According to the fact that

where

$$\begin{aligned} E=\left[ \begin{array}{c} I \\ I \\ \vdots \\ I \end{array}\right] \quad \text{ and } \quad H_{\mathbf {y},0}=\left[ \begin{array}{cccc} \sigma _2 I &{} -I &{} \cdots &{}-I \\ -I &{} \sigma _2 I &{}\cdots &{}-I \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ -I &{} -I &{}\cdots &{} \sigma _2 I \end{array}\right] _{q \times q}, \end{aligned}$$
(35)

we have \(\widetilde{H}_0\) is positive definite if and only if \(H_{\mathbf {y},0}+(1-\tau )EE^\mathsf{T}\) is positive definite and \(\tau + s >0\). Note that \(H_{\mathbf {y},0}\) is positive definite if \(\sigma _2\in (q-1,+\infty )\), and \((1-\tau )EE^\mathsf{T}\) is positive semidefinite if \(\tau \le 1\). So, \(\widetilde{H}_0\) is positive definite if \(\sigma _2\in (q-1,+\infty )\), \( \tau \le 1\) and \(\tau + s >0\). Then, it follows from (33) that \(\widetilde{H}\) is positive definite, if \(\sigma _2\in (q-1,+\infty )\), \( \tau \le 1\), \(\tau + s >0\) and all the matrices \(B_j\), \(j=1, \ldots , q\), have full column rank.

Summarizing the above discussions, the matrix H is positive definite if the parameters \((\sigma _1,\sigma _2, \tau , s)\) satisfy (30). \(\square \)

Theorem 2

The sequences \(\{{\mathbf{w}}^k\}\) and \(\{\widetilde{{\mathbf{w}}}^k\}\) generated by GS-ADMM satisfy

$$\begin{aligned} h({\mathbf{u}})-h(\widetilde{{\mathbf{u}}}^k)+ \left\langle {\mathbf{w}}-\widetilde{{\mathbf{w}}}^k, \mathcal {J}({\mathbf{w}}) \right\rangle\ge & {} \frac{1}{2}\left\{ \left\| {\mathbf{w}}-{\mathbf{w}}^{k+1}\right\| _H^2-\left\| {\mathbf{w}}-{\mathbf{w}}^{k}\right\| _H^2\right\} \nonumber \\&+\frac{1}{2}\left\| {\mathbf{w}}^{k}-\widetilde{{\mathbf{w}}}^k\right\| _G^2, \ \forall {\mathbf{w}}\in \mathcal {M}, \end{aligned}$$
(36)

where

$$\begin{aligned} G=Q+Q^\mathsf{T}-M^\mathsf{T}HM. \end{aligned}$$
(37)

Proof

By substituting

$$\begin{aligned} a={\mathbf{w}},\ b=\widetilde{{\mathbf{w}}}^k,\ c={\mathbf{w}}^k,\ d={\mathbf{w}}^{k+1}, \end{aligned}$$

into the following identity

$$\begin{aligned} 2\langle a-b, H(c-d)\rangle =\Vert a-d\Vert _H^2- \Vert a-c\Vert _H^2+\Vert c-b\Vert _H^2- \Vert d-b\Vert _H^2, \end{aligned}$$

we have

$$\begin{aligned}&2 \left\langle {\mathbf{w}}-\widetilde{{\mathbf{w}}}^k, H({\mathbf{w}}^k-{\mathbf{w}}^{k+1}) \right\rangle \nonumber \\&\quad = \left\| {\mathbf{w}}-{\mathbf{w}}^{k+1}\right\| _H^2- \left\| {\mathbf{w}}-{\mathbf{w}}^k\right\| _H^2 +\left\| {\mathbf{w}}^{k}-\widetilde{{\mathbf{w}}}^k\right\| _H^2- \left\| {\mathbf{w}}^{k+1}-\widetilde{{\mathbf{w}}}^k\right\| _H^2. \end{aligned}$$
(38)

Now, notice that

$$\begin{aligned}&\left\| {\mathbf{w}}^{k}-\widetilde{{\mathbf{w}}}^k\right\| _H^2 - \left\| {\mathbf{w}}^{k+1}-\widetilde{{\mathbf{w}}}^k\right\| _H^2 \nonumber \\&\quad = \left\| {\mathbf{w}}^{k}-\widetilde{{\mathbf{w}}}^k\right\| _H^2 - \left\| \widetilde{{\mathbf{w}}}^k-{\mathbf{w}}^k+{\mathbf{w}}^k-{\mathbf{w}}^{k+1}\right\| _H^2 \nonumber \\&\quad = \left\| {\mathbf{w}}^{k}-\widetilde{{\mathbf{w}}}^k\right\| _H^2- \left\| \widetilde{{\mathbf{w}}}^k-{\mathbf{w}}^k+M({\mathbf{w}}^k-\widetilde{{\mathbf{w}}}^{k})\right\| _H^2 \nonumber \\&\quad = \left\langle {\mathbf{w}}^k-\widetilde{{\mathbf{w}}}^k, \left( HM+(HM)^\mathsf{T}-M^\mathsf{T}HM\right) ({\mathbf{w}}^k-\widetilde{{\mathbf{w}}}^{k}) \right\rangle \nonumber \\&\quad = \left\langle {\mathbf{w}}^k-\widetilde{{\mathbf{w}}}^k, \left( Q+Q^\mathsf{T}-M^\mathsf{T}HM\right) ({\mathbf{w}}^k-\widetilde{{\mathbf{w}}}^{k}) \right\rangle , \end{aligned}$$
(39)

where the second equality holds by (24) and the fourth equality follows from (28). Then, (36) follows from (26)–(27), (38)–(39) and the definition of G in (37). \(\square \)

Theorem 3

The sequences \(\{{\mathbf{w}}^k\}\) and \(\{\widetilde{{\mathbf{w}}}^k\}\) generated by GS-ADMM satisfy

$$\begin{aligned} \left\| {\mathbf{w}}^{k+1}-{\mathbf{w}}^*\right\| _H^2\le \left\| {\mathbf{w}}^{k}-{\mathbf{w}}^*\right\| _H^2-\left\| {\mathbf{w}}^{k}-\widetilde{{\mathbf{w}}}^k\right\| _G^2,\ \forall {\mathbf{w}}^*\in \mathcal {M}^*. \end{aligned}$$
(40)

Proof

Setting \(\mathbf {w}={\mathbf{w}}^* \in \mathcal {M}^*\) in (36) we have

$$\begin{aligned}&\frac{1}{2} \left\| {\mathbf{w}}^{k+1}-{\mathbf{w}}^*\right\| _H^2 \le \frac{1}{2} \left\| {\mathbf{w}}^{k}-{\mathbf{w}}^*\right\| _H^2 - \frac{1}{2} \left\| {\mathbf{w}}^{k}-\widetilde{{\mathbf{w}}}^k\right\| _G^2 + h({\mathbf{u}}^*)-h(\widetilde{{\mathbf{u}}}^k)\\&\quad + \left\langle {\mathbf{w}}^*-\widetilde{{\mathbf{w}}}^k, \mathcal {J}({\mathbf{w}}^*) \right\rangle . \end{aligned}$$

The above inequality together with (10) reduces to the inequality (40). \(\square \)

It can be observed that if \(\left\| {\mathbf{w}}^k-\widetilde{{\mathbf{w}}}^k\right\| _G^2\) is positive, then the inequality (40) implies the contractiveness of the sequence \(\{{\mathbf{w}}^k-{\mathbf{w}}^*\}\) under the H-weighted norm. However, the matrix G defined in (37) is not necessarily positive definite when \(\sigma _1\in (p-1,+\infty )\), \(\sigma _2\in (q-1,+\infty )\) and \((\tau , s) \in {\mathcal {K}}\). Therefore, it is necessary to estimate the lower bound of \(\left\| {\mathbf{w}}^k-\widetilde{{\mathbf{w}}}^k\right\| _G^2\) for the sake of the convergence analysis.

3.2 Lower bound of \(\left\| {\mathbf{w}}^k-\widetilde{{\mathbf{w}}}^k\right\| _G^2\)

This subsection provides a lower bound of \(\left\| {\mathbf{w}}^k-\widetilde{{\mathbf{w}}}^k\right\| _G^2\), for \(\sigma _1\in (p-1,+\infty )\), \(\sigma _2\in (q-1,+\infty )\) and \((\tau , s) \in {\mathcal {K}}\), where \({\mathcal {K}}\) is defined in (8).

By simple calculations, the G given in (37) can be explicitly written as

where \(H_\mathbf {x}\) is defined in (18) and

(41)

In addition, we have

$$\begin{aligned} \widetilde{G}=\widetilde{D}^\mathsf{T}\widetilde{G}_0\widetilde{D}, \end{aligned}$$

where \(\widetilde{D}\) is defined in (34) and

(42)

Now, we present the following lemma which provides a lower bound of \(\Vert {\mathbf{w}}^k-\widetilde{{\mathbf{w}}}^k\Vert _G^2\).

Lemma 5

Suppose \(\sigma _1\in (p-1,+\infty )\) and \(\sigma _2\in (q-1,+\infty )\). For the sequences \(\{{\mathbf{w}}^k\}\) and \(\{ \widetilde{{\mathbf{w}}}^k \}\) generated by GS-ADMM, there exists \(\xi _1>0\) such that

$$\begin{aligned} \left\| {\mathbf{w}}^k-\widetilde{{\mathbf{w}}}^k\right\| _G^2\ge & {} \xi _1 \left( \sum \limits _{i=1}^{p}\left\| A_i\left( x_i^k-x_i^{k+1}\right) \right\| ^2 +\sum \limits _{j=1}^{q}\left\| B_j\left( y_j^k-y_j^{k+1}\right) \right\| ^2\right) \nonumber \\&+\,\, (1-\tau )\beta \left\| \mathcal {B}\left( {\mathbf{y}}^{k}-{\mathbf{y}}^{k+1}\right) \right\| ^2\nonumber \\&+\,\, (2-\tau -s)\beta \left\| \mathcal {A}{\mathbf{x}}^{k+1}+\mathcal {B}{\mathbf{y}}^{k+1}-c\right\| ^2 \nonumber \\&+ \, 2(1-\tau )\beta \left( \mathcal {A}{\mathbf{x}}^{k+1}+\mathcal {B}{\mathbf{y}}^{k+1}-c\right) ^\mathsf{T}\mathcal {B}\left( {\mathbf{y}}^{k}-{\mathbf{y}}^{k+1}\right) . \end{aligned}$$
(43)

Proof

First, it is easy to derive that

$$\begin{aligned} \Vert {\mathbf{w}}^k-\widetilde{{\mathbf{w}}}^k\Vert _G^2=\left\| \left( \begin{array}{l} A_1\left( x_1^k-x_1^{k+1}\right) \\ \ \ \ \ \ \vdots \\ A_{p}\left( x_p^k-x_p^{k+1}\right) \\ B_1\left( y_1^k-y_1^{k+1}\right) \\ \ \ \ \ \ \vdots \\ B_{q}\left( y_q^k-y_q^{k+1}\right) \\ \mathcal {A}{\mathbf{x}}^{k+1}+\mathcal {B}{\mathbf{y}}^{k+1}-c \end{array}\right) \right\| _{ \overline{G}}^2, \end{aligned}$$

where

with \(H_{\mathbf {x},0}\) and \(\widetilde{G}_0\) defined in (32) and (42), respectively, and

In the above matrix \(\overline{G}_0\), E and \(H_{\mathbf {y},0}\) are defined in (35). Hence, we have

$$\begin{aligned} \frac{1}{\beta } \left\| {\mathbf{w}}^k-\widetilde{{\mathbf{w}}}^k\right\| _G^2= & {} \left\| \left( \begin{array}{l} A_1\left( x_1^k-x_1^{k+1}\right) \\ A_2\left( x_2^k-x_2^{k+1}\right) \\ \ \ \ \ \ \vdots \\ A_{p}\left( x_{p}^k-x_{p}^{k+1}\right) \end{array}\right) \right\| _{ H_{\mathbf {x},0}}^2 + \left\| \left( \begin{array}{l} B_1\left( y_1^k-y_1^{k+1}\right) \\ B_2\left( y_2^k-y_2^{k+1}\right) \\ \ \ \ \ \ \vdots \\ B_{q}\left( y_{q}^k-y_{q}^{k+1}\right) \end{array}\right) \right\| _{H_{\mathbf {y},0}}^2 \nonumber \\&+ \left\| \left( \begin{array}{l} B_1\left( y_1^k-y_1^{k+1}\right) \\ B_2\left( y_2^k-y_2^{k+1}\right) \\ \ \ \ \ \ \vdots \\ B_{q}\left( y_{q}^k-y_{q}^{k+1}\right) \end{array}\right) \right\| _{(1-\tau )EE^\mathsf{T}}^2\nonumber \\&+\,\,(2-\tau -s) \left\| \mathcal {A}{\mathbf{x}}^{k+1}+\mathcal {B}{\mathbf{y}}^{k+1}-c\right\| ^2 \nonumber \\&+ \,2(1-\tau ) \left( \mathcal {A}{\mathbf{x}}^{k+1}+\mathcal {B}{\mathbf{y}}^{k+1}-c\right) ^\mathsf{T}\mathcal {B}\left( {\mathbf{y}}^{k}-{\mathbf{y}}^{k+1}\right) . \end{aligned}$$
(44)

Since \(\sigma _1\in (p-1,+\infty )\) and \(\sigma _2\in (q-1,+\infty )\), \(H_{\mathbf {x},0}\) and \(H_{\mathbf {y},0}\) are positive definite. So, there exists a \(\xi _1 >0\) such that

$$\begin{aligned}&\left\| \left( \begin{array}{l} A_1\left( x_1^k-x_1^{k+1}\right) \\ A_2\left( x_2^k-x_2^{k+1}\right) \\ \ \ \ \ \ \vdots \\ A_{p}\left( x_{p}^k-x_{p}^{k+1}\right) \end{array}\right) \right\| _{ H_{\mathbf {x},0}}^2 + \left\| \left( \begin{array}{l} B_1\left( y_1^k-y_1^{k+1}\right) \\ B_2\left( y_2^k-y_2^{k+1}\right) \\ \ \ \ \ \ \vdots \\ B_{q}\left( y_{q}^k-y_{q}^{k+1}\right) \end{array}\right) \right\| _{H_{\mathbf {y},0}}^2 \nonumber \\&\quad \ge \xi _1 \left( \sum \limits _{i=1}^{p}\Vert A_i\left( x_i^k-x_i^{k+1}\right) \Vert ^2+\sum \limits _{j=1}^{q}\Vert B_j\left( y_j^k-y_j^{k+1}\right) \Vert ^2\right) . \end{aligned}$$
(45)

In view of the definition of E in (35), we have

$$\begin{aligned} \left\| \left( \begin{array}{l} B_1\left( y_1^k-y_1^{k+1}\right) \\ B_2\left( y_2^k-y_2^{k+1}\right) \\ \ \ \ \ \ \vdots \\ B_{q}\left( y_{q}^k-y_{q}^{k+1}\right) \end{array}\right) \right\| _{(1-\tau )EE^\mathsf{T}}^2=(1-\tau )\left\| \mathcal {B}\left( {\mathbf{y}}^{k}-{\mathbf{y}}^{k+1}\right) \right\| ^2. \end{aligned}$$

Then, the inequality (43) follows from (44) and (45). \(\square \)

Lemma 6

Suppose \( \tau >-1\). Then, the sequence \(\{{\mathbf{w}}^k\}\) generated by GS-ADMM satisfies

$$\begin{aligned}&\left( \mathcal {A}{\mathbf{x}}^{k+1}+\mathcal {B}{\mathbf{y}}^{k+1}-c\right) ^\mathsf{T}\mathcal {B}\left( {\mathbf{y}}^{k}-{\mathbf{y}}^{k+1}\right) \nonumber \\&\quad \ge \frac{1-s}{1+\tau }\left( \mathcal {A}{\mathbf{x}}^{k}+\mathcal {B}{\mathbf{y}}^{k}-c\right) ^\mathsf{T}\mathcal {B}\left( {\mathbf{y}}^{k}-{\mathbf{y}}^{k+1}\right) -\frac{\tau }{1+\tau }\left\| \mathcal {B}\left( {\mathbf{y}}^{k}-{\mathbf{y}}^{k+1}\right) \right\| ^2 \nonumber \\&\qquad +\,\,\frac{1}{2(1+\tau )\beta }\left( \left\| {\mathbf{y}}^{k+1}-{\mathbf{y}}^k\right\| ^2_{H_\mathbf {y}}-\left\| {\mathbf{y}}^{k}-{\mathbf{y}}^{k-1}\right\| ^2_{H_\mathbf {y}}\right) . \end{aligned}$$
(46)

Proof

It follows from the optimality condition of \(y_l^{k+1}\)-subproblem that \(y_l^{k+1}\in \mathcal {Y}_l\) and for any \(y_l \in {\mathcal {Y}}_l\), we have

$$\begin{aligned}&g_l(y_l)-g_l(y_l^{k+1}) + \left\langle B_l (y_l-y_l^{k+1}), - \lambda ^{k+\frac{1}{2}}+\sigma _2\beta B_l\left( y_l^{k+1}-y_l^k\right) \right. \\&\left. \quad + \,\beta (B_l y_l^{k+1} -c_{y,l}) \right\rangle \ge 0 \end{aligned}$$

with \(c_{y,l} = c - {\mathcal {A}}{\mathbf{x}}^{k+1} - \sum \limits _{j=1,j\ne l}^{q}B_jy_j^k\), which implies

$$\begin{aligned}&g_l(y_l)-g_l(y_l^{k+1}) \\&\quad + \left\langle B_l (y_l-y_l^{k+1}), - \lambda ^{k+\frac{1}{2}}+\sigma _2\beta B_l\left( y_l^{k+1}-y_l^k\right) + \beta ( {\mathcal {A}}{\mathbf{x}}^{k+1}+{\mathcal {B}}{\mathbf{y}}^{k+1} -c) \right\rangle \\&\quad - \beta \left\langle B_l (y_l-y_l^{k+1}), \sum \limits _{j=1,j\ne l}^{q}B_j (y_j^{k+1} - y_j^k) \right\rangle \ge 0. \end{aligned}$$

For \(l=1,2,\ldots ,q\), letting \(y_l=y_l^k\) in the above inequality and summing them together, we can deduce that

$$\begin{aligned}&\sum \limits _{l=1}^{q}\left( g_l(y_l^k)-g_l(y_l^{k+1}) \right) + \left\langle {\mathcal {B}} ({\mathbf{y}}^k-{\mathbf{y}}^{k+1}), - \lambda ^{k+\frac{1}{2}}+\beta \left( \mathcal {A}{\mathbf{x}}^{k+1}+\mathcal {B}{\mathbf{y}}^{k+1}-c\right) \right\rangle \nonumber \\&\quad \ge \left\| {\mathbf{y}}^{k+1}-{\mathbf{y}}^k\right\| ^2_{H_\mathbf {y}}, \end{aligned}$$
(47)

where

$$\begin{aligned} H_{\mathbf {y}}= & {} \beta \left[ \begin{array}{ccccccccccccc} \sigma _2 B_1^\mathsf{T}B_1 &{}&{}&{}&{} - B_1^\mathsf{T}B_2 &{}&{}&{}&{} \cdots &{}&{}&{}&{}- B_1^\mathsf{T}B_{q} \\ &{}&{}&{} &{} &{}&{}&{}&{} &{}&{}&{}&{} \\ - B_2^\mathsf{T}B_1 &{}&{}&{}&{} \sigma _2 B_2^\mathsf{T}B_2 &{}&{}&{}&{} \cdots &{}&{}&{}&{}- B_2^\mathsf{T}B_{q} \\ &{}&{}&{} &{} &{}&{}&{}&{} &{}&{}&{}&{} \\ \vdots &{}&{}&{} &{} \vdots &{}&{}&{}&{}\ddots &{}&{}&{}&{}\vdots \\ &{}&{}&{} &{} &{}&{}&{}&{} &{}&{}&{}&{} \\ - B_q^\mathsf{T}B_1 &{}&{}&{}&{} - B_q^\mathsf{T}B_2 &{}&{}&{}&{}\cdots &{}&{}&{}&{}\sigma _2 B_q^\mathsf{T}B_q \end{array}\right] \nonumber \\= & {} \beta \left[ \begin{array}{cccc} B_1 &{} &{} &{} \\ &{} &{}\ddots &{} \\ &{} &{} &{}B_q \end{array}\right] ^\mathsf{T}H_{\mathbf {y},0} \left[ \begin{array}{cccc} B_1 &{} &{} &{} \\ &{} &{}\ddots &{} \\ &{} &{} &{}B_q \end{array}\right] \end{aligned}$$
(48)

and \(H_{\mathbf {y},0}\) is defined in (35). Similarly, it follows from the optimality condition of \(y_l^k\)-subproblem that

$$\begin{aligned}&g_l(y_l)-g_l(y_l^k) + \left\langle B_l (y_l-y_l^k),- \lambda ^{k-\frac{1}{2}}+\sigma _2\beta B_l\left( y_l^k-y_l^{k-1}\right) \right. \\&\left. \quad + \,\beta ({\mathcal {A}}{\mathbf{x}}^k+{\mathcal {B}}{\mathbf{y}}^k -c) \right\rangle - \beta \left\langle B_l (y_l-y_l^k), \sum \limits _{j=1,j\ne l}^{q} B_j (y_j^k - y_j^{k-1}) \right\rangle \ge 0. \end{aligned}$$

For \(l=1,2,\ldots ,q\), letting \(y_l=y_l^{k+1}\) in the above inequality and summing them together, we obtain

$$\begin{aligned}&\sum \limits _{l=1}^{q}\left( g_l(y_l^{k+1})-g_l(y_l^k) \right) + \left\langle {\mathcal {B}} ({\mathbf{y}}^{k+1}-{\mathbf{y}}^k), - \lambda ^{k-\frac{1}{2}}+\beta \left( \mathcal {A}{\mathbf{x}}^{k}+\mathcal {B}{\mathbf{y}}^{k}-c\right) \right\rangle \nonumber \\&\quad \ge ({\mathbf{y}}^{k}-{\mathbf{y}}^{k+1})^\mathsf{T}{H_\mathbf {y}}({\mathbf{y}}^{k}-{\mathbf{y}}^{k-1}). \end{aligned}$$
(49)

Since \(\sigma _2 \in (q-1, \infty )\) and all \(B_j\), \(j=1,\ldots , q\), have full column rank, we have from (48) that \(H_\mathbf {y}\) is positive definite. Meanwhile, by the Cauchy–Schwarz inequality, we get

$$\begin{aligned}&\left\| {\mathbf{y}}^{k+1}-{\mathbf{y}}^k\right\| ^2_{H_\mathbf {y}}+\left( {\mathbf{y}}^{k}-{\mathbf{y}}^{k+1}\right) ^\mathsf{T}{H_\mathbf {y}}\left( {\mathbf{y}}^{k}-{\mathbf{y}}^{k-1}\right) \nonumber \\&\quad \ge \frac{1}{2}\left( \left\| {\mathbf{y}}^{k+1}-{\mathbf{y}}^k\right\| ^2_{H_\mathbf {y}}-\left\| {\mathbf{y}}^{k}-{\mathbf{y}}^{k-1}\right\| ^2_{H_\mathbf {y}}\right) . \end{aligned}$$
(50)

By adding (47) to (49) and using (50), we obtain

$$\begin{aligned}&\left\langle {\mathcal {B}} ({\mathbf{y}}^{k}-{\mathbf{y}}^{k+1}), \lambda ^{k-\frac{1}{2}}-\lambda ^{k+\frac{1}{2}} +\beta (\mathcal {A}{\mathbf{x}}^{k+1}+\mathcal {B}{\mathbf{y}}^{k+1}-c ) \right\rangle \nonumber \\&\quad \ge \langle {\mathcal {B}} ({\mathbf{y}}^{k}-{\mathbf{y}}^{k+1}), \beta ( \mathcal {A}{\mathbf{x}}^{k}+\mathcal {B}{\mathbf{y}}^{k}-c)\rangle {+} \frac{1}{2}\left( \left\| {\mathbf{y}}^{k+1}-{\mathbf{y}}^k\right\| ^2_{H_\mathbf {y}}-\left\| {\mathbf{y}}^{k}-{\mathbf{y}}^{k-1}\right\| ^2_{H_\mathbf {y}}\right) .\nonumber \\ \end{aligned}$$
(51)

From the update of \(\lambda ^{k+\frac{1}{2}}\), i.e., \(\lambda ^{k+\frac{1}{2}}=\lambda ^k-\tau \beta \left( \mathcal {A}{\mathbf{x}}^{k+1}+\mathcal {B}{\mathbf{y}}^{k}-c\right) \) and the update of \(\lambda ^k\), i.e., \(\lambda ^k=\lambda ^{k-\frac{1}{2}}-s\beta \left( \mathcal {A}{\mathbf{x}}^{k}+\mathcal {B}{\mathbf{y}}^{k}-c\right) ,\) we have

$$\begin{aligned} \lambda ^{k-\frac{1}{2}}-\lambda ^{k+\frac{1}{2}} = \tau \beta (\mathcal {A}{\mathbf{x}}^{k+1}+\mathcal {B}{\mathbf{y}}^{k+1}-c) +s\beta (\mathcal {A}{\mathbf{x}}^{k}+\mathcal {B}{\mathbf{y}}^{k}-c)+ \tau \beta \mathcal {B} ({\mathbf{y}}^k-{\mathbf{y}}^{k+1}). \end{aligned}$$

Substituting the above equality into the left-hand side of (51) completes the proof. \(\square \)

Theorem 4

Suppose \(\sigma _1\in (p-1,+\infty )\), \(\sigma _2\in (q-1,+\infty )\) and \(\tau > -1\). For the sequences \(\{{\mathbf{w}}^k\}\) and \(\{ \widetilde{{\mathbf{w}}}^k \}\) generated by GS-ADMM , there exists \(\xi _1>0\) such that

$$\begin{aligned} \left\| {\mathbf{w}}^k-\widetilde{{\mathbf{w}}}^k\right\| _G^2\ge & {} \xi _1\left( \sum \limits _{i=1}^{p}\left\| A_i\left( x_i^k-x_i^{k+1}\right) \right\| ^2 +\sum \limits _{j=1}^{q}\left\| B_j\left( y_j^k-y_j^{k+1}\right) \right\| ^2\right) \nonumber \\&+ \, (2-\tau -s)\beta \left\| \mathcal {A}{\mathbf{x}}^{k+1}+\mathcal {B}{\mathbf{y}}^{k+1}-c\right\| ^2\nonumber \\&+\,\,\frac{1-\tau }{1+\tau }\left( \left\| {\mathbf{y}}^{k+1}-{\mathbf{y}}^k\right\| ^2_{H_\mathbf {y}}-\left\| {\mathbf{y}}^{k}-{\mathbf{y}}^{k-1}\right\| ^2_{H_\mathbf {y}}\right) \nonumber \\&+\,\, \frac{(1-\tau )^2}{1+\tau }\beta \left\| \mathcal {B}\left( {\mathbf{y}}^k-{\mathbf{y}}^{k+1}\right) \right\| ^2\nonumber \\&+\,\, \frac{2(1-\tau )(1-s)}{1+\tau }\beta \left( \mathcal {A}{\mathbf{x}}^{k}+\mathcal {B}{\mathbf{y}}^{k}-c\right) ^\mathsf{T}\mathcal {B}\left( {\mathbf{y}}^k-{\mathbf{y}}^{k+1}\right) . \end{aligned}$$
(52)

Proof

The inequality (52) is directly obtained from (43) and (46). \(\square \)

The following theorem gives another variant of the lower bound of \(\Vert {\mathbf{w}}^k-\widetilde{{\mathbf{w}}}^k\Vert _G^2\), which plays a key role in showing the convergence of GS-ADMM.

Theorem 5

Let the sequences \(\{{\mathbf{w}}^k\}\) and \(\{ \widetilde{{\mathbf{w}}}^k \}\) be generated by GS-ADMM. Then, for any

$$\begin{aligned} \sigma _1\in (p-1,+\infty ),\quad \sigma _2\in (q-1,+\infty ) \quad \text{ and } \quad (\tau ,s) \in {\mathcal {K}}, \end{aligned}$$
(53)

where \({\mathcal {K}}\) is defined in (8), there exist constants \(\xi _i(i=1,2)>0\) and \(\xi _j(j=3,4)\ge 0\), such that

$$\begin{aligned} \left\| {\mathbf{w}}^k-\widetilde{{\mathbf{w}}}^k\right\| _G^2\ge & {} \xi _1\left( \sum \limits _{i=1}^{p}\left\| A_i\left( x_i^k-x_i^{k+1}\right) \right\| ^2 +\sum \limits _{j=1}^{q}\left\| B_j\left( y_j^k-y_j^{k+1}\right) \right\| ^2\right) \nonumber \\&+\,\, \xi _2\left\| \mathcal {A}{\mathbf{x}}^{k+1}+\mathcal {B}{\mathbf{y}}^{k+1}-c\right\| ^2 \nonumber \\&+\,\, \xi _3\left( \left\| \mathcal {A}{\mathbf{x}}^{k+1}+\mathcal {B}{\mathbf{y}}^{k+1}-c\right\| ^2-\left\| \mathcal {A}{\mathbf{x}}^{k}+\mathcal {B}{\mathbf{y}}^{k}-c\right\| ^2\right) \nonumber \\&+ \,\xi _4\left( \left\| {\mathbf{y}}^{k+1}-{\mathbf{y}}^k\right\| ^2_{H_\mathbf {y}}-\left\| {\mathbf{y}}^{k}-{\mathbf{y}}^{k-1}\right\| ^2_{H_\mathbf {y}}\right) . \end{aligned}$$
(54)

Proof

By the Cauchy–Schwarz inequality, we have

$$\begin{aligned}&2(1-\tau )(1-s)\left( \mathcal {A}{\mathbf{x}}^{k}+\mathcal {B}{\mathbf{y}}^{k}-c\right) ^\mathsf{T}\mathcal {B}\left( {\mathbf{y}}^k-{\mathbf{y}}^{k+1}\right) \nonumber \\&\quad \ge -(1-s)^2\left\| \mathcal {A}{\mathbf{x}}^{k}+\mathcal {B}{\mathbf{y}}^{k}-c\right\| ^2-(1-\tau )^2\left\| \mathcal {B}\left( {\mathbf{y}}^k-{\mathbf{y}}^{k+1}\right) \right\| ^2. \end{aligned}$$
(55)

Since

$$\begin{aligned} -\tau ^2 - s^2 -\tau s + \tau + s + 1 = - \tau ^2 + (1-s)(\tau +s) + 1, \end{aligned}$$

we have \(\tau > -1\) whenever \((\tau , s) \in {\mathcal {K}}\); indeed, if \(\tau \le -1\), then \(\tau +s>0\) would force \(s>1\), so that \(-\tau ^2+1\le 0\) and \((1-s)(\tau +s)<0\), contradicting the last inequality in (8). Then, combining (55) with Theorem 4, we deduce

$$\begin{aligned} \left\| {\mathbf{w}}^k-\widetilde{{\mathbf{w}}}^k\right\| _G^2\ge & {} \xi _1 \left( \sum \limits _{i=1}^{p}\left\| A_i\left( x_i^k-x_i^{k+1}\right) \right\| ^2 +\sum \limits _{j=1}^{q}\left\| B_j\left( y_j^k-y_j^{k+1}\right) \right\| ^2\right) \nonumber \\&+ \left( 2-\tau -s-\frac{(1-s)^2}{1+\tau }\right) \beta \left\| \mathcal {A}{\mathbf{x}}^{k+1}+\mathcal {B}{\mathbf{y}}^{k+1}-c\right\| ^2 \nonumber \\&+\frac{(1-s)^2}{1+\tau }\beta \left( \left\| \mathcal {A}{\mathbf{x}}^{k+1}+\mathcal {B}{\mathbf{y}}^{k+1}-c\right\| ^2-\left\| \mathcal {A}{\mathbf{x}}^{k}+\mathcal {B}{\mathbf{y}}^{k}-c\right\| ^2\right) \nonumber \\&+ \frac{1-\tau }{1+\tau }\left( \left\| {\mathbf{y}}^{k+1}-{\mathbf{y}}^k\right\| ^2_{H_\mathbf {y}}-\left\| {\mathbf{y}}^{k}-{\mathbf{y}}^{k-1}\right\| ^2_{H_\mathbf {y}}\right) , \end{aligned}$$
(56)

where \(\xi _1 >0\) is the constant in Theorem 4. Since \( -1 < \tau \le 1 \) and \( \beta > 0 \), we have

$$\begin{aligned} \xi _3 := \frac{(1-s)^2}{1+\tau }\beta \ge 0 \quad \text{ and } \quad \xi _4 := \frac{1-\tau }{1+\tau } \ge 0. \end{aligned}$$
(57)

In addition, when \((\tau , s) \in {\mathcal {K}} \), we have

$$\begin{aligned} -\tau ^2 - s^2 -\tau s + \tau + s + 1 >0, \end{aligned}$$

which, by \(\tau > -1\) and \(\beta >0\), implies

$$\begin{aligned} \xi _2 :=\left( 2-\tau -s-\frac{(1-s)^2}{1+\tau }\right) \beta >0. \end{aligned}$$
(58)

Hence, the proof is completed. \(\square \)
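
The identity \((2-\tau -s)(1+\tau )-(1-s)^2=-\tau ^2-s^2-\tau s+\tau +s+1\) behind (58) can be spot-checked by sampling the region \(\mathcal {K}\); the following sketch finds no sampled point at which \(\xi _2\le 0\).

```python
import numpy as np

rng = np.random.default_rng(2)
xi2_over_beta = []
for tau, s in rng.uniform(-1.0, 2.0, size=(100000, 2)):
    if tau + s > 0 and tau <= 1 and -tau**2 - s**2 - tau*s + tau + s + 1 > 0:  # (tau, s) in K
        xi2_over_beta.append(2 - tau - s - (1 - s)**2 / (1 + tau))             # xi_2 / beta in (58)
print(min(xi2_over_beta) > 0)   # True on every sampled point of K
```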

3.3 Global convergence

In this subsection, we show the global convergence and the worst-case \(\mathcal {O}(1/t)\) convergence rate of GS-ADMM. The following corollary is obtained directly from Theorems 2, 3 and 5.

Corollary 1

Let the sequences \(\{{\mathbf{w}}^k\}\) and \(\{ \widetilde{{\mathbf{w}}}^k \}\) be generated by GS-ADMM. For any \((\sigma _1, \sigma _2, \tau , s)\) satisfying (53), there exist constants \(\xi _i(i=1,2)>0\) and \(\xi _j(j=3,4)\ge 0\) such that

$$\begin{aligned}&\left\| {\mathbf{w}}^{k+1}-{\mathbf{w}}^*\right\| _H^2+ \xi _3\left\| \mathcal {A}{\mathbf{x}}^{k+1}+\mathcal {B}{\mathbf{y}}^{k+1}-c\right\| ^2+\xi _4\left\| {\mathbf{y}}^{k+1}-{\mathbf{y}}^{k}\right\| ^2_{H_\mathbf {y}}\nonumber \\&\quad \le \left\| {\mathbf{w}}^{k}-{\mathbf{w}}^*\right\| _H^2+\xi _3\left\| \mathcal {A}{\mathbf{x}}^{k}+\mathcal {B}{\mathbf{y}}^{k}-c\right\| ^2 +\xi _4\left\| {\mathbf{y}}^{k}-{\mathbf{y}}^{k-1}\right\| ^2_{H_\mathbf {y}} \nonumber \\&\qquad - \xi _1\left( \sum \limits _{i=1}^{p}\Vert A_i\left( x_i^k-x_i^{k+1}\right) \Vert ^2 +\sum \limits _{j=1}^{q}\left\| B_j\left( y_j^k-y_j^{k+1}\right) \right\| ^2\right) \nonumber \\&\qquad - \xi _2\left\| \mathcal {A}{\mathbf{x}}^{k+1}+\mathcal {B}{\mathbf{y}}^{k+1}-c\right\| ^2, \quad \forall {\mathbf{w}}^*\in \mathcal {M}^*, \end{aligned}$$
(59)

and

$$\begin{aligned}&h({\mathbf{u}})-h(\widetilde{{\mathbf{u}}}^k)+ \langle w-\widetilde{{\mathbf{w}}}^k, \mathcal {J}({\mathbf{w}}) \rangle \nonumber \\&\quad \ge \frac{1}{2}\left( \Vert {\mathbf{w}}-{\mathbf{w}}^{k+1}\Vert _H^2+\xi _3\left\| \mathcal {A}{\mathbf{x}}^{k+1}+\mathcal {B}{\mathbf{y}}^{k+1}-c\right\| ^2+\xi _4\left\| {\mathbf{y}}^{k+1}-{\mathbf{y}}^{k}\right\| ^2_{H_\mathbf {y}}\right) \nonumber \\&\qquad - \frac{1}{2}\left( \Vert {\mathbf{w}}-{\mathbf{w}}^{k}\Vert _H^2+\xi _3\left\| \mathcal {A}{\mathbf{x}}^{k}+\mathcal {B}{\mathbf{y}}^{k}-c\right\| ^2+\xi _4\left\| {\mathbf{y}}^{k}-{\mathbf{y}}^{k-1}\right\| ^2_{H_\mathbf {y}}\right) , \quad \forall {\mathbf{w}} \in \mathcal {M}.\nonumber \\ \end{aligned}$$
(60)

Theorem 6

Let the sequences \(\{{\mathbf{w}}^k\}\) and \(\{ \widetilde{{\mathbf{w}}}^k \}\) be generated by GS-ADMM. Then, for any \((\sigma _1, \sigma _2, \tau , s)\) satisfying (53), we have

$$\begin{aligned}&\lim _{k\rightarrow \infty } \left( \sum \limits _{i=1}^{p}\left\| A_i\left( x_i^k-x_i^{k+1}\right) \right\| ^2 +\sum \limits _{j=1}^{q}\left\| B_j\left( y_j^k-y_j^{k+1}\right) \right\| ^2 \right) =0, \end{aligned}$$
(61)
$$\begin{aligned}&\lim _{k\rightarrow \infty } \left\| \mathcal {A}{\mathbf{x}}^k+\mathcal {B}{\mathbf{y}}^k-c\right\| = 0, \end{aligned}$$
(62)

and there exists a \({\mathbf{w}}^\infty \in \mathcal {M}^*\) such that

$$\begin{aligned} \lim _{k\rightarrow \infty } \widetilde{{\mathbf{w}}}^{k} = {\mathbf{w}}^\infty . \end{aligned}$$
(63)

Proof

Summing the inequality (59) over \(k=1,2,\ldots ,\infty \), we have

$$\begin{aligned}&\xi _1 \sum \limits _{k=1}^{\infty } \left( \sum \limits _{i=1}^{p}\left\| A_i\left( x_i^k-x_i^{k+1}\right) \right\| ^2+\sum \limits _{j=1}^{q}\left\| B_j\left( y_j^k-y_j^{k+1}\right) \right\| ^2\right) \nonumber \\&\qquad +\,\, \xi _2 \sum \limits _{k=1}^{\infty } \left\| \mathcal {A}{\mathbf{x}}^{k+1}+\mathcal {B}{\mathbf{y}}^{k+1}-c\right\| ^2 \\&\quad \le \Vert {\mathbf{w}}^{1}-{\mathbf{w}}^*\Vert _H^2+\xi _3\left\| \mathcal {A}{\mathbf{x}}^{1}+\mathcal {B}{\mathbf{y}}^{1}-c\right\| ^2+\xi _4\left\| {\mathbf{y}}^{1}-{\mathbf{y}}^{0}\right\| ^2_{H_\mathbf {y}}, \end{aligned}$$

which implies that (61) and (62) hold since \(\xi _1 >0\) and \(\xi _2 >0\).

Because \((\sigma _1, \sigma _2, \tau , s)\) satisfy (53), we have by Lemma 4 that H is positive definite. So, it follows from (59) that the sequence \(\{{\mathbf{w}}^{k}\}\) is uniformly bounded. Therefore, there exists a subsequence \(\{{\mathbf{w}}^{k_j}\}\) converging to a point \({\mathbf{w}}^\infty = ({\mathbf{x}}^\infty , {\mathbf{y}}^\infty , \lambda ^\infty ) \in {\mathcal {M}}\). In addition, by the definitions of \(\widetilde{{\mathbf{x}}}^k\), \(\widetilde{{\mathbf{y}}}^k\) and \(\widetilde{\lambda }^k\) in (13) and (14), it follows from (61), (62) and the full column rank assumption on all the matrices \(A_i\) and \(B_j\) that

$$\begin{aligned} \lim _{k\rightarrow \infty } x_i^k-\widetilde{x}_i^k = {\mathbf{0}}, \quad \lim _{k\rightarrow \infty } y_j^k-\widetilde{y}_j^k = {\mathbf{0}} \quad \text{ and } \quad \lim _{k\rightarrow \infty } \lambda ^k-\widetilde{\lambda }^k = {\mathbf{0}}, \end{aligned}$$
(64)

for all \(i=1, \ldots , p\) and \(j=1, \ldots , q\). So, we have \(\lim \limits _{k \rightarrow \infty } {\mathbf{w}}^k - \widetilde{{\mathbf{w}}}^k = \mathbf{0}\). Thus, for any fixed \(\mathbf {w} \in {\mathcal {M}}\), taking \(\widetilde{{\mathbf{w}}}^{k_j}\) in (16) and letting j go to \(\infty \), we obtain

$$\begin{aligned} h({\mathbf{u}})-h({\mathbf{u}}^\infty )+ \langle \mathbf {w}- {\mathbf{w}}^\infty , \mathcal {J}({\mathbf{w}}^\infty ) \rangle \ge 0. \end{aligned}$$
(65)

Hence, \({\mathbf{w}}^\infty \in {\mathcal {M}}^*\) is a solution point of \(\text {VI}(h, \mathcal {J}, \mathcal {M})\) defined in (12).

Since (59) holds for any \({\mathbf{w}}^* \in {\mathcal {M}}^*\), by (59) and \({\mathbf{w}}^\infty \in {\mathcal {M}}^*\), for all \(l \ge k_j\), we have

$$\begin{aligned}&\left\| {\mathbf{w}}^{l}-{\mathbf{w}}^\infty \right\| _H^2+ \xi _3\left\| \mathcal {A}{\mathbf{x}}^{l}+\mathcal {B}{\mathbf{y}}^{l}-c\right\| ^2+\xi _4\left\| {\mathbf{y}}^{l}-{\mathbf{y}}^{l-1}\right\| ^2_{H_\mathbf {y}}\\&\quad \le \left\| {\mathbf{w}}^{k_j}-{\mathbf{w}}^\infty \right\| _H^2+\xi _3\left\| \mathcal {A}{\mathbf{x}}^{k_j} +\mathcal {B}{\mathbf{y}}^{k_j}-c\right\| ^2+\xi _4\left\| {\mathbf{y}}^{k_j}-{\mathbf{y}}^{k_j-1}\right\| ^2_{H_\mathbf {y}}. \end{aligned}$$

This, together with (62), (64), \(\lim \limits _{j \rightarrow \infty } {\mathbf{w}}^{k_j} = {\mathbf{w}}^\infty \) and the positive definiteness of H, implies \(\lim \limits _{l \rightarrow \infty } {\mathbf{w}}^l = {\mathbf{w}}^\infty \). Therefore, the whole sequence \(\{{\mathbf{w}}^k\}\) converges to the solution \({\mathbf{w}}^\infty \in {\mathcal {M}}^*\), which completes the proof. \(\square \)

Theorem 6 above establishes the global convergence of GS-ADMM. Next, we show a worst-case \({\mathcal {O}}(1/t)\) convergence rate for the ergodic iterates

$$\begin{aligned} {\mathbf{w}}_t :=\frac{1}{t}\sum _{k=1}^{t}\widetilde{{\mathbf{w}}}^{k} \quad \text{ and } \quad {\mathbf{u}}_t :=\frac{1}{t}\sum _{k=1}^{t}\widetilde{{\mathbf{u}}}^{k}. \end{aligned}$$
(66)
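In an implementation, the ergodic iterates in (66) need not be formed by storing all the predictors \(\widetilde{{\mathbf{w}}}^{k}\); a minimal MATLAB sketch of the equivalent running-average update, with illustrative variable names inside the main loop, is the following.

% Running average of the predictors: w_tilde holds the current \tilde{w}^k
% stacked as one column vector, w_bar holds the ergodic iterate w_k in (66).
if k == 1
    w_bar = w_tilde;
else
    w_bar = ((k-1)*w_bar + w_tilde)/k;   % w_k = ((k-1)*w_{k-1} + \tilde{w}^k)/k
end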

Theorem 7

Let the sequences \(\{{\mathbf{w}}^k\}\) and \(\{ \widetilde{{\mathbf{w}}}^k \}\) be generated by GS-ADMM. Then, for any \((\sigma _1, \sigma _2, \tau , s)\) satisfying (53), there exist \(\xi _j(j =3,4) \ge 0\) such that

$$\begin{aligned}&h({\mathbf{u}}_t)-h({\mathbf{u}})+ \langle {\mathbf{w}}_t-{\mathbf{w}}, \mathcal {J}({\mathbf{w}})\rangle \nonumber \\&\quad \le \frac{1}{2t}\left( \left\| {\mathbf{w}}-{\mathbf{w}}^{1}\right\| _H^2+\xi _3\left\| \mathcal {A}{\mathbf{x}}^{1}+\mathcal {B}{\mathbf{y}}^{1}-c\right\| ^2 +\xi _4\left\| {\mathbf{y}}^{1}-{\mathbf{y}}^{0}\right\| ^2_{H_\mathbf {y}}\right) , \quad \forall \mathbf {w}\in \mathcal {M}.\nonumber \\ \end{aligned}$$
(67)

Proof

Summing the inequality (60) over \(k=1, \ldots , t\), we have

$$\begin{aligned}&t h({\mathbf{u}}) - \sum \limits _{k=1}^{t}h(\widetilde{{\mathbf{u}}}^k)+\left\langle t {\mathbf{w}} - \sum \limits _{k=1}^{t} \widetilde{{\mathbf{w}}}^k, \mathcal {J}({\mathbf{w}})\right\rangle \nonumber \\&\quad \ge \frac{1}{2}\left( \left\| {\mathbf{w}}-{\mathbf{w}}^{t+1}\right\| _H^2+\xi _3 \left\| \mathcal {A}{\mathbf{x}}^{t+1}+\mathcal {B}{\mathbf{y}}^{t+1}-c\right\| ^2+\xi _4\left\| {\mathbf{y}}^{t+1}-{\mathbf{y}}^{t}\right\| ^2_{H_\mathbf {y}}\right) \nonumber \\&- \frac{1}{2}\left( \Vert {\mathbf{w}}-{\mathbf{w}}^{1}\Vert _H^2+\xi _3\left\| \mathcal {A}{\mathbf{x}}^{1}+\mathcal {B}{\mathbf{y}}^{1}-c\right\| ^2+\xi _4\left\| {\mathbf{y}}^{1}-{\mathbf{y}}^{0}\right\| ^2_{H_\mathbf {y}}\right) , \quad \forall {\mathbf{w}}\in \mathcal {M}.\nonumber \\ \end{aligned}$$
(68)

Since \((\sigma _1, \sigma _2, \tau , s)\) satisfy (53), \(H_\mathbf {y}\) is positive definite and, by Lemma 4, so is H. Hence, the first bracket on the right-hand side of (68) is nonnegative; dropping it, rearranging and dividing by t, it follows from (68) that

$$\begin{aligned}&\frac{1}{t}\sum \limits _{k=1}^{t}h(\widetilde{{\mathbf{u}}}^k)-h({\mathbf{u}})+\left\langle \frac{1}{t}\sum \limits _{k=1}^{t}\widetilde{{\mathbf{w}}}^k-{\mathbf{w}}, \mathcal {J}({\mathbf{w}})\right\rangle \nonumber \\&\quad \le \frac{1}{2t} \left( \left\| {\mathbf{w}}-{\mathbf{w}}^{1}\right\| _H^2+\xi _3\left\| \mathcal {A}{\mathbf{x}}^{1}+\mathcal {B}{\mathbf{y}}^{1}-c\right\| ^2 +\xi _4\left\| {\mathbf{y}}^{1}-{\mathbf{y}}^{0}\right\| ^2_{H_\mathbf {y}}\right) , \quad \forall {\mathbf{w}} \in \mathcal {M}.\nonumber \\ \end{aligned}$$
(69)

By the convexity of h and (66), we have

$$\begin{aligned} h({\mathbf{u}}_t)\le \frac{1}{t}\sum _{k=1}^{t} h(\widetilde{{\mathbf{u}}}^{k}). \end{aligned}$$

Then, (67) follows from (69). \(\square \)

Remark 1

In the above Theorems 6 and 7, we assume that the parameters \((\sigma _1, \sigma _2, \tau , s)\) satisfy (53). However, because of the symmetric roles played by the x and y iterates in GS-ADMM, replacing the index \(k+1\) by k for the x and \(\lambda \) iterates, the GS-ADMM algorithm (7) can equivalently be written as

$$\begin{aligned} \ \left\{ \begin{array}{lll} \text {For}\ j=1,2,\ldots ,q,\\ \quad y_{j}^{k+1}=\arg \min \limits _{y_{j}\in \mathcal {Y}_{j}} \mathcal {L}_\beta ({\mathbf{x}}^k,y_1^k,\ldots , y_j,\ldots ,y_q^k,\lambda ^{k-\frac{1}{2}}) + Q_j^k(y_j), \\ \quad \text {where } Q_j^k(y_j) = \frac{\sigma _2\beta }{2}\left\| B_{j}(y_{j}-y_{j}^k)\right\| ^2,\\ \lambda ^{k}=\lambda ^{k-\frac{1}{2}}-s\beta (\mathcal {A}{\mathbf{x}}^k+\mathcal {B}{\mathbf{y}}^{k+1}-c) \\ \\ \text {For}\ i=1,2,\ldots ,p,\\ \quad x_{i}^{k+1}=\arg \min \limits _{x_{i}\in \mathcal {X}_{i}} \mathcal {L}_\beta (x_1^k,\ldots , x_i,\ldots ,x_p^k,{\mathbf{y}}^{k+1},\lambda ^k) +P_i^k(x_i), \\ \quad \text {where } P_i^k(x_i) = \frac{\sigma _1\beta }{2}\left\| A_{i}(x_{i}-x_{i}^k)\right\| ^2,\\ \lambda ^{k+\frac{1}{2}}=\lambda ^k-\tau \beta (\mathcal {A}{\mathbf{x}}^{k+1}+\mathcal {B}{\mathbf{y}}^{k+1}-c). \end{array}\right. \end{aligned}$$
(70)

So, applying Theorems 6 and 7 to the algorithm (70), it also converges and has the \({\mathcal {O}}(1/t)\) convergence rate when \((\sigma _1, \sigma _2, \tau , s)\) satisfy

$$\begin{aligned} \sigma _1\in (p-1,+\infty ),\quad \sigma _2\in (q-1,+\infty ) \quad \text{ and } \quad (\tau ,s) \in \overline{{\mathcal {K}}}, \end{aligned}$$
(71)

where

$$\begin{aligned} \overline{{\mathcal {K}}} = \left\{ (\tau , s) \ | \ \tau + s>0, \ s \le 1, \ -\tau ^2 - s^2 -\tau s + \tau + s + 1 >0 \right\} . \end{aligned}$$
(72)

Hence, the convergence domain \({\mathcal {K}}\) in Theorems 6 and 7 can easily be enlarged to the symmetric domain shown in Fig. 2,

$$\begin{aligned} {\mathcal {G}} = {\mathcal {K}} \cup \overline{{\mathcal {K}}} = \left\{ (\tau , s) \ | \ \tau + s>0,\ -\tau ^2 - s^2 -\tau s + \tau + s + 1 >0 \right\} . \end{aligned}$$
(73)
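For a quick numerical check of the stepsizes, the membership test for \({\mathcal {G}}\) in (73) can be coded directly; a minimal MATLAB sketch (the function name in_G and the sample values are ours) is the following.

% Membership test for the enlarged stepsize domain G in (73):
% tau + s > 0  and  -tau^2 - s^2 - tau*s + tau + s + 1 > 0.
in_G = @(tau, s) (tau + s > 0) && (-tau^2 - s^2 - tau*s + tau + s + 1 > 0);

in_G(0.9, 1.09)   % true: the stepsizes used in the experiments below lie in G
in_G(1, 1)        % false: (1,1) lies on the boundary of the quadratic constraint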

Remark 2

Theorem 6 implies that we can apply the following easily implementable stopping criterion for GS-ADMM:

$$\begin{aligned} \max \left\{ \left\| x_i^k-x_i^{k+1}\right\| _\infty ,\left\| y_j^k-y_j^{k+1}\right\| _\infty , \left\| \mathcal {A}{\mathbf{x}}^{k+1}+\mathcal {B}{\mathbf{y}}^{k+1}-c\right\| _\infty \right\} \le tol, \end{aligned}$$

where \(tol >0\) is a given tolerance. On the other hand, Theorem 7 tells us that for any given compact set \(\mathcal {K}\subset \mathcal {M}\), if

$$\begin{aligned} \eta = \sup _{{\mathbf{w}}\in \mathcal {K}}\left\{ \Vert \mathbf {w}-{\mathbf{w}}^{1}\Vert _H^2+\xi _3\left\| \mathcal {A}{\mathbf{x}}^{1}+\mathcal {B}{\mathbf{y}}^{1}-c\right\| ^2+\xi _4\left\| {\mathbf{y}}^{1}-{\mathbf{y}}^{0}\right\| ^2_{H_\mathbf {y}}\right\} < \infty , \end{aligned}$$

we have

$$\begin{aligned} h({\mathbf{u}}_t)-h({\mathbf{u}})+ \left\langle {\mathbf{w}}_t-{\mathbf{w}}, \mathcal {J}({\mathbf{w}})\right\rangle \le \frac{\eta }{2t}, \end{aligned}$$

which shows that GS-ADMM has a worst-case \(\mathcal {O}(1/t)\) convergence rate in an ergodic sense.
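A minimal MATLAB sketch of the stopping test suggested above is given next; the variable names (x_old, x_new, y_old, y_new, Axb) are illustrative and assume the block variables are stacked into vectors and the residual \(\mathcal {A}{\mathbf{x}}^{k+1}+\mathcal {B}{\mathbf{y}}^{k+1}-c\) has been computed.

tol = 1e-7;                          % given tolerance
dx  = max(abs(x_new - x_old));       % max_i ||x_i^k - x_i^{k+1}||_inf
dy  = max(abs(y_new - y_old));       % max_j ||y_j^k - y_j^{k+1}||_inf
res = norm(Axb, inf);                % ||A x^{k+1} + B y^{k+1} - c||_inf
if max([dx, dy, res]) <= tol
    break;                           % terminate the main iteration loop
end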

4 Two special cases of GS-ADMM

Note that in GS-ADMM (7), the two proximal parameters \(\sigma _1\) and \(\sigma _2\) are required to be strictly positive for the general \((p+q)\)-block separable convex programming problem. However, when taking \(\sigma _1=\sigma _2=0\) for the two-block problem, i.e., \(p=q=1\), GS-ADMM reduces to the scheme (5), which is globally convergent [16]. These observations motivate us to further investigate the following two special cases:

$$\begin{aligned} \begin{array}{l} \text {(a) GS-ADMM (7) with}\ p\ge 1, q=1, \sigma _1 \in (p-1, \infty ) \text{ and } \sigma _2=0;\\ \text {(b) GS-ADMM (7) with}\ p=1, q\ge 1, \sigma _1=0 \text{ and } \sigma _2 \in (q-1, \infty ). \end{array} \end{aligned}$$

4.1 Convergence for the case (a)

The corresponding GS-ADMM for the first case (a) can be simplified as follows:

$$\begin{aligned} \quad \ \left\{ \begin{array}{lll} \text {For}\ i=1,2,\ldots ,p,\\ \quad x_{i}^{k+1}=\arg \min \limits _{x_{i}\in \mathcal {X}_{i}} \mathcal {L}_\beta \left( x_1^k,\ldots , x_i,\ldots ,x_p^k,y^k,\lambda ^k\right) +P_i^k(x_i), \\ \quad \text {where } P_i^k(x_i) = \frac{\sigma _1\beta }{2}\left\| A_{i}\left( x_{i}-x_{i}^k\right) \right\| ^2,\\ \lambda ^{k+\frac{1}{2}}=\lambda ^k-\tau \beta \left( \mathcal {A}{\mathbf{x}}^{k+1}+By^{k}-c\right) ,\\ y^{k+1}=\arg \min \limits _{y\in \mathcal {Y}} \mathcal {L}_\beta \left( {\mathbf{x}}^{k+1},y,\lambda ^{k+\frac{1}{2}}\right) , \\ \lambda ^{k+1}=\lambda ^{k+\frac{1}{2}}-s\beta \left( \mathcal {A}{\mathbf{x}}^{k+1}+By^{k+1}-c\right) . \end{array}\right. \end{aligned}$$
(74)

The corresponding matrices Q, M, H and G in this case are the following:

(75)

where \(H_\mathbf {x}\) is defined in (18) and

(76)
(77)
(78)
(79)

It can be verified that H in (78) is positive definite as long as

$$\begin{aligned} \sigma _1\in (p-1,+\infty ), \qquad (\tau ,s) \in \{(\tau ,s) \ | \ \tau +s>0, \tau <1 \}. \end{aligned}$$

Analogous to (44), we have

$$\begin{aligned}&\frac{1}{\beta } \Vert {\mathbf{w}}^k-\widetilde{{\mathbf{w}}}^k\Vert _G^2= \left\| \left( \begin{array}{l} A_1\left( x_1^k-x_1^{k+1}\right) \\ A_2\left( x_2^k-x_2^{k+1}\right) \\ \ \ \ \ \ \vdots \\ A_{p}\left( x_{p}^k-x_{p}^{k+1}\right) \end{array}\right) \right\| _{ H_{\mathbf {x},0}}^2\nonumber \\&\qquad +\,\, (2-\tau -s)\left\| \mathcal {A}{\mathbf{x}}^{k+1}+B y^{k+1}-c\right\| ^2 +(1-\tau )\left\| B\left( y^k -y^{k+1}\right) \right\| ^2\nonumber \\&\qquad +\,\, 2(1-\tau ) \left( \mathcal {A}{\mathbf{x}}^{k+1}+By^{k+1}-c\right) ^\mathsf{T}B\left( y^k -y^{k+1}\right) \nonumber \\&\quad \ge \xi _1 \sum \limits _{i=1}^{p}\left\| A_i\left( x_i^k-x_i^{k+1}\right) \right\| ^2+(2-\tau -s)\left\| \mathcal {A}{\mathbf{x}}^{k+1}+By^{k+1}-c\right\| ^2 \nonumber \\&\qquad +\,\, (1-\tau ) \left\| B\left( y^k -y^{k+1}\right) \right\| ^2 + 2(1-\tau ) \left( \mathcal {A}{\mathbf{x}}^{k+1}+B y^{k+1}-c\right) ^\mathsf{T}B\left( y^k -y^{k+1}\right) ,\nonumber \\ \end{aligned}$$
(80)

for some \(\xi _1 >0\), since \(H_{\mathbf {x},0}\) defined in (32) is positive definite. When \(\sigma _2 = 0\), by a slight modification of the proof of Lemma 6, we have the following lemma.

Lemma 7

Suppose \(\tau >-1\). The sequence \(\{{\mathbf{w}}^k\}\) generated by the algorithm (74) satisfies

$$\begin{aligned}&\left( \mathcal {A}{\mathbf{x}}^{k+1}+By^{k+1}-c\right) ^\mathsf{T}B\left( y^k-y^{k+1}\right) \nonumber \\&\quad \ge \frac{1-s}{1+\tau }\left( \mathcal {A}{\mathbf{x}}^{k}+By^{k}-c\right) ^\mathsf{T}B\left( y^k-y^{k+1}\right) -\frac{\tau }{1+\tau }\left\| B\left( y^k-y^{k+1}\right) \right\| ^2. \end{aligned}$$

Then, the following two theorems are similar to Theorems 5 and 6.

Theorem 8

Let the sequences \(\{{\mathbf{w}}^k\}\) and \(\{ \widetilde{{\mathbf{w}}}^k \}\) be generated by the algorithm (74). For any

$$\begin{aligned} \sigma _1\in (p-1,+\infty ) \quad \text{ and } \quad (\tau ,s) \in {\mathcal {K}}, \end{aligned}$$

where \({\mathcal {K}}\) is given in (8), there exist constants \(\xi _i(i=1,2)>0\) and \(\xi _3\ge 0\), such that

$$\begin{aligned} \left\| {\mathbf{w}}^k-\widetilde{{\mathbf{w}}}^k\right\| _G^2\ge & {} \xi _1 \sum \limits _{i=1}^{p}\Vert A_i\left( x_i^k-x_i^{k+1}\right) \Vert ^2 + \xi _2\left\| \mathcal {A}{\mathbf{x}}^{k+1}+ By^{k+1}-c\right\| ^2 \\&+\,\, \xi _3\left( \left\| \mathcal {A}{\mathbf{x}}^{k+1}+ By^{k+1}-c\right\| ^2-\left\| \mathcal {A}{\mathbf{x}}^{k}+ By^{k}-c\right\| ^2\right) \nonumber . \end{aligned}$$
(81)

Proof

For any \((\tau , s) \in {\mathcal {K}}\), we have \(\tau > -1\). Then, by Lemma 7, the inequality (80) and the Cauchy–Schwarz inequality (55), we can deduce that (81) holds with \(\xi _1\) given in (80), and \(\xi _2\) and \(\xi _3\) given in (58) and (57), respectively. \(\square \)

Theorem 9

Let the sequences \(\{{\mathbf{w}}^k\}\) and \(\{ \widetilde{{\mathbf{w}}}^k \}\) be generated by the algorithm (74). For any

$$\begin{aligned} \sigma _1\in (p-1,+\infty ) \quad \text{ and } \quad (\tau ,s) \in {\mathcal {K}}_1, \end{aligned}$$
(82)

where

$$\begin{aligned} {\mathcal {K}}_1 = \left\{ (\tau , s) \ | \ \tau + s>0, \ \tau <1, \ -\tau ^2 - s^2 -\tau s + \tau + s + 1 >0 \right\} , \end{aligned}$$

we have

$$\begin{aligned} \lim _{k\rightarrow \infty } \sum \limits _{i=1}^{p}\left\| A_i\left( x_i^k-x_i^{k+1}\right) \right\| ^2=0 \quad \text{ and } \quad \lim _{k\rightarrow \infty } \left\| \mathcal {A}{\mathbf{x}}^k+By^k-c\right\| = 0, \end{aligned}$$
(83)

and there exists a \({\mathbf{w}}^\infty \in \mathcal {M}^*\) such that \(\lim \limits _{k\rightarrow \infty } \widetilde{{\mathbf{w}}}^{k} = {\mathbf{w}}^\infty . \)

Proof

First, it is clear that Theorem 3 still holds, which, combined with Theorem 8, gives

$$\begin{aligned}&\Vert {\mathbf{w}}^{k+1}-{\mathbf{w}}^*\Vert _H^2+ \xi _3\left\| \mathcal {A}{\mathbf{x}}^{k+1}+By^{k+1}-c\right\| ^2\nonumber \\&\quad \le \Vert {\mathbf{w}}^{k}-{\mathbf{w}}^*\Vert _H^2+\xi _3\left\| \mathcal {A}{\mathbf{x}}^{k}+By^{k}-c\right\| ^2 \nonumber \\&\quad - \xi _1 \sum \limits _{i=1}^{p}\Vert A_i\left( x_i^k-x_i^{k+1}\right) \Vert ^2 - \xi _2\left\| \mathcal {A}{\mathbf{x}}^{k+1}+ By^{k+1}-c\right\| ^2, \quad \forall {\mathbf{w}}^*\in \mathcal {M}^*.\qquad \end{aligned}$$
(84)

Hence, (83) holds. For \((\sigma _1, \tau , s)\) satisfying (82), H in (78) is positive definite. So, by (84), \(\{{\mathbf{w}}^k\}\) is uniformly bounded and therefore, there exists a subsequence \(\{{\mathbf{w}}^{k_j}\}\) converging to a point \({\mathbf{w}}^\infty = ({\mathbf{x}}^\infty , y^\infty , \lambda ^\infty ) \in {\mathcal {M}}\). Then, it follows from (83) and the full column rank assumption on all the matrices \(A_i\) that

$$\begin{aligned} \lim _{k\rightarrow \infty } x_i^k-\widetilde{x}_i^k = \lim _{k\rightarrow \infty } x_i^k-x_i^{k+1} = {\mathbf{0}} \quad \text{ and } \quad \lim _{k\rightarrow \infty } \lambda ^k-\widetilde{\lambda }^k = {\mathbf{0}}, \end{aligned}$$
(85)

for all \(i=1, \ldots , p\). Hence, by \(\lim \limits _{j \rightarrow \infty } {\mathbf{w}}^{k_j} = {\mathbf{w}}^\infty \) and (83), we have

$$\begin{aligned} \lim _{j \rightarrow \infty } {\mathbf{x}}^{k_j+1} = {\mathbf{x}}^\infty \quad \text{ and } \quad \mathcal {A}{\mathbf{x}}^\infty +By^\infty -c = {\mathbf{0}}, \end{aligned}$$

and therefore, by the full column rank assumption of B and (83),

$$\begin{aligned} \lim _{j \rightarrow \infty } y^{k_j+1} = \lim \limits _{j \rightarrow \infty } \widetilde{y}^{k_j} = y^\infty . \end{aligned}$$

Hence, by (85), we have \( \lim \limits _{j \rightarrow \infty } {\mathbf{w}}^{k_j} - \widetilde{{\mathbf{w}}}^{k_j} = {\mathbf{0}}\). Thus, taking \(k = k_j\) in (16) and letting j tend to \(\infty \), the inequality (65) still holds. The rest of the proof follows the same arguments as in the proof of Theorem 6. \(\square \)

Based on Theorem 8 and by a proof similar to that of Theorem 7, we can also show that the algorithm (74) has a worst-case \({\mathcal {O}}(1/t)\) convergence rate; the details are omitted here for conciseness.

4.2 Convergence for the case (b)

The corresponding GS-ADMM for the case (b) can be simplified as follows:

$$\begin{aligned} \ \left\{ \begin{array}{lll} x^{k+1}=\arg \min \limits _{x\in \mathcal {X}} \mathcal {L}_\beta (x,{\mathbf{y}}^k,\lambda ^k) \\ \lambda ^{k+\frac{1}{2}}=\lambda ^k-\tau \beta (Ax^{k+1}+\mathcal {B}{\mathbf{y}}^{k}-c),\\ \text {For}\ j=1,2,\ldots ,q,\\ \quad y_{j}^{k+1}=\arg \min \limits _{y_{j}\in \mathcal {Y}_{j}} \mathcal {L}_\beta (x^{k+1},y_1^k,\ldots , y_j,\ldots ,y_q^k,\lambda ^{k+\frac{1}{2}}) + Q_j^k(y_j), \\ \quad \text {where } Q_j^k(y_j) = \frac{\sigma _2\beta }{2}\left\| B_{j}(y_{j}-y_{j}^k)\right\| ^2,\\ \lambda ^{k+1}=\lambda ^{k+\frac{1}{2}}-s\beta (Ax^{k+1}+\mathcal {B}{\mathbf{y}}^{k+1}-c). \end{array}\right. \end{aligned}$$
(86)

In this case, the corresponding matrices Q, M, H and G become \(\widetilde{Q},\widetilde{M}, \widetilde{H}\) and \(\widetilde{G}\), which are defined in (19), the lower-right block of M in (25), (29) and (41), respectively.

In what follows, let us define

$$\begin{aligned} {\mathbf{v}}^k=\left( \begin{array}{c} {\mathbf{y}}^k\\ \lambda ^k \\ \end{array}\right) \quad \text{ and } \quad \widetilde{{\mathbf{v}}}^k=\left( \begin{array}{c} \widetilde{{\mathbf{y}}}^k\\ \widetilde{\lambda }^k\\ \end{array}\right) . \end{aligned}$$

Then, by the proof of Theorem 5, we can deduce the following theorem.

Theorem 10

Let the sequences \(\{\mathbf {v}^k\}\) and \(\{ \widetilde{\mathbf {v}}^k\}\) be generated by the algorithm (86). For any

$$\begin{aligned} \sigma _2 \in (q-1,+\infty ) \quad \text{ and } \quad (\tau ,s) \in {\mathcal {K}}, \end{aligned}$$

where \({\mathcal {K}}\) is defined in (8), there exist constants \(\xi _i(i=1,2)>0\) and \(\xi _j(j=3,4)\ge 0\) such that

$$\begin{aligned} \left\| {\mathbf{v}}^k-\widetilde{{\mathbf{v}}}^k\right\| _{\widetilde{G}}^2\ge & {} \xi _1 \sum \limits _{j=1}^{q}\left\| B_j\left( y_j^k-y_j^{k+1}\right) \right\| ^2 + \xi _2\left\| Ax^{k+1}+\mathcal {B}{\mathbf{y}}^{k+1}-c\right\| ^2 \nonumber \\&+\,\, \xi _3\left( \left\| Ax^{k+1}+\mathcal {B}{\mathbf{y}}^{k+1}-c\right\| ^2-\left\| Ax^{k}+\mathcal {B}{\mathbf{y}}^{k}-c\right\| ^2\right) \nonumber \\&+ \,\xi _4\left( \left\| {\mathbf{y}}^{k+1}-{\mathbf{y}}^k\right\| ^2_{H_\mathbf {y}}-\left\| {\mathbf{y}}^{k}-{\mathbf{y}}^{k-1}\right\| ^2_{H_\mathbf {y}}\right) . \end{aligned}$$

By slight modifications of the proofs of Theorems 6 and 9, we have the following global convergence theorem.

Theorem 11

Let the sequences \(\{{\mathbf{w}}^k\}\) and \(\{ \widetilde{{\mathbf{w}}}^k \}\) be generated by the algorithm (86). Then, for any

$$\begin{aligned} \sigma _2 \in (q-1,+\infty ) \quad \text{ and } \quad (\tau ,s) \in {\mathcal {K}}, \end{aligned}$$

where \({\mathcal {K}}\) is defined in (8), we have

$$\begin{aligned} \lim _{k\rightarrow \infty } \sum \limits _{j=1}^{q}\left\| B_j\left( y_j^k-y_j^{k+1}\right) \right\| ^2=0 \quad \text{ and } \quad \lim _{k\rightarrow \infty } \left\| Ax^k+{\mathcal {B}}{\mathbf{y}}^k-c\right\| = 0, \end{aligned}$$

and there exists a \({\mathbf{w}}^\infty \in \mathcal {M}^*\) such that \(\lim _{k\rightarrow \infty } \widetilde{{\mathbf{w}}}^{k} = {\mathbf{w}}^\infty . \)

By a proof similar to that of Theorem 7, the algorithm (86) also has a worst-case \({\mathcal {O}}(1/t)\) convergence rate.

Remark 3

Again, replacing the index \(k+1\) by k for the x and \(\lambda \) iterates, the algorithm (74) can also be written as

$$\begin{aligned} \ \left\{ \begin{array}{lll} y^{k+1}=\arg \min \limits _{y\in \mathcal {Y}} \mathcal {L}_\beta ({\mathbf{x}}^k,y,\lambda ^{k-\frac{1}{2}}), \\ \lambda ^{k}=\lambda ^{k-\frac{1}{2}}-s\beta (\mathcal {A}{\mathbf{x}}^k+By^{k+1}-c) \\ \text {For}\ i=1,2,\ldots ,p,\\ \quad x_{i}^{k+1}=\arg \min \limits _{x_{i}\in \mathcal {X}_{i}} \mathcal {L}_\beta (x_1^k,\ldots , x_i,\ldots ,x_p^k,y^{k+1},\lambda ^k) +P_i^k(x_i), \\ \quad \text {where } P_i^k(x_i) = \frac{\sigma _1\beta }{2}\left\| A_{i}(x_{i}-x_{i}^k)\right\| ^2,\\ \lambda ^{k+\frac{1}{2}}=\lambda ^k-\tau \beta (\mathcal {A}{\mathbf{x}}^{k+1}+B y^{k+1}-c). \end{array}\right. \end{aligned}$$

So, applying Theorem 11 to the above algorithm, we know that the algorithm (74) also converges globally when \((\sigma _1, \tau , s)\) satisfy

$$\begin{aligned} \sigma _1\in (p-1,+\infty ) \quad \text{ and } \quad (\tau ,s) \in \overline{{\mathcal {K}}}, \end{aligned}$$

where \(\overline{{\mathcal {K}}}\) is given in (72). Hence, the convergence domain \({\mathcal {K}}_1\) in Theorem 9 can be enlarged to the symmetric domain \({\mathcal {G}} = {\mathcal {K}}_1 \cup \overline{{\mathcal {K}}}\) given in (73). For the same reason, the convergence domain \({\mathcal {K}}\) in Theorem 11 can be enlarged to \({\mathcal {G}}\) as well.

5 Numerical experiments

In this section, we investigate the performance of the proposed GS-ADMM for solving a class of sparse matrix minimization problems. All algorithms are coded in MATLAB 7.10 (R2010a) and run on a PC with an Intel Core i5 processor (3.3 GHz) and 4 GB of memory.

5.1 Test problem

Consider the following Latent Variable Gaussian Graphical Model Selection (LVGGMS) problem arising in statistical learning [2, 20]:

$$\begin{aligned} \begin{array}{lll} \min \limits _{X,S,L\in \mathcal {R}^{n\times n}} &{} F(X,S,L):=\langle X, C\rangle -\log \det (X)+ \nu \Vert S\Vert _1+\mu tr(L) \\ \text {s.t. } &{} X-S+L=\mathbf 0 ,\ L\succeq \mathbf 0 , \end{array} \end{aligned}$$
(87)

where \(C\in \mathcal {R}^{n\times n}\) is the covariance matrix obtained from observations, \(\nu \) and \(\mu \) are two given positive weight parameters, tr(L) stands for the trace of the matrix L and \(\Vert S\Vert _1=\sum _{ij}|S_{ij}|\). Clearly, with two different ways of partitioning the variables of (87), the GS-ADMM (7) leads to the following two algorithms:

$$\begin{aligned} \left\{ \begin{array}{lll} X^{k+1}=\arg \min \limits _{X} \left\{ \langle X,C\rangle -\log \det (X)+\frac{\beta }{2}\left\| X-S^k+L^k-\frac{\varLambda ^{k}}{\beta }\right\| _F^2 +\frac{\sigma _1\beta }{2}\left\| X-X^k\right\| _F^2\right\} , \\ S^{k+1}=\arg \min \limits _{S} \left\{ \nu \Vert S\Vert _1+\frac{\beta }{2}\left\| X^k-S+L^k-\frac{\varLambda ^{k}}{\beta }\right\| _F^2 +\frac{\sigma _1\beta }{2}\left\| S-S^k\right\| _F^2\right\} , \\ \varLambda ^{k+\frac{1}{2}}=\varLambda ^{k}-\tau \beta (X^{k+1}-S^{k+1}+L^k), \\ L^{k+1}=\arg \min \limits _{L\succeq \mathbf 0 } \left\{ \mu tr(L)+\frac{\beta }{2}\left\| X^{k+1}-S^{k+1}+L-\frac{\varLambda ^{k+\frac{1}{2}}}{\beta }\right\| _F^2 +\frac{\sigma _2\beta }{2}\left\| L-L^k\right\| _F^2\right\} , \\ \varLambda ^{k+1}=\varLambda ^{k+\frac{1}{2}}-s\beta (X^{k+1}-S^{k+1}+L^{k+1}); \end{array}\right. \end{aligned}$$
(88)
$$\begin{aligned} \left\{ \begin{array}{lll} X^{k+1}=\arg \min \limits _{X} \left\{ \langle X,C\rangle -\log \det (X)+\frac{\beta }{2}\left\| X-S^k+L^k-\frac{\varLambda ^{k}}{\beta }\right\| _F^2 +\frac{\sigma _1\beta }{2}\left\| X-X^k\right\| _F^2\right\} , \\ \varLambda ^{k+\frac{1}{2}}=\varLambda ^{k}-\tau \beta (X^{k+1}-S^{k}+L^k), \\ S^{k+1}=\arg \min \limits _{S} \left\{ \nu \Vert S\Vert _1+\frac{\beta }{2}\left\| X^{k+1}-S+L^k-\frac{\varLambda ^{k+\frac{1}{2}}}{\beta }\right\| _F^2 +\frac{\sigma _2\beta }{2}\left\| S-S^k\right\| _F^2\right\} , \\ L^{k+1}=\arg \min \limits _{L\succeq \mathbf 0 } \left\{ \mu tr(L)+\frac{\beta }{2}\left\| X^{k+1}-S^{k}+L-\frac{\varLambda ^{k+\frac{1}{2}}}{\beta }\right\| _F^2 +\frac{\sigma _2\beta }{2}\left\| L-L^k\right\| _F^2\right\} , \\ \varLambda ^{k+1}=\varLambda ^{k+\frac{1}{2}}-s\beta (X^{k+1}-S^{k+1}+L^{k+1}). \end{array}\right. \end{aligned}$$
(89)

Note that all the subproblems in (88) and (89) have closed-form solutions. Next, we take the scheme (88) as an example to show how to obtain these explicit solutions. By the first-order optimality condition of the X-subproblem in (88), we derive

$$\begin{aligned} \mathbf 0 = C-X^{-1}+\beta \left( X-S^k+L^k-{\varLambda ^{k}}/{\beta }\right) +\sigma _1\beta (X-X^k) \end{aligned}$$

which is equivalent to

$$\begin{aligned} (\sigma _1+1)\beta X^2 +\left[ C+\beta (L^k-S^k)-\varLambda ^k-\sigma _1\beta X^k\right] X-\mathbf I =\mathbf 0 . \end{aligned}$$

Then, from the eigenvalue decomposition

$$\begin{aligned} U\text {Diag}(\rho )U^\mathsf{T}= C+\beta (L^k-S^k)-\varLambda ^k-\sigma _1\beta X^k, \end{aligned}$$

where \(\text {Diag}(\rho )\) is a diagonal matrix with \(\rho _i, i=1, \ldots , n\), on the diagonal, we obtain that the solution of the X-subproblem in (88) is

$$\begin{aligned} X^{k+1}=U\text {Diag}(\gamma )U ^\mathsf{T}, \end{aligned}$$

where \(\text {Diag}(\gamma )\) is the diagonal matrix with diagonal elements

$$\begin{aligned} \gamma _i=\frac{-\rho _i+\sqrt{\rho _i^2+4(\sigma _1+1)\beta }}{2(\sigma _1+1)\beta }, \quad i=1,2,\ldots ,n. \end{aligned}$$

On the other hand, the S-subproblem in (88) is equivalent to

$$\begin{aligned} \begin{array}{lll} S^{k+1}&{}=&{}\arg \min \limits _{S} \left\{ \nu \Vert S\Vert _1+ \frac{(\sigma _1+1)\beta }{2}\left\| S-\frac{X^k+L^k+\sigma _1S^k-\varLambda ^k/\beta }{(\sigma _1+1) }\right\| _F^2\right\} \\ &{}=&{} \text {Shrink}\left( \frac{X^k+L^k+\sigma _1S^k-\varLambda ^k/\beta }{(\sigma _1+1) }, \frac{\nu }{(\sigma _1+1)\beta }\right) , \end{array} \end{aligned}$$

where \(\text {Shrink}(\cdot ,\cdot )\) is the soft shrinkage operator (see, e.g., [22]). Meanwhile, it is easy to verify that the L-subproblem is equivalent to

$$\begin{aligned} \begin{array}{lll} L^{k+1}&{}=&{}\arg \min \limits _{L\succeq \mathbf 0 } \frac{(\sigma _2+1)\beta }{2}\left\| L-\widetilde{L}\right\| _F^2\\ &{}=&{} V\text {Diag}(\max \{\rho ,{\mathbf{0}}\})V^\mathsf{T}, \end{array} \end{aligned}$$

where \(\max \{\rho , {\mathbf{0}}\}\) is taken component-wise and \(V\text {Diag}(\rho )V^\mathsf{T}\) is the eigenvalue decomposition of the matrix

$$\begin{aligned} \widetilde{L}=\frac{\sigma _2 L^k+S^{k+1}+\varLambda ^{k+\frac{1}{2}}/\beta -X^{k+1}-\mu \mathbf I /\beta }{(\sigma _2+1)}. \end{aligned}$$
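Accordingly, the S- and L-updates of (88) can be coded as below; this MATLAB sketch keeps the naming assumptions of the X-update above (X_new is \(X^{k+1}\), Lambda_half is \(\varLambda ^{k+\frac{1}{2}}\), and n, nu, mu, sigma2 are the problem parameters).

% Soft shrinkage operator applied entrywise.
shrink = @(Z, kappa) sign(Z).*max(abs(Z) - kappa, 0);

% Closed-form S-update of (88) (uses X^k and L^k, i.e., the Jacobi step of the first group).
S_new = shrink((X + L + sigma1*S - Lambda/beta)/(sigma1 + 1), nu/((sigma1 + 1)*beta));

% Closed-form L-update of (88): project Ltil onto the positive semidefinite cone.
Ltil  = (sigma2*L + S_new + Lambda_half/beta - X_new - mu*eye(n)/beta)/(sigma2 + 1);
Ltil  = (Ltil + Ltil')/2;
[V, D] = eig(Ltil);
L_new = V*diag(max(diag(D), 0))*V';                  % L^{k+1}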

5.2 Numerical results

In the following, we investigate the performance of several algorithms for solving the LVGGMS problem, where all the corresponding subproblems can be solved in a similar way as shown in the above analysis. For all algorithms, the maximal number of iterations is set to 1000, the initial values are set to \((X^0,S^0, L^0,\varLambda ^0)=(\mathbf I ,2\mathbf I ,\mathbf I ,\mathbf 0 )\), and, motivated by Remark 2, the following stopping conditions are used:

$$\begin{aligned} \text {IER(k)}:= & {} \max \left\{ \left\| X^k-X^{k-1}\right\| _\infty ,\left\| S^k-S^{k-1}\right\| _\infty , \left\| L^k-L^{k-1}\right\| _\infty \right\} \le \text {TOL}, \\ \text {OER(k)}:= & {} \frac{|F(X^{k},S^{k},L^{k})-F^*|}{|F^*|}\le \text {Tol}, \end{aligned}$$

together with \(\text {CER(k)}:=\left\| X^{k}-S^{k}+L^{k}\right\| _F\le 10^{-4}\), where \(F^*\) is the approximate optimal objective function value obtained by running GS-ADMM (88) for 1000 iterations. In (87), we set \((\nu ,\mu )=(0.005,0.05)\), and the data matrix C is randomly generated with \(m=100\) by the following MATLAB code, which is downloaded from S. Boyd's homepage (see Footnote 2):

[MATLAB data-generation script from S. Boyd's homepage; listing omitted]
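While the data-generation script itself is not reproduced here, the error measurements defined above are easy to evaluate; a minimal MATLAB sketch (with illustrative names F_cur for the current objective value and F_opt for \(F^*\)) is the following, reading the combined criterion as requiring all three conditions.

% Error measurements of Sect. 5.2 at iteration k.
IER  = max([max(abs(X(:)-X_old(:))), max(abs(S(:)-S_old(:))), max(abs(L(:)-L_old(:)))]);
OER  = abs(F_cur - F_opt)/abs(F_opt);
CER  = norm(X - S + L, 'fro');
stop = (IER <= TOL) && (OER <= Tol) && (CER <= 1e-4);   % stop when all hold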

5.2.1 Performance of different versions of GS-ADMM

In the following, we denote:

  • GS-ADMM (88) by “GS-ADMM-I”;
  • GS-ADMM (89) by “GS-ADMM-II”;
  • GS-ADMM (88) with \(\sigma _2=0\) by “GS-ADMM-III”;
  • GS-ADMM (89) with \(\sigma _1=0\) by “GS-ADMM-IV”.
Table 1 Numerical results of GS-ADMM-I, II, III and IV with different \(\beta \)

First, we investigate the performance of the above different versions of GS-ADMM for solving the LVGGMS problem with varying values of the penalty parameter \(\beta \). The results are reported in Table 1 with \(\text {TOL}=\text {Tol}=1.0\times 10^{-7}\) and \((\tau ,s)=(0.8,1.17)\). For GS-ADMM-I and GS-ADMM-II, \((\sigma _1,\sigma _2)=(2,3)\). Here, “Iter” and “CPU” denote the iteration number and the CPU time in seconds, respectively, and bold entries indicate the best result of each algorithm. From Table 1, we can observe the following:

  • For all the algorithms, the iteration number and the CPU time follow a similar pattern: they first decrease and then increase as the value of \(\beta \) decreases.

  • For the same value of \(\beta \), the results of GS-ADMM-III are slightly better than those of the other algorithms in terms of the iteration number, the CPU time, and the errors CER, IER and OER.

  • GS-ADMM-III with \(\beta =0.5\) terminates after 579 iterations to achieve the tolerance \(10^{-7}\), while the other algorithms with \(\beta =0.5\) fail to achieve this tolerance within the given number of iterations.

In general, the scheme (88) with \(\beta = 0.06\) performs better than the other cases. Hence, in the following experiments, we adopt GS-ADMM-III with the default value \(\beta = 0.06\). Also note that the choice \(\sigma _2 =0\) is not allowed by the algorithms discussed in [15].

Table 2 Numerical results of GS-ADMM-III with different stepsizes \((\tau , s)\)

Second, we investigate how different values of the stepsizes \((\tau ,s)\in {\mathcal {G}}\) affect the performance of GS-ADMM-III. Table 2 reports the comparison results for varying \((\tau ,s)\) with \(\text {TOL}=\text {Tol}=1.0\times 10^{-5}\). One obvious observation from Table 2 is that both the iteration number and the CPU time decrease as s (or \(\tau \)) increases with the value of \(\tau \) (or s) fixed, which indicates that the stepsizes \((\tau ,s)\in {\mathcal {G}}\) can influence the performance of GS-ADMM significantly. In addition, the results in Table 2 indicate that using relatively large stepsizes for both \(\tau \) and s in the dual updates often gives the best convergence speed. Comparing all the results reported in Table 2, GS-ADMM-III with \((\tau ,s)=(0.9,1.09)\) gives the best overall performance for solving the problem (87).

Table 3 Comparative results of different algorithms under different tolerances

5.2.2 Comparison of GS-ADMM with other state-of-the-art algorithms

In this subsection, we carry out a numerical comparison for solving the problem (87) by GS-ADMM-III and the following four methods:

  • The Proximal Jacobian Decomposition of ALM [17] (denoted by “PJALM”);

  • The splitting method in [14] (denoted by “HTY”);

  • The generalized parametrized proximal point algorithm [1] (denoted by “GR-PPA”).

  • The twisted version of the proximal ADMM [23] (denoted by “T-ADMM”).

We set \((\tau ,s)=(0.9,1.09)\) for GS-ADMM-III and the parameter \(\beta = 0.05\) for all the compared algorithms. The default parameters \(\mu =2.01\) and \(H = \beta {\mathbf{I}}\) are used for HTY [14]. As suggested by the theory and numerical experiments in [17], the proximal parameter of PJALM is set to 2. As shown in [1], the relaxation factor of GR-PPA is set to 1.8 and the other default parameters are chosen as

$$\begin{aligned} (\sigma _1,\sigma _2,\sigma _3,s,\tau ,\varepsilon )=\left( 0.178,0.178,0.178, 10,\frac{\sqrt{5}-1}{2},\frac{\sqrt{5}-1}{2}\right) . \end{aligned}$$

For T-ADMM, the symmetric matrices therein are chosen as \(M_1=M_2=v\mathbf I \) with \(v=\beta \) and the correction factor is set to \(a=1.6\) [23]. The results obtained by the above algorithms under different tolerances are reported in Table 3. With the tolerances fixed at \(\text {TOL}=10^{-9}\) and \(\text {Tol}=10^{-15}\), the convergence behavior of the error measurements IER(k) and OER(k) for the five algorithms with different starting points is shown in Figs. 3, 4 and 5. From Table 3 and Figs. 3, 4 and 5, we have the following observations:

Fig. 3 Convergence curves of IER and OER with initial values \((X^0,S^0, L^0,\varLambda ^0)=(\mathbf I ,2\mathbf I ,\mathbf I ,\mathbf 0 )\)

Fig. 4 Convergence curves of IER and OER with initial values \((X^0,S^0, L^0,\varLambda ^0)=(\mathbf I ,\mathbf I ,\mathbf 0 ,\mathbf 0 )\)

  • Under all different tolerances, GS-ADMM-III performs significantly better than the other four algorithms in both the number of iterations and the CPU time.

  • GR-PPA is slightly better than PJALM and HTY, and T-ADMM outperforms PJALM, HTY and GR-PPA.

  • The convergence curves in Figs. 3, 4 and 5 illustrate that, for different starting points, GS-ADMM-III still converges fastest among the compared methods.

All these numerical results demonstrate the effectiveness and robustness of GS-ADMM-III, which is perhaps due to the symmetric updating of the Lagrange multipliers and the proper choice of the stepsizes.

Fig. 5 Convergence curves of IER and OER with initial values \((X^0,S^0, L^0,\varLambda ^0)=(\mathbf I ,4\mathbf I ,3\mathbf I ,\mathbf 0 )\)

6 Conclusion

Since the direct extension of ADMM in a Gauss–Seidel fashion for solving the three-block separable convex optimization problem is not necessarily convergent, as shown by Chen et al. [3], there has been a constantly increasing interest in developing and improving the theory of the ADMM for solving multi-block separable convex optimization problems. In this paper, we propose an algorithm, called GS-ADMM, which solves the general model (1) by taking advantage of its multi-block structure. In GS-ADMM, the two grouped variables are updated in a Gauss–Seidel fashion, while the block variables within each group are updated in a Jacobi scheme, which makes the algorithm attractive and effective for solving large-scale problems. We provide a new convergence domain for the stepsizes of the dual variables, which is significantly larger than the convergence domains given in the literature. Global convergence as well as the \({\mathcal {O}}(1/t)\) ergodic convergence rate of GS-ADMM is established. In addition, two special cases of GS-ADMM, which allow one of the proximal parameters to be zero, are also discussed.

This paper simplifies the analysis in [16] and provides an easy way to analyze the convergence of the symmetric ADMM. Our preliminary numerical experiments show that, with a proper choice of parameters, the performance of GS-ADMM can be very promising. Besides, from the presented convergence analysis, we can see that the theory in this paper extends naturally to more general proximal terms, such as \(P_i^k(x_i) := \frac{\beta }{2}\Vert x_i - x_i^k\Vert _{{\mathbf{P}}_i}^2\) and \(Q_j^k(y_j) := \frac{ \beta }{2}\Vert y_j - y_j^k\Vert _{{\mathbf{Q}}_j}^2\) in (7), where \({\mathbf{P}}_i\) and \({\mathbf{Q}}_j\) are matrices such that \({\mathbf{P}}_i \succ (p-1) A_i ^\mathsf{T}A_i\) and \({\mathbf{Q}}_j \succ (q-1) B_j ^\mathsf{T}B_j\) for all \(i=1, \ldots , p\) and \(j=1, \ldots , q\). Finally, the different ways of partitioning the variables of the problem also give GS-ADMM additional flexibility.