1 Introduction

In this paper, we consider the following nonconvex and nonsmooth optimization problem with multiple block variables:

$$\begin{aligned} \begin{array}{ll} \min &{} f(x_1,x_2,\ldots , x_N) + \sum \limits _{i=1}^{N-1} r_i(x_i) \\ s.t. &{} \sum \limits _{i=1}^N A_i x_i = b, \ x_i\in \mathcal {X}_i, \ i=1,\ldots ,N-1, \end{array} \end{aligned}$$
(1.1)

where f is differentiable and possibly nonconvex, and each \(r_i\) is possibly nonsmooth and nonconvex, \(i=1,\ldots , N-1\); \(A_i\in \mathbb {R}^{m\times n_i}\), \(b\in \mathbb {R}^m\), \(x_i\in \mathbb {R}^{n_i}\); and \(\mathcal {X}_i\subseteq \mathbb {R}^{n_i}\) are convex sets, \(i=1,2,\ldots ,N-1\). One restriction of model (1.1) is that the objective function is required to be smooth with respect to the last block variable \(x_N\). However, in Sect. 4 we shall extend the results to cover the general case where \(r_N(x_N)\) may be present and \(x_N\) may be constrained as well. A special case of (1.1) arises when the affine constraints are absent and there is no block structure in the variables (i.e., \(x = x_1\) and the other block variables do not appear in (1.1)), which leads to the following more compact form

$$\begin{aligned} \min \ \Phi (x) := f(x) + r(x), \ s.t. \ x \in S \subset \mathbb {R}^n, \end{aligned}$$
(1.2)

where S is a convex and compact set. In this paper, we propose several first-order algorithms for computing an \(\epsilon \)-stationary solution (to be defined later) for (1.1) and (1.2), and analyze their iteration complexities. Throughout, we assume the following condition.

Assumption 1.1

The sets of the stationary solutions for (1.1) and (1.2) are non-empty.

Problem (1.1) arises in a variety of interesting applications. For example, one nonconvex model for matrix robust PCA, which seeks to decompose a given matrix \(M\in \mathbb {R}^{m\times n}\) into a superposition of a low-rank matrix Z, a sparse matrix E and a noise matrix B, can be cast as follows (see, e.g., [51]):

$$\begin{aligned} \min _{X,Y,Z,E,B} \ \Vert Z-XY^\top \Vert _F^2 + \alpha \mathcal {R}(E), \ s.t. \ M = Z + E + B, \ \Vert B\Vert _F \le \eta , \end{aligned}$$
(1.3)

where \(X\in \mathbb {R}^{m\times r}\), \(Y\in \mathbb {R}^{n\times r}\), with \(r < \min (m,n)\) being the estimated rank of Z; \(\eta >0\) is the noise level, \(\alpha >0\) is a weighting parameter; and \(\mathcal {R}(E)\) is a regularization function that promotes sparsity of E. One of the widely used regularization functions is the \(\ell _1\) norm, which is convex and nonsmooth. However, there are also many nonconvex regularization functions that are widely used in statistical learning and information theory, such as the smoothly clipped absolute deviation (SCAD) [23], the log-sum penalty (LSP) [15], the minimax concave penalty (MCP) [58], and the capped-\(\ell _1\) penalty [59, 60]; these are nonsmooth at the point 0 when composed with the absolute value function, which is usually the case in statistical learning. Clearly (1.3) is in the form of (1.1). Another example of the form (1.1) is the following nonconvex tensor robust PCA model (see, e.g., [55]), which seeks to decompose a given tensor \(\mathcal {T}\in \mathbb {R}^{n_1\times n_2\times \cdots \times n_d}\) into a superposition of a low-rank tensor \(\mathcal {Z}\), a sparse tensor \(\mathcal {E}\) and a noise tensor \(\mathcal {B}\):

$$\begin{aligned}&\min _{X_i,\mathcal {C},\mathcal {Z},\mathcal {E},\mathcal {B}} \ \Vert \mathcal {Z}-\mathcal {C}\times _1 X_1\times _2 X_2 \times _3 \cdots \times _d X_d\Vert _F^2 + \alpha \mathcal {R}( \mathcal {E}), \\&\quad s.t. \ \mathcal {T}= \mathcal {Z}+ \mathcal {E}+ \mathcal {B}, \ \Vert \mathcal {B}\Vert _F \le \eta , \end{aligned}$$

where \(\mathcal {C}\) is the core tensor, which has a smaller size than \(\mathcal {Z}\), and \(X_i\) are matrices of appropriate sizes, \(i=1,\ldots ,d\). In fact, the “low-rank” tensor in the above model corresponds to a tensor with a small core; however, a recent work [35] demonstrates that the CP-rank of the core, regardless of its size, could be as large as that of the original tensor. Therefore, if one wants to find a low CP-rank decomposition, then the following model is preferred:

$$\begin{aligned} \min _{X_i,\mathcal {Z}, \mathcal {E}, \mathcal {B}} \Vert \mathcal {Z}-\llbracket X_1,X_2,\ldots ,X_{d} \rrbracket \Vert ^{2}_F+\alpha \, \mathcal {R}(\mathcal {E}) + \alpha _{\mathcal {N}}\Vert \mathcal {B}\Vert _F^2, \ s.t. \ \mathcal {T}= \mathcal {Z}+ \mathcal {E}+ \mathcal {B}, \end{aligned}$$

for \(X_i = [a^{i,1},a^{i,2},\ldots , a^{i,R}]\in \mathbb {R}^{n_i \times R}\), \(1\le i \le d\), and \(\llbracket X_1,X_2,\ldots , X_d\rrbracket := \sum \nolimits _{r=1}^{R}a^{1,r}\otimes a^{2,r}\otimes \cdots \otimes a^{d,r},\) where “\(\otimes \)” denotes the outer product of vectors and R is an estimate of the CP-rank. In addition, the so-called sparse tensor PCA problem [1], which seeks the best sparse rank-one approximation of a given d-th order tensor \(\mathcal {T}\), can also be formulated in the form of (1.1):

$$\begin{aligned} \min \ -\mathcal {T}(x_1, x_2, \ldots , x_d) + \alpha \sum _{i=1}^d \mathcal {R}( x_i ), \ s.t. \ x_i \in S_i = \{x \,|\, \Vert x\Vert _2^2 \le 1\}, \ i=1,2,\ldots ,d, \end{aligned}$$
(1.4)

where \(\mathcal {T}(x_1, x_2, \ldots , x_d) = \sum _{i_1,\ldots , i_d}\mathcal {T}_{i_1,\ldots , i_d}(x_1)_{i_1}\ldots (x_d)_{i_d}\).
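
For concreteness, the multilinear form \(\mathcal {T}(x_1, x_2, \ldots , x_d)\) above can be evaluated by contracting \(\mathcal {T}\) with one vector per mode. The following minimal NumPy sketch illustrates this for a third-order tensor; the tensor and the unit-norm vectors are random placeholders used purely for illustration.

```python
import numpy as np

def multilinear_form(T, vectors):
    """Evaluate T(x_1, ..., x_d) = sum_{i_1,...,i_d} T_{i_1...i_d} (x_1)_{i_1} ... (x_d)_{i_d}
    by contracting one mode of T with one vector at a time."""
    out = T
    for x in vectors:
        out = np.tensordot(out, x, axes=([0], [0]))   # contract the first remaining mode with x
    return float(out)

# toy third-order example with unit-norm vectors (so each x_i is feasible for (1.4))
rng = np.random.default_rng(0)
T = rng.standard_normal((4, 5, 6))
xs = [rng.standard_normal(n) for n in T.shape]
xs = [x / np.linalg.norm(x) for x in xs]
print(multilinear_form(T, xs))
```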

The convergence and iteration complexity of various nonconvex and nonsmooth optimization problems have recently attracted considerable research attention; see, e.g., [3, 6, 7, 10, 11, 19, 20, 27, 28, 41, 46]. In this paper, we study several solution methods that use only first-order information of the objective function, including a generalized conditional gradient method, variants of the alternating direction method of multipliers, and a proximal block coordinate descent method, for solving (1.1) and (1.2). Specifically, we apply a generalized conditional gradient (GCG) method to solve (1.2). We prove that GCG finds an \(\epsilon \)-stationary solution for (1.2) in \(O(\epsilon ^{-q})\) iterations under certain mild conditions, where q is a parameter in the Hölder condition that characterizes the degree of smoothness of f. In other words, the convergence rate of the algorithm depends on the degree of “smoothness” of the objective function. It should be noted that a similar iteration bound depending on the parameter q was reported for convex problems in [13]; for general nonconvex problems, [14] analyzed the convergence of the method but did not provide an iteration complexity result. Furthermore, we show that if f is concave, then GCG finds an \(\epsilon \)-stationary solution for (1.2) in \(O(1/\epsilon )\) iterations. For the affinely constrained problem (1.1), we propose two algorithms (called proximal ADMM-g and proximal ADMM-m in this paper), both of which can be viewed as variants of the alternating direction method of multipliers (ADMM). Recently, there has been an emerging research interest in the ADMM for nonconvex problems (see, e.g., [2, 32, 33, 38, 52, 53, 56]). However, the results in [38, 52, 53, 56] only show that the iterates produced by the ADMM converge to a stationary solution, without providing an iteration complexity analysis. Moreover, the objective function is required to satisfy the so-called Kurdyka–Łojasiewicz (KL) property [8, 9, 36, 42] to enable those convergence results. In [33], Hong et al. analyzed the convergence of the ADMM for solving nonconvex consensus and sharing problems, and they also analyzed the iteration complexity of the ADMM for the consensus problem. However, they require the nonconvex part of the objective function to be smooth and the nonsmooth part to be convex. In contrast, \(r_i\) in our model (1.1) can be nonconvex and nonsmooth at the same time. Moreover, we allow general constraints \(x_i\in \mathcal {X}_i,i=1,\ldots ,N-1\), while the consensus problem in [33] allows such a constraint for only one block variable. A very recent work of Hong [32] discussed the iteration complexity of an augmented Lagrangian method for finding an \(\epsilon \)-stationary solution for the following problem:

$$\begin{aligned} \min \ f(x), \ s.t. \ Ax= b, x \in \mathbb {R}^n, \end{aligned}$$
(1.5)

under the assumption that f is differentiable. We will compare our results with those in [32] in more detail in Sect. 3.

Before proceeding, let us first summarize:

Our contributions

  1. (i)

    We provide definitions of \(\epsilon \)-stationary solution for (1.1) and (1.2) using the variational inequalities. For (1.1), our definition of the \(\epsilon \)-stationary solution allows each \(r_i\) to be nonsmooth and nonconvex.

  2. (ii)

    We study a generalized conditional gradient method with a suitable line search rule for solving (1.2). We assume that the gradient of f satisfies a Hölder condition, and we analyze the method's iteration complexity for obtaining an \(\epsilon \)-stationary solution for (1.2). After we released the first version of this paper, we noticed several recent works that study the iteration complexity of the conditional gradient method for nonconvex problems. However, our results are different from these. For example, the convergence rate given in [57] is worse than ours, and [43, 44] only consider smooth nonconvex problems with Lipschitz continuous gradient, whereas our results cover nonsmooth models.

  3. (iii)

    We study two ADMM variants (proximal ADMM-g and proximal ADMM-m) for solving (1.1), and analyze their iteration complexities for obtaining an \(\epsilon \)-stationary solution for the nonconvex problem (1.1). In addition, the setup and the assumptions of our model are different from those of other recent works. For instance, [38] considers a two-block nonconvex problem with an identity coefficient matrix for one block variable in the linear constraint, and requires the coerciveness of the objective or the boundedness of the domain; [53] assumes that the objective function is coercive over the feasible set and that the nonsmooth part of the objective is restricted prox-regular or piecewise linear. In contrast, our algorithms only assume that the gradient of the smooth part of the objective function is Lipschitz continuous and that the nonsmooth part does not involve the last block variable, which is weaker than the assumptions on the objective functions in [38, 53].

  4. (iv)

    As an extension, we also show how to use proximal ADMM-g and proximal ADMM-m to find an \(\epsilon \)-stationary solution for (1.1) without assuming any condition on \(A_N\).

  5. (v)

    When the affine constraints are absent in model (1.1), as a by-product, we demonstrate that the iteration complexity of the proximal block coordinate descent (BCD) method with cyclic update order can be obtained directly from that of proximal ADMM-g and proximal ADMM-m. Although [11] gives an iteration complexity result for nonconvex BCD, it requires the KL property, and the complexity depends on a parameter of the KL condition, which is typically unknown.

Notation \(\Vert x\Vert _2\) denotes the Euclidean norm of a vector x, and \(\Vert x\Vert _{H}^2\) denotes \( x^{\top }H x\) for some positive definite matrix H. For a set S and a scalar \(p>1\), we denote \(\mathrm {diam}_p(S) := \max _{x,y\, \in S}\Vert x -y \Vert _p\), where \(\Vert x \Vert _p = (\sum _{i=1}^{n}|x_i|^p)^{1/p}\). When not otherwise specified, we write \(\Vert x\Vert =\Vert x\Vert _2\) and \(\mathrm {diam}(S)=\mathrm {diam}_2(S)\) for short. We use \(\mathop \mathrm{dist}(x,S)\) to denote the Euclidean distance from a vector x to a set S. Given a matrix A, its spectral norm and smallest singular value are denoted by \(\Vert A\Vert _2\) and \(\sigma _{\min }(A)\), respectively. We use \(\lceil a\rceil \) to denote the ceiling of a.

Organization The rest of this paper is organized as follows. In Sect. 2 we introduce the notion of \(\epsilon \)-stationary solution for (1.2) and apply a generalized conditional gradient method to solve (1.2) and analyze its iteration complexity for obtaining an \(\epsilon \)-stationary solution for (1.2). In Sect. 3 we give two definitions of \(\epsilon \)-stationarity for (1.1) under different settings and propose two ADMM variants that solve (1.1) and analyze their iteration complexities to reach an \(\epsilon \)-stationary solution for (1.1). In Sect. 4 we provide some extensions of the results in Sect. 3. In particular, we first show how to remove some of the conditions that we assume in Sect. 3, and then we apply a proximal BCD method to solve (1.1) without affine constraints and provide an iteration complexity analysis. In Sect. 5, we present numerical results to illustrate the practical efficiency of the proposed algorithms.

2 A generalized conditional gradient method

In this section, we study a GCG method for solving (1.2) and analyze its iteration complexity. The conditional gradient (CG) method, also known as the Frank-Wolfe method, was originally proposed in [24], and it has regained a lot of popularity recently due to its capability of solving large-scale problems (see, e.g., [4, 5, 25, 30, 34, 37, 47]). However, these works focus on solving convex problems. Bredies et al. [14] proved the convergence of a generalized conditional gradient method for solving nonconvex problems in Hilbert space. In this section, by introducing a suitable line search rule, we provide an iteration complexity analysis for this algorithm.

We make the following assumption in this section regarding (1.2).

Assumption 2.1

In (1.2), r(x) is convex and nonsmooth, and the constraint set S is convex and compact. Moreover, f is differentiable and there exist some \(p>1\) and \(\rho >0\) such that

$$\begin{aligned} f(y) \le f(x) + \nabla f(x)^{\top }(y-x) + \frac{\rho }{2}\Vert y - x\Vert ^p_p, \quad \forall x,y\in S. \end{aligned}$$
(2.1)

The above inequality (2.1) is also known as the Hölder condition and was used in other works on first-order algorithms (e.g., [21]). It can be shown that (2.1) holds for a variety of functions. For instance, (2.1) holds for any p when f is concave, and is valid for \(p=2\) when \(\nabla f\) is Lipschitz continuous.
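
To see the latter two claims: if f is concave, then \(f(y) \le f(x) + \nabla f(x)^{\top }(y-x)\) for all \(x,y\in S\), so (2.1) holds for any \(p>1\) and any \(\rho >0\); and if \(\nabla f\) is Lipschitz continuous with constant L, then the descent lemma

$$\begin{aligned} f(y) \le f(x) + \nabla f(x)^{\top }(y-x) + \frac{L}{2}\Vert y - x\Vert _2^2, \quad \forall x,y\in S, \end{aligned}$$

shows that (2.1) holds with \(p=2\) and \(\rho =L\).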

2.1 An \(\epsilon \)-stationary solution for problem (1.2)

For the smooth unconstrained problem \( \min _x f(x)\), it is natural to define an \(\epsilon \)-stationary solution using the criterion \(\Vert \nabla f(x)\Vert _2\le \epsilon .\) Nesterov [48] and Cartis et al. [17] showed that gradient descent type methods with properly chosen step sizes need \(O({1}/{\epsilon ^2})\) iterations to find such a solution. Moreover, Cartis et al. [16] constructed an example showing that the \(O({1}/{\epsilon ^2})\) iteration complexity is tight for steepest descent type algorithms. However, the case of constrained nonsmooth nonconvex optimization is subtler. There exist some works on how to define \(\epsilon \)-optimality conditions for local minimizers of various constrained nonconvex problems [18, 22, 28, 32, 49]. Cartis et al. [18] proposed an approximate measure for smooth problems with convex set constraints. [49] discussed general nonsmooth nonconvex problems in Banach spaces using the tool of the limiting Fréchet \(\epsilon \)-subdifferential. Ngai et al. [22] showed that under certain conditions \(\epsilon \)-KKT solutions can converge to a stationary solution as \(\epsilon \rightarrow 0\); here an \(\epsilon \)-KKT solution is defined by relaxing the complementary slackness and equilibrium equations of the KKT conditions. Ghadimi et al. [28] considered the following notion of an \(\epsilon \)-stationary solution for (1.2):

$$\begin{aligned} P_{S}(x,\gamma ):=\frac{1}{\gamma } (x - x^+), \quad \text{ where } x^+ = \arg \min _{y \in S}{\nabla f(x)^{\top }y + \frac{1}{\gamma }V(y,x) + r(y)},\nonumber \\ \end{aligned}$$
(2.2)

where \(\gamma >0\) and V is a prox-function. They proposed a projected gradient algorithm to solve (1.2) and proved that it takes no more than \(O({1}/{\epsilon ^2})\) iterations to find an x satisfying

$$\begin{aligned} \Vert P_{S}(x,\gamma ) \Vert _2^2 \le \epsilon . \end{aligned}$$
(2.3)

Our definition of an \(\epsilon \)-stationary solution for (1.2) is as follows.

Definition 2.2

We call x an \(\epsilon \)-stationary solution (\(\epsilon \ge 0\)) for (1.2) if the following holds:

$$\begin{aligned} \psi _S(x):=\inf _{y\in S} \{ \nabla f(x)^{\top }(y-x) + r(y) - r(x) \} \ge - \epsilon . \end{aligned}$$
(2.4)

If \(\epsilon =0\), then x is called a stationary solution for (1.2).

Observe that if \(r(\cdot )\) is continuous then any cluster point of \(\epsilon \)-stationary solutions defined above is a stationary solution for (1.2) as \(\epsilon \rightarrow 0\). Moreover, the stationarity condition is weaker than the usual KKT optimality condition. To see this, we first rewrite (1.2) as the following equivalent unconstrained problem

$$\begin{aligned} \min _x f(x) + r(x) + \iota _S(x) \end{aligned}$$

where \(\iota _S(x)\) is the indicator function of S. Suppose that x is any local minimizer of this problem and thus also a local minimizer of (1.2). Since f is differentiable and r and \(\iota _S\) are convex, Fermat’s rule [50] yields

$$\begin{aligned} 0 \in \partial \left( f(x) + r(x) + \iota _S(x) \right) = \nabla f(x) + \partial r(x) + \partial \iota _S(x), \end{aligned}$$
(2.5)

which further implies that there exists some \(z \in \partial r (x)\) such that

$$\begin{aligned} (\nabla f(x) + z)^{\top }(y-x) \ge 0, \quad \forall y\in S. \end{aligned}$$

By the convexity of \(r(\cdot )\), this is equivalent to

$$\begin{aligned} \nabla f(x)^{\top }(y-x) + r(y) - r(x) \ge 0,\; \forall y\in S. \end{aligned}$$
(2.6)

Therefore, (2.6) is also a necessary condition for a local minimum of (1.2).

Furthermore, we claim that \(\psi _S(x) \ge -\epsilon \) implies \(\Vert P_{S}(x,\gamma )\Vert _2^2 \le \epsilon /\gamma \) with the prox-function \(V(y,x)=\Vert y-x\Vert _2^2/2\). In fact, (2.2) guarantees that

$$\begin{aligned} \left( \nabla f(x)+\frac{1}{\gamma }(x^+ - x) +z \right) ^{\top }(y - x^+) \ge 0, \quad \forall \; y \in S, \end{aligned}$$
(2.7)

for some \(z \in \partial r(x^+)\). By choosing \(y = x\) in (2.7) one obtains

$$\begin{aligned} \nabla f(x)^{\top }(x - x^+) + r(x) - r(x^+) \ge \left( \nabla f(x) +z \right) ^{\top }(x - x^+) \ge \frac{1}{\gamma }\Vert x^+ - x\Vert _2^2.\nonumber \\ \end{aligned}$$
(2.8)

Therefore, if \(\psi _S(x) \ge -\epsilon \), then \(\Vert P_{S}(x,\gamma ) \Vert _2^2 \le \frac{\epsilon }{\gamma } \) holds.
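
To make these two stationarity measures concrete, the following minimal sketch evaluates \(\psi _S(x)\) from Definition 2.2 and \(\Vert P_{S}(x,\gamma )\Vert _2^2\) from (2.2) at a feasible point of a toy instance; the indefinite quadratic f, the box constraint S, and the choice \(r \equiv 0\) are illustrative assumptions made only so that both quantities have closed forms.

```python
import numpy as np

# Toy instance of (1.2): f(x) = 0.5*x^T Q x + c^T x with Q indefinite, r = 0,
# and S = [lo, hi]^n a box (illustrative assumptions).
rng = np.random.default_rng(1)
n = 5
Q = rng.standard_normal((n, n)); Q = 0.5 * (Q + Q.T)   # symmetric, possibly indefinite
c = rng.standard_normal(n)
lo, hi = -1.0, 1.0

def grad_f(x):
    return Q @ x + c

def psi_S(x):
    """psi_S(x) = min_{y in S} grad f(x)^T (y - x) when r = 0; over a box the
    minimum is attained coordinate-wise at lo or hi depending on the gradient sign."""
    g = grad_f(x)
    y = np.where(g > 0, lo, hi)
    return float(g @ (y - x))

def grad_mapping(x, gamma):
    """P_S(x, gamma) from (2.2) with V(y, x) = ||y - x||^2/2 and r = 0:
    the projected-gradient residual (x - proj_S(x - gamma*grad f(x))) / gamma."""
    x_plus = np.clip(x - gamma * grad_f(x), lo, hi)
    return (x - x_plus) / gamma

x = np.clip(rng.standard_normal(n), lo, hi)
gamma = 0.1
print("psi_S(x)           =", psi_S(x))
print("||P_S(x,gamma)||^2 =", float(np.linalg.norm(grad_mapping(x, gamma)) ** 2))
```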

2.2 The algorithm

For a given point x, we define the following approximation of the objective function of (1.2):

$$\begin{aligned} \ell (y;x) := f(x) + \nabla f(x)^{\top }(y-x) + r(y), \end{aligned}$$
(2.9)

which is obtained by linearizing the smooth part (function f) of \(\Phi \) in (1.2). Our GCG method for solving (1.2) is described in Algorithm 1, where \(\rho \) and p are from Assumption 2.1.

[Algorithm 1: Generalized conditional gradient (GCG) method for solving (1.2)]

In each iteration of Algorithm 1, we first perform an exact minimization of the approximation \(\ell (y;x)\) of the objective function to form a direction \(d_k\). Then the step size \(\alpha _k\) is obtained by an exact line search along the direction \(d_k\) (which differentiates GCG from a standard CG method), in which f is approximated by the p-powered upper bound from (2.1) and the nonsmooth part is replaced by its convex upper bound. Finally, the iterate is updated by moving along the direction \(d_k\) with step size \(\alpha _k\).
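
Since Algorithm 1 is displayed only as a figure, the following Python sketch reconstructs its three steps on a toy instance: the linearization subproblem of Step 1, an exact line search on the upper bound \(\alpha \triangle + \frac{\rho }{2}\alpha ^{p}\Vert d_k\Vert _p^p\) used in the proof of Theorem 2.7 (here with \(p=2\)), and the update. The indefinite quadratic f, the \(\ell _1\) term r, and the box S are illustrative assumptions.

```python
import numpy as np

# Toy instance of (1.2): f(x) = 0.5*x^T Q x + c^T x with Q indefinite, so f is nonconvex
# but grad f is Lipschitz and (2.1) holds with p = 2, rho = ||Q||_2;
# r(x) = lam*||x||_1 (convex, nonsmooth); S = [-1, 1]^n.
rng = np.random.default_rng(0)
n = 20
Q = rng.standard_normal((n, n)); Q = 0.5 * (Q + Q.T)
c = rng.standard_normal(n)
lam = 0.1
rho = np.linalg.norm(Q, 2)          # Hoelder constant for p = 2

f = lambda x: 0.5 * x @ Q @ x + c @ x
grad_f = lambda x: Q @ x + c
r = lambda x: lam * np.abs(x).sum()

def step1(g):
    """argmin_{y in [-1,1]^n} g^T y + lam*||y||_1 (separable, closed form for this instance)."""
    y = np.zeros_like(g)
    y[g > lam] = -1.0
    y[g < -lam] = 1.0
    return y

x = np.zeros(n)
for k in range(500):
    g = grad_f(x)
    y = step1(g)                                   # Step 1: linearization subproblem
    d = y - x
    delta = g @ d + r(y) - r(x)                    # = -(improvement Delta ell), cf. Lemma 2.6
    if delta >= -1e-8:                             # Delta ell small: approximately stationary
        break
    # Step 2: exact line search on alpha*delta + (rho/2)*alpha^2*||d||^2 over [0, 1]
    alpha = min(1.0, -delta / (rho * (d @ d)))
    x = x + alpha * d                              # Step 3: update the iterate
print("Phi(x) =", f(x) + r(x), "  Delta ell =", -delta)
```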

Note that here we assume that solving the subproblem in Step 1 of Algorithm 1 is relatively easy; that is, we make the following assumption.

Assumption 2.3

All subproblems in Step 1 of Algorithm 1 can be solved relatively easily.

Remark 2.4

Assumption 2.3 is quite common for conditional gradient methods. For a list of functions r and sets S for which Assumption 2.3 is satisfied, see [34].

Remark 2.5

It is easy to see that the sequence \(\{ \Phi (x^k) \}\) generated by GCG is monotonically nonincreasing [14], which implies that any cluster point of \(\{ x^k \}\) cannot be a strict local maximizer.

2.3 An iteration complexity analysis

Before we proceed to the main result on iteration complexity of GCG, we need the following lemma that gives a sufficient condition for an \(\epsilon \)-stationary solution for (1.2). This lemma is inspired by [27], and it indicates that if the progress gained by minimizing (2.9) is small, then z must already be close to a stationary solution for (1.2).

Lemma 2.6

Define \(z_{\ell } := \mathop \mathrm{argmin}_{x\in S} \ \ell (x;z)\). The improvement of the linearization at point z is defined as

$$\begin{aligned} \triangle \ell _z := \ell (z;z) - \ell ({z}_{\ell };z)=- \nabla f(z)^{\top }({z}_{\ell }-z) + r(z) - r(z_\ell ). \end{aligned}$$

Given \(\epsilon \ge 0\), for any \(z \in S\), if \(\triangle \ell _z \le \epsilon \), then z is an \(\epsilon \)-stationary solution for (1.2) as defined in Definition 2.2.

Proof

From the definition of \({z}_{\ell }\), we have

$$\begin{aligned} \ell (y;z) - \ell ({z}_{\ell };z) = \nabla f(z)^{\top }(y-{z}_{\ell }) + r(y) - r(z_\ell ) \ge 0, \forall y \in S, \end{aligned}$$

which implies that

$$\begin{aligned}&\nabla f(z)^{\top }(y-z) + r(y) - r(z)\\&\quad = \nabla f(z)^{\top }(y-{z}_{\ell }) + r(y) - r(z_\ell ) + \nabla f(z)^{\top }({z}_{\ell }-z) + r(z_\ell ) - r(z) \\&\quad \ge \nabla f(z)^{\top }({z}_{\ell }-z)+ r(z_\ell ) - r(z), \forall y \in S. \end{aligned}$$

It then follows immediately that if \(\triangle \ell _z \le \epsilon \), then \(\nabla f(z)^{\top }(y-z)+ r(y) - r(z) \ge - \triangle \ell _z \ge - \epsilon \).\(\square \)

Denoting \(\Phi ^*\) to be the optimal value of (1.2), we are now ready to give the main result of the iteration complexity of GCG (Algorithm 1) for obtaining an \(\epsilon \)-stationary solution for (1.2).

Theorem 2.7

For any \(\epsilon \in (0, \mathrm {diam}^p_p(S) \rho ) \), GCG finds an \(\epsilon \)-stationary solution for (1.2) within \(\left\lceil \frac{2(\Phi (x^{0}) - \Phi ^*)(\mathrm {diam}^p_p(S)\rho )^{q-1}}{\epsilon ^q} \right\rceil \) iterations, where \(\frac{1}{p} + \frac{1}{q} =1\).

Proof

For ease of presentation, we denote \(D:= \mathrm {diam}_p(S)\) and \(\triangle \ell ^k := \triangle \ell _{x^k}\). By Assumption 2.1, using the fact that \(\frac{\epsilon }{D^p\rho } < 1\), and by the definition of \(\alpha _k\) in Algorithm 1, we have

$$\begin{aligned}&({\epsilon }/{(D^p\rho )})^{\frac{1}{p-1}}\triangle \ell ^k - \frac{1}{2\rho ^{1/(p-1)}} ({\epsilon }/{D})^{\frac{p}{p-1}} \nonumber \\&\quad \le -({\epsilon }/{(D^p\rho )})^{\frac{1}{p-1}} ( \nabla f(x^{k})^{\top }(y^{k}-x^{k})+r(y^{k}) - r(x^{k}) ) \nonumber \\&\qquad - \frac{\rho }{2} ( {\epsilon }/{(D^p\rho )})^{\frac{p}{p-1}} \Vert y^{k} - x^{k}\Vert ^p_p \nonumber \\&\quad \le -\alpha _k \left( \nabla f(x^{k})^{\top }(y^{k}-x^{k})+r(y^{k}) - r(x^{k}) \right) - \frac{\rho \alpha _k^p}{2}\Vert y^{k} - x^{k}\Vert ^p_p \nonumber \\&\quad \le -\nabla f(x^{k})^{\top }(x^{k+1}-x^{k}) + r(x^{k})-r(x^{k+1}) - \frac{\rho }{2}\Vert x^{k+1} - x^{k}\Vert ^p_p \nonumber \\&\quad \le f(x^{k}) - f(x^{k+1}) + r(x^{k})-r(x^{k+1}) = \Phi (x^{k}) - \Phi (x^{k+1}), \end{aligned}$$
(2.10)

where the third inequality is due to the convexity of function r and the fact that \(x^{k+1} - x^{k} = \alpha _k(y^{k}-x^{k})\), and the last inequality is due to (2.1). Furthermore, (2.10) immediately yields

$$\begin{aligned} \triangle \ell ^k \le ( {\epsilon }/{(D^p\rho )})^{-\frac{1}{p-1}} (\Phi (x^{k}) - \Phi (x^{k+1}) ) + \frac{\epsilon }{2}. \end{aligned}$$
(2.11)

For any integer \(K>0\), summing (2.11) over \(k=0,1,\ldots ,K-1\) yields

$$\begin{aligned} K \min _{k\in \{0,1,\ldots ,K-1\}}\triangle \ell ^k&\le \sum _{k=0}^{K-1}\triangle \ell ^k \le ( {\epsilon }/{(D^p\rho )})^{-\frac{1}{p-1}} \left( \Phi (x^{0}) - \Phi (x^{K})\right) + \frac{\epsilon }{2}K \\&\le ( {\epsilon }/{(D^p\rho )})^{-\frac{1}{p-1}} (\Phi (x^{0}) - \Phi ^* )+ \frac{\epsilon }{2}K, \end{aligned}$$

where \(\Phi ^*\) is the optimal value of (1.2). It is easy to see that by setting \(K = \left\lceil \frac{2(\Phi (x^{0}) - \Phi ^*)(D^p\rho )^{q-1}}{\epsilon ^q} \right\rceil \), the above inequality implies \(\triangle \ell _{x^{k^*}} \le \epsilon \), where \(k^*\in \mathop \mathrm{argmin}_{k\in \{0,\ldots ,K-1\}}\triangle \ell ^k\). According to Lemma 2.6, \(x^{k^*}\) is an \(\epsilon \)-stationary solution for (1.2) as defined in Definition 2.2.\(\square \)

Finally, if f is concave, then the iteration complexity can be improved to \(O(1/\epsilon )\).

Proposition 2.8

Suppose that f is a concave function. If we set \(\alpha _k = 1\) for all k in GCG (Algorithm 1), then it returns an \(\epsilon \)-stationary solution for (1.2) within \(\left\lceil \frac{\Phi (x^{0}) - \Phi ^*}{\epsilon } \right\rceil \) iterations.

Proof

By setting \(\alpha _k = 1\) in Algorithm 1 we have \(x^{k+1}=y^k\) for all k. Since f is concave, it holds that

$$\begin{aligned} \triangle \ell ^k = -\nabla f(x^{k})^{\top }(x^{k+1}-x^{k})+ r(x^{k})-r(x^{k+1})\le \Phi (x^{k}) - \Phi (x^{k+1}). \end{aligned}$$

Summing this inequality over \(k=0,1,\ldots ,K-1\) yields \(K \min _{k\in \{0,1,\ldots ,K-1\}}\triangle \ell ^k \le \Phi (x^{0}) - \Phi ^*,\) which leads to the desired result immediately.\(\square \)

3 Variants of ADMM for solving nonconvex problems with affine constraints

In this section, we study two variants of the ADMM (Alternating Direction Method of Multipliers) for solving the general problem (1.1), and analyze their iteration complexities for obtaining an \(\epsilon \)-stationary solution (to be defined later) under certain conditions. Throughout this section, the following two assumptions regarding problem (1.1) are assumed.

Assumption 3.1

The gradient of the function f is Lipschitz continuous with Lipschitz constant \(L>0\), i.e., for any \((x_1^1,\ldots ,x_N^1)\) and \((x_1^2,\ldots ,x_N^2)\in \mathcal {X}_1\times \cdots \times \mathcal {X}_{N-1}\times \mathbb {R}^{n_N}\), it holds that

$$\begin{aligned}&\left\| \nabla f(x_1^1,x_2^1,\ldots , x_N^1) - \nabla f(x_1^2,x_2^2,\ldots , x_N^2) \right\| \nonumber \\&\quad \le L\left\| \left( x_1^1 - x_1^2, x_2^1 - x_2^2, \ldots , x_N^1 - x_N^2\right) \right\| , \end{aligned}$$
(3.1)

which implies that for any \((x_1,\ldots ,x_{N-1})\in \mathcal {X}_1\times \cdots \times \mathcal {X}_{N-1}\) and \(x_N\), \(\hat{x}_N\in \mathbb {R}^{n_N}\), we have

$$\begin{aligned} f(x_1,\ldots ,x_{N-1}, x_N)\le & {} f(x_1,\ldots ,x_{N-1}, \hat{x}_N) + (x_N - \hat{x}_N)^{\top }\nabla _N f(x_1,\ldots ,x_{N-1}, \hat{x}_N) \nonumber \\&+\,\frac{L}{2}\Vert x_N - \hat{x}_N\Vert ^2. \end{aligned}$$
(3.2)

Assumption 3.2

f and \(r_i, i=1,\ldots ,N-1\) are all lower bounded over the appropriate domains defined via the sets \(\mathcal {X}_1, \mathcal {X}_2, \ldots , \mathcal {X}_{N-1}, \mathbb {R}^{n_N}\), and we denote

$$\begin{aligned} f^* = \inf \limits _{x_i\in \mathcal {X}_i,i=1,\ldots ,N-1; x_N\in \mathbb {R}^{n_N}} \ \{f(x_1,x_2,\ldots ,x_N)\} \end{aligned}$$

and \(r_i^* = \inf \limits _{x_i\in \mathcal {X}_i} \ \{r_i(x_i)\}\) for \(i=1,2,\ldots ,N-1\).

3.1 Preliminaries

To characterize the optimality conditions for (1.1) when \(r_i\) is nonsmooth and nonconvex, we need to recall the notion of the generalized gradient (see, e.g., [50]).

Definition 3.3

Let \(h:\mathbb {R}^{n} \rightarrow \mathbb {R}\cup \{+\infty \}\) be a proper lower semi-continuous function. Suppose \(h(\bar{x})\) is finite for a given \(\bar{x}\). For \(v \in \mathbb {R}^n\), we say that

  1. (i).

    v is a regular subgradient (also called the Fréchet subgradient) of h at \(\bar{x}\), written \(v \in \hat{\partial }h(\bar{x})\), if

    $$\begin{aligned} \liminf _{x \rightarrow \bar{x},\, x \ne \bar{x}}\frac{h(x) - h(\bar{x}) - \langle v, x - \bar{x} \rangle }{\Vert x - \bar{x}\Vert } \ge 0; \end{aligned}$$
  2. (ii).

    v is a general subgradient of h at \(\bar{x}\), written \(v \in \partial h(\bar{x})\), if there exist sequences \(\{x^k\}\) and \(\{v^k\}\) such that \(x^{k} \rightarrow \bar{x}\) with \(h(x^{k}) \rightarrow h(\bar{x})\), and \(v^k \in \hat{\partial }h(x^k)\) with \(v^k \rightarrow v\) as \(k \rightarrow \infty \).

The following proposition lists some well-known facts about lower semi-continuous functions.

Proposition 3.4

Let \(h:\mathbb {R}^{n} \rightarrow \mathbb {R}\cup \{+\infty \}\) and \(g:\mathbb {R}^{n} \rightarrow \mathbb {R}\cup \{+\infty \}\) be proper lower semi-continuous functions. Then it holds that:

  1. (i)

    (Theorem 10.1 in [50]) Fermat’s rule remains true: if \(\bar{x}\) is a local minimum of h, then \(0 \in \partial h(\bar{x})\).

  2. (ii)

    If \(h(\cdot )\) is continuously differentiable at x, then \(\partial (h + g) (x) = \nabla h(x) + \partial g(x)\).

  3. (iii)

    (Exercise 10.10 in [50]) If h is locally Lipschitz continuous at x, then \(\partial (h + g) (x) \subset \partial h(x) + \partial g(x)\).

  4. (iv)

    Suppose h(x) is locally Lipschitz continuous, X is a closed and convex set, and \(\bar{x}\) is a local minimum of h on X. Then there exists \(v \in \partial h(\bar{x})\) such that \((x - \bar{x})^{\top } v \ge 0,\forall x\in X\).

In our analysis, we frequently use the following identity, which holds for any vectors \(a, b, c, d\):

$$\begin{aligned} (a-b)^\top (c-d) = \frac{1}{2}\left( \Vert a-d\Vert _2^2-\Vert a-c\Vert _2^2+\Vert b-c\Vert _2^2-\Vert b-d\Vert _2^2\right) . \end{aligned}$$
(3.3)
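
Indeed, expanding the right-hand side of (3.3) gives

$$\begin{aligned} \Vert a-d\Vert _2^2-\Vert a-c\Vert _2^2+\Vert b-c\Vert _2^2-\Vert b-d\Vert _2^2 = 2a^\top c - 2a^\top d - 2b^\top c + 2b^\top d = 2(a-b)^\top (c-d), \end{aligned}$$

since the squared-norm terms \(\Vert a\Vert _2^2, \Vert b\Vert _2^2, \Vert c\Vert _2^2, \Vert d\Vert _2^2\) cancel in pairs.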

3.2 An \(\epsilon \)-stationary solution for problem (1.1)

We now introduce notions of \(\epsilon \)-stationarity for (1.1) under the following two settings: (i) Setting 1: \(r_i\) is Lipschitz continuous, and \(\mathcal {X}_i\) is a compact set, for \(i=1,\ldots ,N-1\); (ii) Setting 2: \(r_i\) is lower semi-continuous, and \(\mathcal {X}_i = \mathbb {R}^{n_i}\), for \(i=1,\ldots ,N-1\).

Definition 3.5

(\(\epsilon \)-stationary solution for (1.1) in Setting 1) Under the conditions in Setting 1, for \(\epsilon \ge 0\), we call \({\left( x_1^*,\ldots , x_N^* \right) }\) an \(\epsilon \)-stationary solution for (1.1) if there exists a Lagrange multiplier \(\lambda ^*\) such that the following holds for any \(\left( x_1,\ldots ,x_N\right) \in \mathcal {X}_1\times \cdots \times \mathcal {X}_{N-1}\times \mathbb {R}^{n_N}\):

$$\begin{aligned} \left( x_i-{x}^*_i\right) ^\top \left[ {g}^*_i + \nabla _i f(x_1^*,\ldots , x^*_N) - A_i^\top {\lambda }^*\right]\ge & {} -\epsilon , \quad i=1,\ldots ,N-1, \end{aligned}$$
(3.4)
$$\begin{aligned} \left\| \nabla _N f(x_1^*,\ldots , x_{N-1}^*, x_N^*) - A_{N}^\top \lambda ^* \right\|\le & {} \epsilon , \end{aligned}$$
(3.5)
$$\begin{aligned} \left\| \sum _{i=1}^{N} A_i x_i^* - b \right\|\le & {} \epsilon , \end{aligned}$$
(3.6)

where \({g}^*_i\) is a general subgradient of \(r_i\) at point \(x_i^*\). If \(\epsilon =0\), we call \({ \left( x_1^*,\ldots , x_N^* \right) }\) a stationary solution for (1.1).

If \(\mathcal {X}_i = \mathbb {R}^{n_i}\) for \(i=1,\ldots , N-1\), then the VI-style conditions in Definition 3.5 reduce to the following.

Definition 3.6

Under the conditions in Setting 2, for \(\epsilon \ge 0\), we call \({ \left( x_1^*,\ldots , x_N^*\right) }\) an \(\epsilon \)-stationary solution for (1.1) if there exists a Lagrange multiplier \(\lambda ^*\) such that (3.5), (3.6) and the following hold:

$$\begin{aligned} \mathop \mathrm{dist}\left( -\nabla _i f(x_1^*,\ldots , x^*_N) + A_i^\top \lambda ^*, {\partial } r_i(x_i^*)\right) \le \epsilon , \quad i=1,\ldots ,N-1, \end{aligned}$$
(3.7)

where \({\partial } r_i(x_i^*)\) denotes the set of general subgradients of \(r_i\) at \(x_i^*\), \(i=1,2,\ldots ,N-1\). If \(\epsilon =0\), we call \({ \left( x_1^*,\ldots , x_N^*\right) }\) a stationary solution for (1.1).

The two settings of problem (1.1) considered in this section and their corresponding definitions of an \(\epsilon \)-stationary solution are summarized in Table 1.

Table 1 \(\epsilon \)-stationary solutions of (1.1) in the two settings: Setting 1 (\(r_i\) Lipschitz continuous, \(\mathcal {X}_i\) convex and compact) uses Definition 3.5, and Setting 2 (\(r_i\) lower semi-continuous, \(\mathcal {X}_i = \mathbb {R}^{n_i}\)) uses Definition 3.6

A very recent work of Hong [32] proposes a definition of an \(\epsilon \)-stationary solution for problem (1.5), and analyzes the iteration complexity of a proximal augmented Lagrangian method for obtaining such a solution. Specifically, \((x^*,\lambda ^*)\) is called an \(\epsilon \)-stationary solution for (1.5) in [32] if \(Q(x^*,\lambda ^*) \le \epsilon \), where

$$\begin{aligned} Q(x,\lambda ) := \Vert \nabla _x\mathcal {L}_\beta (x, \lambda ) \Vert ^2 + \Vert Ax - b\Vert ^2, \end{aligned}$$

and \(\mathcal {L}_\beta (x,\lambda ) := f(x) - \lambda ^\top \left( Ax - b\right) + \frac{\beta }{2}\left\| Ax - b\right\| ^2\) is the augmented Lagrangian function of (1.5). Note that [32] assumes that f is differentiable and has bounded gradient in (1.5). It is easy to show that an \(\epsilon \)-stationary solution in [32] is equivalent to an \(O(\sqrt{\epsilon })\)-stationary solution for (1.1) according to Definition 3.6 with \(r_i=0\) and f being differentiable. Note that there is no set constraint in (1.5), and so the notion of the \(\epsilon \)-stationarity in [32] is not applicable in the case of Definition 3.5.

Proposition 3.7

Consider the \(\epsilon \)-stationary solution in Definition 3.6 applied to problem (1.5), i.e., with a single block variable and \(r_1(x)\equiv 0\). If \(x^*\) is a \(\gamma _1\sqrt{{\epsilon }}\)-stationary solution in the sense of Definition 3.6 with Lagrange multiplier \(\lambda ^*\), where \(\gamma _1 = 1/\sqrt{2\beta ^2\Vert A\Vert _2^2+3}\), then \(Q(x^*,\lambda ^*) \le \epsilon \). Conversely, if \(Q(x^*,\lambda ^*) \le \epsilon \), then \(x^*\) is a \(\gamma _2\sqrt{{\epsilon }}\)-stationary solution in the sense of Definition 3.6 with Lagrange multiplier \(\lambda ^*\), where \(\gamma _2 = \sqrt{2 (1 + \beta ^2 \Vert A\Vert _2^2)}\).

Proof

Suppose \(x^*\) is a \(\gamma _1\sqrt{{\epsilon }}\)-stationary solution as defined in Definition 3.6. We have \(\Vert \nabla f(x^*) - A^{\top }\lambda ^* \Vert \le \gamma _1\sqrt{\epsilon }\) and \(\Vert Ax^* - b\Vert \le \gamma _1\sqrt{\epsilon }\), which implies that

$$\begin{aligned} Q(x^*,\lambda ^*)= & {} \Vert \nabla f(x^*) - A^{\top }\lambda ^* + \beta A^{\top } (Ax^* - b)\Vert ^2 + \Vert Ax^* - b\Vert ^2\\\le & {} 2 \Vert \nabla f(x^*) - A^{\top }\lambda ^* \Vert ^2 + 2 \beta ^2 \Vert A\Vert _2^2\Vert Ax^* - b\Vert ^2+ \Vert Ax^* - b\Vert ^2\\\le & {} 2\gamma _1^2 \epsilon + (2 \beta ^2 \Vert A\Vert _2^2 + 1)\gamma _1^2 \epsilon = \epsilon . \end{aligned}$$

On the other hand, if \(Q(x^*,\lambda ^*)\le \epsilon \), then we have \(\Vert \nabla f(x^*) - A^{\top }\lambda ^* + \beta A^{\top } (Ax^* - b)\Vert ^2 \le \epsilon \) and \(\Vert Ax^* - b\Vert ^2 \le \epsilon \). Therefore,

$$\begin{aligned}&\Vert \nabla f(x^*) - A^{\top }\lambda ^*\Vert ^2 \\&\quad \le 2\Vert \nabla f(x^*) - A^{\top }\lambda ^* + \beta A^{\top } (Ax^* - b)\Vert ^2 + 2\Vert -\beta A^{\top } (Ax^* - b)\Vert ^2\\&\quad \le 2\Vert \nabla f(x^*) - A^{\top }\lambda ^* + \beta A^{\top } (Ax^* - b)\Vert ^2 + 2\beta ^2 \Vert A\Vert _2^2 \Vert Ax^* - b\Vert ^2\\&\quad \le 2 (1 + \beta ^2 \Vert A\Vert _2^2)\,\epsilon . \end{aligned}$$

The desired result then follows immediately.\(\square \)

In the following, we introduce two variants of ADMM, to be called proximal ADMM-g and proximal ADMM-m, that solve (1.1) under some additional assumptions on \(A_N\). In particular, proximal ADMM-g assumes \(A_N=I\), and proximal ADMM-m assumes \(A_N\) to have full row rank.

3.3 Proximal gradient-based ADMM (proximal ADMM-g)

Our proximal ADMM-g solves (1.1) under the condition that \(A_N=I\). In this case, the problem reduces to the so-called sharing problem in the literature, which has the following form

$$\begin{aligned} \begin{array}{ll} \min &{} f(x_1,\ldots ,x_N) + \sum \limits _{i=1}^{N-1} r_i(x_i) \\ s.t. &{} \sum \limits _{i=1}^{N-1} A_i x_i + x_N = b, \ x_i\in \mathcal {X}_i, \ i=1,\ldots ,N-1.\end{array} \end{aligned}$$

For applications of the sharing problem, see [12, 33, 39, 40]. Our proximal ADMM-g for solving (1.1) with \(A_N=I\) is described in Algorithm 2. It can be seen from Algorithm 2 that proximal ADMM-g is based on the framework of the augmented Lagrangian method, and it can be viewed as a variant of the ADMM. The augmented Lagrangian function of (1.1) is defined as

$$\begin{aligned}&\mathcal {L}_\beta (x_1,\ldots ,x_N,\lambda ):=f(x_1,\ldots ,x_N)\\&\qquad +\sum _{i=1}^{N-1}r_i(x_i)-\left\langle \lambda ,\sum _{i=1}^N A_ix_i-b \right\rangle +\frac{\beta }{2}\left\| \sum _{i=1}^NA_ix_i-b\right\| _2^2, \end{aligned}$$

where \(\lambda \) is the Lagrange multiplier associated with the affine constraint, and \(\beta >0\) is a penalty parameter. In each iteration, proximal ADMM-g minimizes the augmented Lagrangian function plus a proximal term with respect to the block variables \(x_1,\ldots ,x_{N-1}\), with the other variables fixed; then a gradient descent step is taken for \(x_N\); and finally the Lagrange multiplier \(\lambda \) is updated. The interested reader is referred to [26] for the gradient-based ADMM and its various stochastic variants for convex optimization.

[Algorithm 2: Proximal gradient-based ADMM (proximal ADMM-g) for solving (1.1) with \(A_N=I\)]
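
Since Algorithm 2 is displayed only as a figure, the following Python sketch reconstructs one run of proximal ADMM-g from the description above and from the update relations used in the analysis below (see (3.11) and (3.14)): an exact Step-1 minimization, a gradient step on \(x_N\) with step size \(\gamma \), and the multiplier update. The two-block quadratic sharing instance and the choices \(r_1\equiv 0\), \(\mathcal {X}_1 = \mathbb {R}^{n_1}\) and \(H_1 = L\,I\) are illustrative assumptions, while \(\beta \) and \(\gamma \) are set according to (3.8) and (3.9).

```python
import numpy as np

# Toy two-block sharing instance of (1.1) with A_2 = I (illustrative assumptions):
#   min 0.5*||C x1 - d||^2 + (mu/2)*||x2||^2   s.t.  A1 x1 + x2 = b,
# with r_1 = 0 and X_1 = R^{n1}, so Step 1 is an exact quadratic minimization.
rng = np.random.default_rng(0)
m, n1 = 6, 4
C = rng.standard_normal((m, n1)); C = C / np.linalg.norm(C, 2); d = rng.standard_normal(m)
A1 = rng.standard_normal((m, n1)); b = rng.standard_normal(m)
mu = 0.5
Lip = max(np.linalg.norm(C, 2) ** 2, mu)                      # Lipschitz constant of grad f
eta = Lip                                                     # H_1 = eta*I (illustrative choice)
beta = 1.01 * max((18*np.sqrt(3) + 6)/13 * Lip, 6*Lip**2/eta)                 # (3.8)
gamma = 13*beta / (6*Lip**2 + beta*Lip + 13*beta**2)          # midpoint of the interval (3.9)

x1, x2, lam = np.zeros(n1), np.zeros(m), np.zeros(m)
for k in range(500):
    # Step 1: minimize L_beta(., x2^k, lam^k) + 0.5*||x1 - x1^k||_{H_1}^2 over x1 (a linear solve)
    lhs = C.T @ C + beta * (A1.T @ A1) + eta * np.eye(n1)
    rhs = C.T @ d + A1.T @ lam - beta * (A1.T @ (x2 - b)) + eta * x1
    x1 = np.linalg.solve(lhs, rhs)
    # Step 2: gradient step on L_beta with respect to x2 (= x_N, A_N = I), step size gamma
    x2 = x2 - gamma * (mu * x2 - lam + beta * (A1 @ x1 + x2 - b))
    # Step 3: Lagrange multiplier update
    lam = lam - beta * (A1 @ x1 + x2 - b)
print("constraint violation:", np.linalg.norm(A1 @ x1 + x2 - b))
```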

Remark 3.8

Note that here we actually assume that all subproblems in Step 1 of Algorithm 2, though possibly nonconvex, can be solved to global optimality. Many important problems arising from statistics satisfy this assumption. In fact, when the coupled objective is absent or can be linearized, after choosing a proper matrix \(H_i\), the solution of the corresponding subproblem is given by the proximal mapping of \(r_i\). As we mentioned earlier, many nonconvex regularization functions such as SCAD, LSP, MCP and capped-\(\ell _1\) admit closed-form proximal mappings. Moreover, in Algorithm 2, we can choose

$$\begin{aligned} \beta > \max \left( \frac{18\sqrt{3}+6}{13}L,\; \max \limits _{i=1,2,\ldots ,N-1}\frac{6 L^2}{\sigma _{\min }(H_i)}\right) , \end{aligned}$$
(3.8)

and

$$\begin{aligned} \gamma \in \left( \frac{13\beta -\sqrt{13\beta ^2 - 12\beta L - 72L^2}}{6L^2 + \beta L + 13\beta ^2}, \frac{13\beta + \sqrt{13\beta ^2 - 12\beta L - 72L^2}}{6L^2 + \beta L + 13\beta ^2} \right) \end{aligned}$$
(3.9)

which guarantee the convergence rate of the algorithm as shown in Lemma 3.9 and Theorem 3.12.
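
As an illustration of such closed-form proximal mappings, the following sketch evaluates the proximal mapping of the capped-\(\ell _1\) penalty \(r(x)=\lambda \min (|x|,\theta )\) coordinate-wise, by comparing the minimizer restricted to \(\{|x|\le \theta \}\) with the one restricted to \(\{|x|\ge \theta \}\); the parameter values and the brute-force check are illustrative.

```python
import numpy as np

def prox_capped_l1(v, lam, theta):
    """Proximal mapping of r(x) = lam*min(|x|, theta), applied coordinate-wise:
       argmin_x 0.5*(x - v)^2 + lam*min(|x|, theta).
    On {|x| <= theta} the minimizer is the soft-thresholded value clipped at theta;
    on {|x| >= theta} the penalty is the constant lam*theta, so the minimizer is v
    pushed onto that region. Return whichever candidate has the smaller objective."""
    v = np.asarray(v, dtype=float)
    sgn = np.where(v >= 0, 1.0, -1.0)
    a = np.abs(v)
    cand1 = sgn * np.minimum(np.maximum(a - lam, 0.0), theta)
    cand2 = sgn * np.maximum(a, theta)
    obj = lambda x: 0.5 * (x - v) ** 2 + lam * np.minimum(np.abs(x), theta)
    return np.where(obj(cand1) <= obj(cand2), cand1, cand2)

# brute-force sanity check on a fine grid (illustrative values)
v = np.array([-2.3, -0.4, 0.0, 0.7, 1.6])
lam, theta = 0.5, 1.0
grid = np.linspace(-4.0, 4.0, 20001)
brute = np.array([grid[np.argmin(0.5*(grid - vi)**2 + lam*np.minimum(np.abs(grid), theta))]
                  for vi in v])
print(prox_capped_l1(v, lam, theta))   # agrees with `brute` up to the grid resolution
print(brute)
```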

Before presenting the main result on the iteration complexity of proximal ADMM-g, we need some lemmas.

Lemma 3.9

Suppose the sequence \(\{(x_1^k,\ldots ,x_N^k, \lambda ^k)\}\) is generated by Algorithm 2. The following inequality holds

$$\begin{aligned} \Vert \lambda ^{k+1} - \lambda ^k \Vert ^2\le & {} 3(\beta - 1/\gamma )^2 \Vert x_N^k - x_N^{k+1} \Vert ^2 \nonumber \\+ & {} 3 ((\beta - 1/\gamma )^2 + L^2)\Vert x_N^{k-1} - x_N^k \Vert ^2 + 3L^2\sum _{i=1}^{N-1} \Vert x_i^{k+1} - x_i^k\Vert ^2.\nonumber \\ \end{aligned}$$
(3.10)

Proof

Note that Steps 2 and 3 of Algorithm 2 yield that

$$\begin{aligned} \lambda ^{k+1} = (\beta - 1/\gamma ) (x_N^k - x_N^{k+1}) + \nabla _N f(x_1^{k+1},\ldots ,x_{N-1}^{k+1},x_N^k). \end{aligned}$$
(3.11)

Combining (3.11) and (3.1) yields that

$$\begin{aligned}&\Vert \lambda ^{k+1} - \lambda ^k \Vert ^2 \\&\quad \le \Vert (\nabla _N f(x_1^{k+1},\ldots ,x_{N-1}^{k+1},x_N^k) - \nabla _N f(x_1^k,\ldots ,x_{N-1}^k,x_N^{k-1}) ) \\&\qquad + (\beta - 1/\gamma ) (x_N^k - x_N^{k+1}) - (\beta - 1/\gamma )(x_N^{k-1} - x_N^k) \Vert ^2 \\&\quad \le 3 \Vert \nabla _N f(x_1^{k+1},\ldots ,x_{N-1}^{k+1},x_N^k) - \nabla _N f(x_1^k,\ldots ,x_{N-1}^k,x_N^{k-1}) \Vert ^2 \\&\qquad + 3(\beta - 1/\gamma )^2 \Vert x_N^k - x_N^{k+1}\Vert ^2 + 3\left[ \beta - \frac{1}{\gamma }\right] ^2\left\| x_N^{k-1} - x_N^k \right\| ^2 \\&\quad \le 3\left[ \beta - \frac{1}{\gamma }\right] ^2\left\| x_N^k - x_N^{k+1} \right\| ^2 + 3\left[ \left( \beta - \frac{1}{\gamma }\right) ^2 + L^2\right] \left\| x_N^{k-1} - x_N^k \right\| ^2 \\&\qquad + 3L^2\sum _{i=1}^{N-1} \left\| x_i^{k+1} - x_i^k\right\| ^2. \end{aligned}$$

\(\square \)

We now define the following function, which will play a crucial role in our analysis:

$$\begin{aligned}&\Psi _G\left( x_1,x_2,\ldots ,x_N,\lambda ,\bar{x}\right) \nonumber \\&\quad = \mathcal {L}_\beta (x_1,x_2,\ldots ,x_N, \lambda ) + \frac{3}{\beta }\left[ \left( \beta - \frac{1}{\gamma }\right) ^2 + L^2\right] \left\| x_N - \bar{x}\right\| ^2. \end{aligned}$$
(3.12)

Lemma 3.10

Suppose the sequence \(\{(x_1^k,\ldots ,x_N^k, \lambda ^k)\}\) is generated by Algorithm 2, where the parameters \(\beta \) and \(\gamma \) are taken according to (3.8) and (3.9) respectively. Then \(\Psi _G(x_1^{k+1}, \ldots , x_N^{k+1}, \lambda ^{k+1}, x_N^k)\) monotonically decreases over \(k\ge 0\).

Proof

From Step 1 of Algorithm 2 it is easy to see that

$$\begin{aligned} \mathcal {L}_\beta \left( x_1^{k+1},\ldots , x_{N-1}^{k+1}, x_N^k, \lambda ^k\right) \le \mathcal {L}_\beta \left( x_1^k, \ldots , x_N^k, \lambda ^k\right) - \sum \limits _{i=1}^{N-1}\frac{1}{2}\left\| x_i^k -x_i^{k+1}\right\| ^2_{H_i}. \end{aligned}$$
(3.13)

From Step 2 of Algorithm 2 we get that

$$\begin{aligned} \begin{array}{lll} 0 &{} = &{} \left( x_N^k - x_N^{k+1}\right) ^\top \left[ \nabla f(x_1^{k+1}, \ldots , x_{N-1}^{k+1}, x_N^k) - \lambda ^k \right. \\ &{}&{}\left. + \beta \left( \sum _{i=1}^{N-1} A_i x_i^{k+1} + x_N^k - b \right) - \frac{1}{\gamma }\left( x_N^k - x_N^{k+1}\right) \right] \\ &{} \le &{} f(x_1^{k+1},\ldots , x_{N-1}^{k+1}, x_N^k) - f(x_1^{k+1}, \ldots , x_N^{k+1}) \\ &{}&{}+ \frac{L}{2}\left\| x_N^k - x_N^{k+1} \right\| ^2 - \left( x_N^k - x_N^{k+1}\right) ^\top \lambda ^k \\ &{} &{} + \frac{\beta }{2}\left\| x_N^k - x_N^{k+1}\right\| ^2 + \frac{\beta }{2}\left\| \sum \limits _{i=1}^{N-1} A_i x_i^{k+1} + x_N^k - b\right\| ^2 \\ &{}&{}- \frac{\beta }{2}\left\| \sum \limits _{i=1}^{N-1} A_i x_i^{k+1} + x_N^{k+1} - b\right\| ^2 - \frac{1}{\gamma }\left\| x_N^k - x_N^{k+1}\right\| ^2 \\ &{} = &{} \mathcal {L}_\beta ( x_1^{k+1},\ldots ,x_{N-1}^{k+1}, x_N^k, \lambda ^k) - \mathcal {L}_\beta (x_1^{k+1}, \ldots , x_N^{k+1}, \lambda ^k) \\ &{}&{}+ \left( \frac{L+\beta }{2} - \frac{1}{\gamma }\right) \left\| x_N^k - x_N^{k+1}\right\| ^2, \end{array} \end{aligned}$$
(3.14)

where the inequality follows from (3.2) and (3.3). Moreover, the following equality holds trivially

$$\begin{aligned} \mathcal {L}_\beta (x_1^{k+1}, \ldots , x_N^{k+1}, \lambda ^{k+1}) = \mathcal {L}_\beta (x_1^{k+1}, \ldots , x_N^{k+1}, \lambda ^k) + \frac{1}{\beta }\left\| \lambda ^k - \lambda ^{k+1}\right\| ^2. \end{aligned}$$
(3.15)

Combining (3.13), (3.14), (3.15) and (3.10) yields that

$$\begin{aligned}&\mathcal {L}_\beta (x_1^{k+1}, \ldots , x_N^{k+1}, \lambda ^{k+1}) - \mathcal {L}_\beta (x_1^k, \ldots , x_N^k, \lambda ^k) \\&\quad \le \left( \frac{L+\beta }{2} - \frac{1}{\gamma }\right) \left\| x_N^k - x_N^{k+1}\right\| ^2 - \sum \limits _{i=1}^{N-1}\frac{1}{2}\left\| x_i^k -x_i^{k+1}\right\| ^2_{H_i} + \frac{1}{\beta }\left\| \lambda ^k - \lambda ^{k+1}\right\| ^2 \\&\quad \le \left( \frac{L+\beta }{2} - \frac{1}{\gamma } + \frac{3}{\beta }\left[ \beta - \frac{1}{\gamma }\right] ^2\right) \left\| x_N^k - x_N^{k+1}\right\| ^2\\&\qquad + \frac{3}{\beta }\left[ \left( \beta - \frac{1}{\gamma }\right) ^2 + L^2\right] \left\| x_N^{k-1} - x_N^k \right\| ^2 \\&\qquad + \sum \limits _{i=1}^{N-1} \left( x_i^k - x_i^{k+1}\right) ^{\top }\left( \frac{3L^2}{\beta }I - \frac{1}{2}H_i\right) \left( x_i^k - x_i^{k+1}\right) , \end{aligned}$$

which further implies that

$$\begin{aligned}&\Psi _G(x_1^{k+1}, \ldots , x_N^{k+1}, \lambda ^{k+1}, x_N^k) - \Psi _G(x_1^k, \ldots , x_N^k, \lambda ^k, x_N^{k-1}) \nonumber \\&\quad \le \left( \frac{L+\beta }{2} - \frac{1}{\gamma } + \frac{6}{\beta }\left[ \beta - \frac{1}{\gamma }\right] ^2 + \frac{3L^2}{\beta }\right) \left\| x_N^k - x_N^{k+1}\right\| ^2 \nonumber \\&\qquad - \sum \limits _{i=1}^{N-1}\left\| x_i^k - x_i^{k+1}\right\| ^2_{ \frac{1}{2}H_i - \frac{3L^2}{\beta }I}. \end{aligned}$$
(3.16)

It is easy to verify that when \(\beta > \frac{18\sqrt{3}+6}{13}L\), any \(\gamma \) defined as in (3.9) satisfies \(\gamma >0\) and

$$\begin{aligned} \frac{L+\beta }{2} - \frac{1}{\gamma } + \frac{6}{\beta }\left[ \beta - \frac{1}{\gamma }\right] ^2 + \frac{3L^2}{\beta } < 0. \end{aligned}$$
(3.17)

Therefore, choosing \(\beta > \max \left( \frac{18\sqrt{3}+6}{13}L,\; \max \limits _{i=1,2,\ldots ,N-1}\frac{6 L^2}{\sigma _{\min }(H_i)}\right) \) and \(\gamma \) as in (3.9) guarantees that \(\Psi _G(x_1^{k+1}, \ldots , x_N^{k+1}, \lambda ^{k+1}, x_N^k)\) monotonically decreases over \(k\ge 0\). In fact, (3.17) can be verified as follows. By denoting \(z=\beta -\frac{1}{\gamma }\), (3.17) is equivalent to

$$\begin{aligned} 12 z^2 + 2\beta z + \left( 6L^2 + \beta L - \beta ^2\right) < 0, \end{aligned}$$

which holds when \(\beta > \frac{18\sqrt{3}+6}{13}L\) and \(\frac{-\beta - \sqrt{13\beta ^2 - 12\beta L - 72L^2}}{12}< z < \frac{-\beta + \sqrt{13\beta ^2 - 12\beta L - 72L^2}}{12}\), i.e.,

$$\begin{aligned} \frac{-13\beta - \sqrt{13\beta ^2 - 12\beta L - 72L^2}}{12}< -\frac{1}{\gamma } < \frac{-13\beta + \sqrt{13\beta ^2 - 12\beta L - 72L^2}}{12}, \end{aligned}$$

which holds when \(\gamma \) is chosen as in (3.9).\(\square \)
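
The above choice can also be spot-checked numerically; in the following minimal sketch, \(L=1\) and \(\sigma _{\min }(H_i)=1\) (i.e., \(H_i=I\)) are illustrative values.

```python
import numpy as np

L = 1.0
sigma_min_H = 1.0                                                   # assume H_i = I for this check
beta = 1.01 * max((18*np.sqrt(3) + 6)/13 * L, 6*L**2 / sigma_min_H)               # (3.8)
disc = np.sqrt(13*beta**2 - 12*beta*L - 72*L**2)
denom = 6*L**2 + beta*L + 13*beta**2
gammas = np.linspace((13*beta - disc)/denom, (13*beta + disc)/denom, 9)[1:-1]     # interior of (3.9)
for gamma in gammas:
    lhs = (L + beta)/2 - 1/gamma + (6/beta)*(beta - 1/gamma)**2 + 3*L**2/beta     # LHS of (3.17)
    print(f"gamma = {gamma:.4f}:  LHS of (3.17) = {lhs:.4f}")       # strictly negative throughout
```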

Lemma 3.11

Suppose the sequence \(\{(x_1^k,\ldots ,x_N^k, \lambda ^k)\}\) is generated by Algorithm 2. Under the same conditions as in Lemma 3.10, for any \(k\ge 0\), we have

$$\begin{aligned} \Psi _G\left( x_1^{k+1}, \ldots , x_N^{k+1}, \lambda ^{k+1}, x_N^k\right) \ge \sum _{i=1}^{N-1} r_i^*+f^*, \end{aligned}$$

where \(r_i^*\) and \(f^*\) are defined in Assumption 3.2.

Proof

Note that from (3.11), we have

$$\begin{aligned}&\mathcal {L}_\beta (x_1^{k+1}, \ldots , x_N^{k+1}, \lambda ^{k+1}) \\&\quad = \sum \limits _{i=1}^{N-1} r_i(x_i^{k+1}) + f(x_1^{k+1}, \ldots , x_N^{k+1}) \\&\qquad - \left( \sum \limits _{i=1}^{N-1} A_i x_i^{k+1} + x_N^{k+1} - b \right) ^\top \nabla _N f(x_1^{k+1}, \ldots , x_N^{k+1}) \\&\qquad + \frac{\beta }{2}\left\| \sum \limits _{i=1}^{N-1} A_i x_i^{k+1} + x_N^{k+1} - b \right\| ^2 \\&\qquad - \left( \sum \limits _{i=1}^{N-1} A_i x_i^{k+1} + x_N^{k+1} - b \right) ^\top \times \left[ \left( \beta - \frac{1}{\gamma }\right) \left( x_N^k - x_N^{k+1}\right) \right. \\&\qquad \left. + \left( \nabla _N f(x_1^{k+1}, \ldots , x_{N-1}^{k+1}, x_N^k) - \nabla _N f(x_1^{k+1}, \ldots , x_N^{k+1})\right) \right] \\&\quad \ge \sum \limits _{i=1}^{N-1} r_i(x_i^{k+1}) + f(x_1^{k+1}, \ldots , x_{N-1}^{k+1}, b-\sum _{i=1}^{N-1} A_i x_i^{k+1}) \\&\qquad + \left( \frac{\beta }{2} -\frac{\beta }{6} - \frac{L}{2}\right) \left\| \sum \limits _{i=1}^{N-1} A_i x_i^{k+1} + x_N^{k+1} - b \right\| ^2 \\&\qquad - \frac{3}{\beta }\left[ \left( \beta - \frac{1}{\gamma }\right) ^2 + L^2\right] \left\| x_N^k - x_N^{k+1}\right\| ^2 \\&\quad \ge \sum \limits _{i=1}^{N-1} r_i^* + f^* - \frac{3}{\beta }\left[ \left( \beta - \frac{1}{\gamma }\right) ^2 + L^2\right] \left\| x_N^k - x_N^{k+1}\right\| ^2, \end{aligned}$$

where the first inequality follows from (3.2), and the second inequality is due to \(\beta \ge 3L/2\). The desired result follows from the definition of \(\Psi _G\) in (3.12).\(\square \)

Now we are ready to give the iteration complexity of Algorithm 2 for finding an \(\epsilon \)-stationary solution of (1.1).

Theorem 3.12

Suppose the sequence \(\{(x_1^k,\ldots ,x_N^k, \lambda ^k)\}\) is generated by Algorithm 2. Furthermore, suppose that \(\beta \) satisfies (3.8) and \(\gamma \) satisfies (3.9). Denote

$$\begin{aligned}&\kappa _1 := \frac{3}{\beta ^2}\left[ \left( \beta {-} \frac{1}{\gamma }\right) ^2 {+} L^2\right] ,\quad \kappa _2 := \left( |\beta {-} \frac{1}{\gamma }| {+} L \right) ^2,\quad \kappa _3:=\max \limits _{1\le i\le N-1}\left( \mathrm {diam}(\mathcal {X}_i)\right) ^2, \\&\kappa _4 := \left( L + \beta \sqrt{N}\max \limits _{1\le i\le N}\left[ \Vert A_i \Vert _2^2\right] + \max \limits _{1\le i\le N}\Vert H_i\Vert _2 \right) ^2\\ \end{aligned}$$

and

$$\begin{aligned} \tau:= & {} \min \left\{ -\left( \frac{L+\beta }{2} - \frac{1}{\gamma } + \frac{6}{\beta }\left[ \beta - \frac{1}{\gamma }\right] ^2 + \frac{3L^2}{\beta }\right) , \right. \nonumber \\&\left. \min _{i=1,\ldots ,N-1}\left\{ -\left( \frac{3L^2}{\beta } - \frac{\sigma _{\min }(H_i)}{2}\right) \right\} \right\} > 0. \end{aligned}$$
(3.18)

Then to get an \(\epsilon \)-stationary solution, the number of iterations that the algorithm runs can be upper bounded by:

$$\begin{aligned} K := \left\{ \begin{array}{ll} \left\lceil \frac{2\max \{\kappa _1,\kappa _2,\kappa _4\cdot \kappa _3\}}{\tau \,\epsilon ^2}\left( \Psi _G(x_1^1, \ldots , x_N^1, \lambda ^1, x_N^0) - \sum _{i=1}^{N-1}r_i^* - f^*\right) \right\rceil , &{} \text{ for } \text{ Setting } \text{1 } \\ \left\lceil \frac{2\max \{\kappa _1,\kappa _2,\kappa _4\}}{\tau \,\epsilon ^2}\left( \Psi _G(x_1^1, \ldots , x_N^1, \lambda ^1, x_N^0) - \sum _{i=1}^{N-1}r_i^* - f^*\right) \right\rceil , &{} \text{ for } \text{ Setting } \text{2 } \end{array}\right. \end{aligned}$$
(3.19)

and we can further identify one iteration \(\hat{k} \in \mathop \mathrm{argmin}\limits _{2\le k \le K+1} \sum _{i=1}^N \left( \Vert x_i^k - x_i^{k+1} \Vert ^2 + \Vert x_i^{k-1} - x_i^k \Vert ^2\right) \) such that \((x_1^{\hat{k}}, \ldots , x_N^{\hat{k}} )\) is an \(\epsilon \)-stationary solution for optimization problem (1.1) with Lagrange multiplier \(\lambda ^{\hat{k}}\) and \(A_N=I\), for Settings 1 and 2 respectively.

Proof

For ease of presentation, denote

$$\begin{aligned} \theta _k:=\sum _{i=1}^N(\Vert x_i^{k} - x_i^{k+1}\Vert ^2 + \Vert x_i^{k-1} - x_i^{k}\Vert ^2). \end{aligned}$$
(3.20)

By summing (3.16) over \(k=1,\ldots ,K\), we obtain that

$$\begin{aligned}&\Psi _G(x_1^{K+1}, \ldots , x_N^{K+1}, \lambda ^{K+1}, x_N^K) - \Psi _G(x_1^1, \ldots , x_N^1, \lambda ^1, x_N^0) \nonumber \\&\quad \le -\tau \sum _{k=1}^{K} \sum _{i=1}^N \left\| x_i^k - x_i^{k+1}\right\| ^2, \end{aligned}$$
(3.21)

where \(\tau \) is defined in (3.18). By invoking Lemmas 3.10 and 3.11, we get

$$\begin{aligned} \min _{2\le k \le K+1} \theta _k\le & {} \frac{1}{\tau \,K}\left[ \Psi _G(x_1^1, \ldots , x_N^1, \lambda ^1, x_N^0) + \Psi _G(x_1^2, \ldots , x_N^2, \lambda ^2, x_N^1) - 2\sum _{i=1}^{N-1}r_i^* - 2f^*\right] \\\le & {} \frac{2}{\tau \,K}\left[ \Psi _G(x_1^1, \ldots , x_N^1, \lambda ^1, x_N^0) - \sum _{i=1}^{N-1}r_i^* - f^*\right] . \end{aligned}$$

We now derive upper bounds on the terms in (3.5) and (3.6) through \(\theta _k\). Note that (3.11) implies that

$$\begin{aligned}&\Vert \lambda ^{k+1} - \nabla _N f(x_1^{k+1}, \ldots , x_N^{k+1})\Vert \\&\quad \le | \beta - \frac{1}{\gamma } | \, \Vert x_N^k - x_N^{k+1}\Vert + \Vert \nabla _N f(x_1^{k+1}, \ldots , x_{N-1}^{k+1}, x_N^k) - \nabla _N f(x_1^{k+1}, \ldots , x_N^{k+1}) \Vert \\&\quad \le \left[ |\beta - \frac{1}{\gamma }| + L \right] \Vert x_N^k - x_N^{k+1}\Vert , \end{aligned}$$

which yields

$$\begin{aligned} \Vert \lambda ^{k+1} - \nabla _N f(x_1^{k+1}, \ldots , x_N^{k+1}) \Vert ^2 \le \left[ |\beta - \frac{1}{\gamma }| + L \right] ^2 \theta _k. \end{aligned}$$
(3.22)

From Step 3 of Algorithm 2 and (3.10) it is easy to see that

$$\begin{aligned} \begin{array}{ll} &{} \left\| \sum \limits _{i=1}^{N-1} A_i x_i^{k+1} + x_N^{k+1} - b \right\| ^2 \\ &{}\quad = \frac{1}{\beta ^2}\Vert \lambda ^{k+1} - \lambda ^k \Vert ^2 \\ &{}\quad \le \frac{3}{\beta ^2}\left[ \beta - \frac{1}{\gamma }\right] ^2\left\| x_N^k - x_N^{k+1} \right\| ^2 + \frac{3}{\beta ^2}\left[ \left( \beta - \frac{1}{\gamma }\right) ^2 + L^2\right] \left\| x_N^{k-1} - x_N^k \right\| ^2 \\ &{}\quad \quad + \frac{3L^2}{\beta ^2}\sum _{i=1}^{N-1} \left\| x_i^k - x_i^{k+1}\right\| ^2 \\ &{}\quad \le \frac{3}{\beta ^2}\left[ \left( \beta - \frac{1}{\gamma }\right) ^2 + L^2\right] \theta _k. \end{array} \end{aligned}$$
(3.23)

We now derive upper bounds on the terms in (3.4) and (3.7) under the two settings in Table 1, respectively.

Setting 2 Because \(r_i\) is lower semi-continuous and \(\mathcal {X}_i = \mathbb {R}^{n_i}\), \(i=1,\ldots ,N-1\), it follows from Step 1 of Algorithm 2 that there exists a general subgradient \(g_i\in \partial r_i(x_i^{k+1})\) such that

$$\begin{aligned}&\mathop \mathrm{dist}\left( -\nabla _i f(x_1^{k+1},\ldots , x^{k+1}_N) + A_i^\top {\lambda }^{k+1}, \partial r_i(x_i^{k+1})\right) \nonumber \\&\quad \le \left\| g_i+\nabla _i f(x_1^{k+1},\ldots , x^{k+1}_N) - A_i^\top {\lambda }^{k+1} \right\| \nonumber \\&\quad = \bigg \Vert \nabla _i f(x_1^{k+1},\ldots , x^{k+1}_N)-\nabla _i f(x_1^{k+1},\ldots , x_i^{k+1},x_{i+1}^{k},\ldots ,x^{k}_N) \nonumber \\&\qquad + \, \beta A_i^{\top }\bigg ( \sum _{j=i+1}^{N}A_j(x_j^{k+1} - x_j^{k})\bigg ) - H_i (x_i^{k+1} - x_i^k) \bigg \Vert \nonumber \\&\quad \le L \, \sqrt{\sum _{j=i+1}^N \Vert x_j^k - x_j^{k+1} \Vert ^2 } \,+ \beta \,\Vert A_i \Vert _2\, \,\sum \limits _{j=i+1}^N \Vert A_j \Vert _2 \Vert x_j^{k+1} - x_j^{k}\Vert \nonumber \\&\qquad + \Vert H_i\Vert _2\Vert x_i^{k+1} - x_i^{k}\Vert _2\nonumber \\&\quad \le \left( L + \beta \sqrt{N}\max _{i+1\le j\le N}\left[ \Vert A_j \Vert _2\right] \Vert A_i \Vert _2 \right) \,\sqrt{\sum _{j=i+1}^N \Vert x_j^k - x_j^{k+1} \Vert ^2 } \nonumber \\&\qquad + \Vert H_i\Vert _2 \Vert x_i^{k+1} - x_i^{k}\Vert _2\nonumber \\&\quad \le \left( L + \beta \sqrt{N}\max _{1\le i\le N}\left[ \Vert A_i \Vert _2^2\right] + \max _{1\le i\le N} \Vert H_i\Vert _2 \right) \sqrt{\theta _k}. \end{aligned}$$
(3.24)

By combining (3.24), (3.22) and (3.23) we conclude that Algorithm 2 returns an \(\epsilon \)-stationary solution for (1.1) according to Definition 3.6 under the conditions of Setting 2 in Table 1.

Setting 1 Under this setting, we know \(r_i\) is Lipschitz continuous and \(\mathcal {X}_i \subset \mathbb {R}^{n_i}\) is convex and compact. From Assumption 3.1 and the fact that \(\mathcal {X}_i\) is compact, we know \(r_i(x_i) + f(x_1,\ldots , x_N)\) is locally Lipschitz continuous with respect to \(x_i\) for \(i=1,2,\ldots , N-1\). Similar to (3.24), for any \(x_i \in \mathcal {X}_i\), Step 1 of Algorithm 2 yields that

$$\begin{aligned}&\left( x_i-x_i^{k+1}\right) ^\top \left[ g_i+\nabla _i f(x_1^{k+1},\ldots , x^{k+1}_N) - A_i^\top {\lambda }^{k+1} \right] \nonumber \\&\quad \ge \left( x_i-x_i^{k+1}\right) ^\top \bigg {[}\nabla _i f(x_1^{k+1},\ldots , x^{k+1}_N)-\nabla _i f(x_1^{k+1},\ldots , x_i^{k+1},x_{i+1}^{k},\ldots ,x^{k}_N) \nonumber \\&\qquad + \, \beta A_i^{\top }\bigg {(} \sum _{j=i+1}^{N}A_j(x_j^{k+1} -x_j^{k}) \bigg {)} - H_i(x_i^{k+1} - x_i^k) \bigg {]} \nonumber \\&\quad \ge -L \, \mathrm {diam}(\mathcal {X}_i) \sqrt{\sum _{j=i+1}^N \Vert x_j^k - x_j^{k+1} \Vert ^2 }\nonumber \\&\qquad - \beta \Vert A_i \Vert _2\, \mathrm {diam}(\mathcal {X}_i)\sum \limits _{j=i+1}^N \Vert A_j \Vert _2 \Vert x_j^{k+1} - x_j^{k}\Vert - \mathrm {diam}(\mathcal {X}_i)\, \Vert H_i\Vert _2 \Vert x_i^{k+1} - x_i^{k}\Vert _2 \nonumber \\&\quad \ge - \left( \beta \sqrt{N}\max _{1\le i\le N}\left[ \Vert A_i \Vert _2^2\right] + L + \max _{1\le i\le N}\Vert H_i\Vert _2 \right) \max _{1\le i\le N-1}\left[ \mathrm {diam}(\mathcal {X}_i)\right] \sqrt{\theta _k},\nonumber \\ \end{aligned}$$
(3.25)

where \(g_i\in \partial r_i(x_i^{k+1})\) is a general subgradient of \(r_i\) at \(x_i^{k+1}\). By combining (3.25), (3.22) and (3.23) we conclude that Algorithm 2 returns an \(\epsilon \)-stationary solution for (1.1) according to Definition 3.5 under the conditions of Setting 1 in Table 1.\(\square \)

Remark 3.13

Note that the potential function \(\Psi _G\) defined in (3.12) is related to the augmented Lagrangian function. The augmented Lagrangian function has been used as a potential function in analyzing the convergence of nonconvex splitting and ADMM methods in [2, 31, 32, 33, 38]. See [32] for a more detailed discussion on this.

Remark 3.14

In Step 1 of Algorithm 2, we can also replace the function

$$\begin{aligned} f(x_1^{k+1},\ldots ,x_{i-1}^{k+1},x_i,x_{i+1}^k,\ldots ,x_N^k) \end{aligned}$$

by its linearization

$$\begin{aligned}&f(x_1^{k+1},\ldots ,x_{i-1}^{k+1},x_i^k,x_{i+1}^k,\ldots ,x_N^k) \\&\quad + \left( x_i - x_i^k \right) ^\top \nabla _i f(x_1^{k+1},\ldots , x_{i-1}^{k+1},x_i^k,x_{i+1}^k, \ldots , x_{N}^k), \end{aligned}$$

so that the subproblem can be solved by computing the proximal mappings of \(r_i\), with some properly chosen matrix \(H_i\) for \(i =1, \ldots , N-1\), and the same iteration bound still holds.
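
One way to realize this (an illustrative reconstruction; the particular choice of \(H_i\) below is ours and need not coincide with the intended one) is to take \(H_i = \eta _i I - \beta A_i^\top A_i\) with \(\eta _i > \beta \Vert A_i\Vert _2^2\): the quadratic terms in \(x_i\) then collapse, and the update becomes a single proximal step of \(r_i\). In the sketch below, soft-thresholding (the proximal mapping of an \(\ell _1\) term) is used as a stand-in; a nonconvex proximal mapping such as the capped-\(\ell _1\) one sketched earlier could be substituted.

```python
import numpy as np

def linearized_block_update(x_i, grad_i, A_i, lam_mult, resid, eta, beta, prox_r):
    """One linearized Step-1 update in the spirit of Remark 3.14: with
    H_i = eta*I - beta*A_i^T A_i (eta > beta*||A_i||_2^2), the subproblem
       min_x r_i(x) + <grad_i, x - x_i> - <lam, A_i x>
             + beta/2*||A_i x + s - b||^2 + 0.5*||x - x_i||_{H_i}^2
    reduces to prox_{r_i/eta}( x_i - w/eta ) with
       w = grad_i - A_i^T lam + beta*A_i^T resid,
    where resid = sum_j A_j x_j - b is evaluated with block i at its old value x_i."""
    w = grad_i - A_i.T @ lam_mult + beta * (A_i.T @ resid)
    return prox_r(x_i - w / eta, 1.0 / eta)

# illustrative data: r_i = 0.1*||.||_1, whose prox is soft-thresholding
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - 0.1 * t, 0.0)
rng = np.random.default_rng(0)
m, ni = 8, 5
A_i = rng.standard_normal((m, ni))
beta = 2.0
eta = 1.1 * beta * np.linalg.norm(A_i, 2) ** 2      # ensures H_i = eta*I - beta*A_i^T A_i > 0
x_i = rng.standard_normal(ni)
grad_i = rng.standard_normal(ni)                    # placeholder for nabla_i f(...)
lam_mult = rng.standard_normal(m)
resid = rng.standard_normal(m)                      # placeholder for sum_j A_j x_j - b
print(linearized_block_update(x_i, grad_i, A_i, lam_mult, resid, eta, beta, soft))
```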

3.4 Proximal majorization ADMM (proximal ADMM-m)

Our proximal ADMM-m solves (1.1) under the condition that \(A_N\) has full row rank. In this section, we use \(\sigma _N\) to denote the smallest eigenvalue of \(A_NA_N^\top \). Note that \(\sigma _N >0\) because \(A_N\) has full row rank. Our proximal ADMM-m can be described as follows

[Algorithm 3: Proximal majorization ADMM (proximal ADMM-m) for solving (1.1) when \(A_N\) has full row rank]

In Algorithm 3, \(U(x_1,\ldots ,x_{N-1},x_N,\lambda ,\bar{x})\) is defined as

$$\begin{aligned}&U(x_1,\ldots ,x_{N-1},x_N,\lambda ,\bar{x}) \\&\quad = f(x_1, \ldots , x_{N-1}, \bar{x}) + \left( x_N - \bar{x} \right) ^\top \nabla _N f(x_1, \ldots , x_{N-1}, \bar{x}) \\&\qquad + \frac{L}{2}\left\| x_N - \bar{x} \right\| ^2 - \left\langle \lambda , \sum _{i=1}^N A_i x_i - b\right\rangle + \frac{\beta }{2}\left\| \sum _{i=1}^N A_i x_i - b \right\| ^2. \end{aligned}$$

Moreover, \(\beta \) can be chosen as

$$\begin{aligned} \beta > \max \left\{ \frac{18L}{\sigma _N}, \; \max \limits _{1\le i\le N-1}\left\{ \frac{6L^2}{\sigma _N\sigma _{\min }(H_i)}\right\} \right\} \end{aligned}$$
(3.26)

to guarantee the convergence rate of the algorithm established in Lemma 3.16 and Theorem 3.18.
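For concreteness, the following sketch computes a valid \(\beta \) via (3.26) and the resulting constant \(\tau \) of (3.33) from the problem data; the safety factor margin is ours, introduced only so that the strict inequality in (3.26) holds.

```python
import numpy as np

def choose_beta_and_tau(A_N, H_list, L, margin=1.01):
    """Illustrative computation of beta per (3.26) and tau per (3.33).
    A_N: last coefficient matrix (full row rank); H_list: proximal matrices H_i,
    i = 1,...,N-1; L: Lipschitz constant of the gradient of f."""
    sigma_N = np.min(np.linalg.eigvalsh(A_N @ A_N.T))      # smallest eigenvalue of A_N A_N^T
    sig_min_H = [np.min(np.linalg.eigvalsh(H)) for H in H_list]
    beta = margin * max(18 * L / sigma_N,
                        max(6 * L**2 / (sigma_N * s) for s in sig_min_H))
    tau = min(L / 2 - 9 * L**2 / (beta * sigma_N),
              min(s / 2 - 3 * L**2 / (beta * sigma_N) for s in sig_min_H))
    return beta, tau   # tau > 0 by this choice of beta
```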

It is worth noting that proximal ADMM-m and proximal ADMM-g differ only in Step 2: Step 2 of proximal ADMM-g takes a gradient step of the augmented Lagrangian function with respect to \(x_N\), while Step 2 of proximal ADMM-m requires minimizing a quadratic function of \(x_N\).

We provide some lemmas that are useful in analyzing the iteration complexity of proximal ADMM-m for solving (1.1).

Lemma 3.15

Suppose the sequence \(\{(x_1^k,\ldots ,x_N^k,\lambda ^k)\}\) is generated by Algorithm 3. The following inequality holds

$$\begin{aligned} \left\| \lambda ^{k+1} - \lambda ^k \right\| ^2 \le \frac{3L^2}{\sigma _N}\left\| x_N^k - x_N^{k+1} \right\| ^2 + \frac{6L^2}{\sigma _N}\left\| x_N^{k-1} - x_N^k \right\| ^2 + \frac{3 L^2}{\sigma _N}\sum _{i=1}^{N-1}\left\| x_i^k - x_i^{k+1} \right\| ^2. \end{aligned}$$
(3.27)

Proof

From the optimality conditions of Step 2 of Algorithm 3, we have

$$\begin{aligned} 0= & {} \nabla _N f(x_1^{k+1}, \ldots , x_{N-1}^{k+1}, x_N^k) - A_N^\top \lambda ^k + \beta A_N^\top \left( \sum _{i=1}^N A_i x_i^{k+1} - b \right) \\&- L\left( x_N^k - x_N^{k+1}\right) \nonumber \\= & {} \nabla _N f(x_1^{k+1}, \ldots , x_{N-1}^{k+1}, x_N^k) - A_N^\top \lambda ^{k+1} - L\left( x_N^k - x_N^{k+1}\right) , \end{aligned}$$

where the second equality is due to Step 3 of Algorithm 3. Therefore, we have

$$\begin{aligned}&\Vert \lambda ^{k+1} - \lambda ^k \Vert ^2 \\&\quad \le \sigma _N^{-1}\Vert A_N^\top \lambda ^{k+1} - A_N^\top \lambda ^k \Vert ^2\\&\quad \le \sigma _N^{-1}\Vert (\nabla _N f(x_1^{k+1}, \ldots , x_{N-1}^{k+1}, x_N^k) \\&\qquad - \nabla _N f(x_1^k, \ldots , x_{N-1}^k, x_N^{k-1}) ) - L(x_N^k - x_N^{k+1}) + L( x_N^{k-1} - x_N^k)\Vert ^2 \\&\quad \le \frac{3}{\sigma _N}\Vert \nabla _N f(x_1^{k+1}, \ldots , x_{N-1}^{k+1}, x_N^k) - \nabla _N f(x_1^k, \ldots , x_{N-1}^k, x_N^{k-1}) \Vert ^2 \\&\qquad + \frac{3L^2}{\sigma _N} (\Vert x_N^k - x_N^{k+1} \Vert ^2 + \Vert x_N^{k-1} - x_N^k\Vert ^2) \\&\quad \le \frac{3L^2}{\sigma _N}\Vert x_N^k - x_N^{k+1} \Vert ^2 + \frac{6L^2}{\sigma _N}\Vert x_N^{k-1} - x_N^k \Vert ^2 + \frac{3 L^2}{\sigma _N}\sum _{i=1}^{N-1}\Vert x_i^k - x_i^{k+1} \Vert ^2. \end{aligned}$$

\(\square \)

We define the following function that will be used in the analysis of proximal ADMM-m:

$$\begin{aligned} \Psi _L\left( x_1, \ldots , x_N, \lambda , \bar{x}\right) = \mathcal {L}_\beta (x_1, \ldots , x_N, \lambda ) + \frac{6L^2}{\beta \sigma _N}\left\| x_N - \bar{x}\right\| ^2. \end{aligned}$$

Similar to the function used in proximal ADMM-g, we can prove the monotonicity and boundedness of function \(\Psi _L\).
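As a minimal numerical companion (all function names are ours; f_val and r_vals are the already-evaluated values of f and of the \(r_i\)'s), the following sketch evaluates \(\mathcal {L}_\beta \) and \(\Psi _L\) for given block values, which is convenient, e.g., for monitoring the monotone decrease asserted in Lemma 3.16.

```python
import numpy as np

def augmented_lagrangian(f_val, r_vals, A_list, x_blocks, b, lam, beta):
    # L_beta(x_1,...,x_N, lambda) for given block values (r_N = 0 in this setting)
    resid = sum(A @ x for A, x in zip(A_list, x_blocks)) - b
    return f_val + sum(r_vals) - lam @ resid + 0.5 * beta * (resid @ resid)

def psi_L(f_val, r_vals, A_list, x_blocks, b, lam, beta, L, sigma_N, x_N_bar):
    # Psi_L = L_beta + (6 L^2 / (beta * sigma_N)) * ||x_N - x_bar||^2
    d_N = x_blocks[-1] - x_N_bar
    return (augmented_lagrangian(f_val, r_vals, A_list, x_blocks, b, lam, beta)
            + 6 * L**2 / (beta * sigma_N) * (d_N @ d_N))
```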

Lemma 3.16

Suppose the sequence \(\{(x_1^k,\ldots ,x_N^k,\lambda ^k)\}\) is generated by Algorithm 3, where \(\beta \) is chosen according to (3.26). Then \(\Psi _L(x_1^{k+1},\ldots ,x_N^{k+1},\lambda ^{k+1},x_N^k)\) monotonically decreases over \(k>0\).

Proof

By Step 1 of Algorithm 3 one observes that

$$\begin{aligned} \mathcal {L}_\beta \left( x_1^{k+1}, \ldots , x_{N-1}^{k+1}, x_N^k,\lambda ^k\right) \le \mathcal {L}_\beta \left( x_1^k, \ldots , x_N^k,\lambda ^k\right) - \sum _{i=1}^{N-1} \frac{1}{2}\left\| x_i^k - x_i^{k+1}\right\| ^2_{H_i}, \end{aligned}$$
(3.28)

while by Step 2 of Algorithm 3 we have

$$\begin{aligned} \begin{array}{lll} 0 &{}=&{} \left( x_N^k - x_N^{k+1}\right) ^\top \left[ \nabla _N f(x_1^{k+1}, \ldots , x_{N-1}^{k+1}, x_N^k) - {A_N}^\top \lambda ^k \right. \\ &{}&{}\left. +\, \beta {A_N}^\top \left( \sum _{i=1}^N A_i x_i^{k+1} - b \right) - L\left( x_N^k - x_N^{k+1}\right) \right] \\ &{}\le &{} f(x_1^{k+1},\ldots , x_{N-1}^{k+1}, x_N^k) - f(x_1^{k+1},\ldots , x_N^{k+1}) - \frac{L}{2}\left\| x_N^k - x_N^{k+1} \right\| ^2 \\ &{}&{}-\, \left( \sum _{i=1}^{N-1} A_i x_i^{k+1} + A_N x_N^k -b\right) ^\top \lambda ^k + \left( \sum _{i=1}^N A_i x_i^{k+1} - b\right) ^\top \lambda ^k \\ &{}&{}+\, \frac{\beta }{2}\left\| \sum _{i=1}^{N-1} A_i x_i^{k+1} + A_N x_N^k -b\right\| ^2 - \frac{\beta }{2}\left\| \sum _{i=1}^N A_i x_i^{k+1} - b\right\| ^2 \\ &{}&{}-\, \frac{\beta }{2}\left\| A_N x_N^k - A_N x_N^{k+1}\right\| ^2 \\ &{}\le &{} \mathcal {L}_{\beta }( x_1^{k+1}, \ldots , x_{N-1}^{k+1}, x_N^k, \lambda ^k) - \mathcal {L}_{\beta }(x_1^{k+1}, \ldots , x_N^{k+1}, \lambda ^k) - \frac{L}{2}\left\| x_N^k - x_N^{k+1}\right\| ^2, \end{array} \end{aligned}$$
(3.29)

where the first inequality is due to (3.2) and (3.3). Moreover, from (3.27) we have

$$\begin{aligned}&\mathcal {L}_\beta (x_1^{k+1}, \ldots , x_N^{k+1}, \lambda ^{k+1}) - \mathcal {L}_\beta ( x_1^{k+1}, \ldots , x_N^{k+1}, \lambda ^k) \nonumber \\&\quad = \frac{1}{\beta }\Vert \lambda ^k - \lambda ^{k+1}\Vert ^2 \nonumber \\&\quad \le \frac{3L^2}{\beta \sigma _N}\Vert x_N^k - x_N^{k+1}\Vert ^2 + \frac{6L^2}{\beta \sigma _N}\Vert x_N^{k-1} - x_N^k \Vert ^2 + \frac{3L^2}{\beta \sigma _N}\sum _{i=1}^{N-1}\Vert x_i^k - x_i^{k+1} \Vert ^2.\qquad \qquad \end{aligned}$$
(3.30)

Combining (3.28), (3.29) and (3.30) yields that

$$\begin{aligned}&\mathcal {L}_\beta (x_1^{k+1}, \ldots , x_N^{k+1}, \lambda ^{k+1}) - \mathcal {L}_\beta (x_1^k, \ldots , x_N^k, \lambda ^k) \\&\quad \le \left( \frac{3L^2}{\beta \sigma _N} -\frac{L}{2} \right) \Vert x_N^k - x_N^{k+1}\Vert ^2 + \sum _{i=1}^{N-1}\Vert x_i^k - x_i^{k+1}\Vert ^2_{ \frac{3L^2}{\beta \sigma _N}I - \frac{1}{2}H_i } \\&\qquad + \frac{6L^2}{\beta \sigma _N}\Vert x_N^{k-1} - x_N^k\Vert ^2, \end{aligned}$$

which further implies that

$$\begin{aligned}&\Psi _L(x_1^{k+1}, \ldots , x_N^{k+1}, \lambda ^{k+1}, x_N^k) - \Psi _L(x_1^k, \ldots , x_N^k, \lambda ^k, x_N^{k-1}) \nonumber \\&\quad \le \left( \frac{9L^2}{\beta \sigma _N} - \frac{L}{2}\right) \left\| x_N^k - x_N^{k+1}\right\| ^2 + \sum _{i=1}^{N-1}\left( \frac{3L^2}{\beta \sigma _N} - \frac{\sigma _{\min }(H_i)}{2}\right) \left\| x_i^k - x_i^{k+1} \right\| ^2 < 0,\nonumber \\ \end{aligned}$$
(3.31)

where the second inequality is due to (3.26). This completes the proof.\(\square \)

The following lemma shows that the function \(\Psi _L\) is lower bounded.

Lemma 3.17

Suppose the sequence \(\{(x_1^k,\ldots ,x_N^k,\lambda ^k)\}\) is generated by Algorithm 3. Under the same conditions as in Lemma 3.16, the sequence \(\{\Psi _L(x_1^{k+1},\ldots ,x_N^{k+1}, \lambda ^{k+1}, x_N^k)\}\) is bounded from below.

Proof

From Step 3 of Algorithm 3 we have

$$\begin{aligned} \begin{array}{ll} &{} \Psi _L(x_1^{k+1}, \ldots , x_N^{k+1}, \lambda ^{k+1}, x_N^k) \\ &{}\quad \ge \mathcal {L}_\beta (x_1^{k+1}, \ldots , x_N^{k+1}, \lambda ^{k+1}) \\ &{}\quad = \sum _{i=1}^{N-1} r_i(x_i^{k+1}) + f(x_1^{k+1}, \ldots , x_N^{k+1}) \\ &{}\qquad - \left( \sum _{i=1}^N A_i x_i^{k+1} - b \right) ^\top \lambda ^{k+1} + \frac{\beta }{2}\left\| \sum _{i=1}^N A_i x_i^{k+1} - b\right\| ^2 \\ &{}\quad = \sum _{i=1}^{N-1} r_i(x_i^{k+1}) + f(x_1^{k+1}, \ldots , x_N^{k+1}) - \frac{1}{\beta }(\lambda ^k - \lambda ^{k+1})^\top \lambda ^{k+1} + \frac{1}{2\beta }\Vert \lambda ^k - \lambda ^{k+1} \Vert ^2 \\ &{}\quad = \sum _{i=1}^{N-1} r_i(x_i^{k+1}) + f(x_1^{k+1}, \ldots , x_N^{k+1}) - \frac{1}{2\beta }\Vert \lambda ^k\Vert ^2 + \frac{1}{2\beta }\Vert \lambda ^{k+1}\Vert ^2 + \frac{1}{\beta }\Vert \lambda ^k - \lambda ^{k+1}\Vert ^2 \\ &{}\quad \ge \sum _{i=1}^{N-1} r_i^* + f^* - \frac{1}{2\beta }\Vert \lambda ^k\Vert ^2 + \frac{1}{2\beta }\Vert \lambda ^{k+1}\Vert ^2, \end{array} \end{aligned}$$
(3.32)

where the third equality follows from (3.3). Summing this inequality over \(k=0,1,\ldots ,K-1\) and dividing by \(K\), for any integer \(K\ge 1\), yields that

$$\begin{aligned} \frac{1}{K}\sum _{k=0}^{K-1} \Psi _L\left( x_1^{k+1}, \ldots , x_N^{k+1}, \lambda ^{k+1}, x_N^k\right) \ge \sum _{i=1}^{N-1} r_i^* + f^* - \frac{1}{2\beta }\left\| \lambda ^0\right\| ^2. \end{aligned}$$

Lemma 3.16 stipulates that \(\{ \Psi _L(x_1^{k+1}, \ldots , x_N^{k+1}, \lambda ^{k+1}, x_N^k) \}\) is a monotonically decreasing sequence; the above inequality thus further implies that the entire sequence is bounded from below.\(\square \)

We are now ready to present the iteration complexity result for proximal ADMM-m; its proof is similar to that of Theorem 3.12.

Theorem 3.18

Suppose the sequence \(\{(x_1^k,\ldots ,x_N^k,\lambda ^k) \}\) is generated by proximal ADMM-m (Algorithm 3), and \(\beta \) satisfies (3.26). Denote

$$\begin{aligned} \kappa _1:= & {} \frac{6L^2}{\beta ^2\sigma _N}, \quad \kappa _2 := 4L^2, \quad \kappa _3 := \max \limits _{1\le i\le N-1}\left( \mathrm {diam}(\mathcal {X}_i)\right) ^2, \\ \kappa _4:= & {} \left( L + \beta \sqrt{N}\max \limits _{1\le i\le N}\left[ \Vert A_i \Vert _2^2\right] + \max \limits _{1\le i\le N}\Vert H_i\Vert _2 \right) ^2, \end{aligned}$$

and

$$\begin{aligned} \tau := \min \left\{ -\left( \frac{9 L^2}{\beta \sigma _N}-\frac{L}{2}\right) , \min _{i=1,\ldots ,N-1}\left\{ -\left( \frac{3L^2}{\beta \sigma _N} - \frac{\sigma _{\min }(H_i)}{2}\right) \right\} \right\} > 0. \end{aligned}$$
(3.33)

Then to get an \(\epsilon \)-stationary solution, the number of iterations that the algorithm runs can be upper bounded by:

$$\begin{aligned} K := \left\{ \begin{array}{ll} \left\lceil \frac{2\max \{\kappa _1,\kappa _2,\kappa _4\cdot \kappa _3\}}{\tau \,\epsilon ^2}(\Psi _L(x_1^1, \ldots , x_N^1, \lambda ^1, x_N^0) - \sum _{i=1}^{N-1}r_i^* - f^*) \right\rceil , &{} \text{ for } \text{ Setting } \text{1 } \\ \left\lceil \frac{2\max \{\kappa _1,\kappa _2,\kappa _4\}}{\tau \,\epsilon ^2}(\Psi _L(x_1^1, \ldots , x_N^1, \lambda ^1, x_N^0) - \sum _{i=1}^{N-1}r_i^* - f^*) \right\rceil , &{} \text{ for } \text{ Setting } \text{2 } \end{array}\right. \end{aligned}$$
(3.34)

and we can further identify one iteration \(\hat{k} \in \mathop \mathrm{argmin}\limits _{2\le k \le K+1} \sum _{i=1}^N \left( \Vert x_i^k - x_i^{k+1} \Vert ^2 + \Vert x_i^{k-1} - x_i^k \Vert ^2\right) \) such that \((x_1^{\hat{k}}, \ldots , x_N^{\hat{k}})\) is an \(\epsilon \)-stationary solution for (1.1) with Lagrange multiplier \(\lambda ^{\hat{k}}\) (recall that \(A_N\) has full row rank), under Settings 1 and 2, respectively.

Proof

By summing (3.31) over \(k=1,\ldots ,K\), we obtain that

$$\begin{aligned}&\Psi _L(x_1^{K+1}, \ldots , x_N^{K+1}, \lambda ^{K+1}, x_N^K) - \Psi _L(x_1^1, \ldots , x_N^1, \lambda ^1, x_N^0) \nonumber \\&\qquad \le -\tau \sum _{k=1}^{K} \sum _{i=1}^N \left\| x_i^k - x_i^{k+1}\right\| ^2, \end{aligned}$$
(3.35)

where \(\tau \) is defined in (3.33). From Lemma 3.17 we know that there exists a constant \(\Psi _L^*\) such that \(\Psi _L(x_1^{k+1}, \ldots , x_N^{k+1}, \lambda ^{k+1}, x_N^k) \ge \Psi _L^*\) holds for any \(k\ge 1\). Therefore,

$$\begin{aligned} \min _{2\le k \le K+1} \theta _k \le \frac{2}{\tau \,K}\left[ \Psi _L(x_1^1, \ldots , x_N^1, \lambda ^1, x_N^0) - \Psi _L^*\right] , \end{aligned}$$
(3.36)

where \(\theta _k\) is defined in (3.20); in particular, for \(K\) defined as in (3.34), we have \(\theta _{\hat{k}}=O(\epsilon ^2)\).

We now give upper bounds to the terms in (3.5) and (3.6) through \(\theta _k\). Note that Step 2 of Algorithm 3 implies that

$$\begin{aligned}&\Vert A_N^{\top }\lambda ^{k+1} - \nabla _N f(x_1^{k+1}, \ldots , x_N^{k+1})\Vert \\&\quad \le L \, \Vert x_N^k - x_N^{k+1}\Vert + \Vert \nabla _N f(x_1^{k+1}, \ldots , x_{N-1}^{k+1}, x_N^k) - \nabla _N f(x_1^{k+1}, \ldots , x_N^{k+1}) \Vert \\&\quad \le 2L\,\Vert x_N^k - x_N^{k+1}\Vert , \end{aligned}$$

which implies that

$$\begin{aligned} \Vert A_N^\top \lambda ^{k+1} - \nabla _N f(x_1^{k+1}, \ldots , x_N^{k+1}) \Vert ^2 \le 4L^2 \theta _k. \end{aligned}$$
(3.37)

By Step 3 of Algorithm 3 and (3.27) we have

$$\begin{aligned}&\left\| \sum \limits _{i=1}^{N} A_i x_i^{k+1} - b \right\| ^2 \nonumber \\&\quad = \frac{1}{\beta ^2}\Vert \lambda ^{k+1} - \lambda ^k \Vert ^2 \nonumber \\&\quad \le \frac{3L^2}{\beta ^2\sigma _N}\left\| x_N^k - x_N^{k+1} \right\| ^2 + \frac{6L^2}{\beta ^2\sigma _N}\left\| x_N^{k-1} - x_N^k \right\| ^2 + \frac{3 L^2}{\beta ^2\sigma _N}\sum _{i=1}^{N-1}\left\| x_i^k - x_i^{k+1} \right\| ^2 \nonumber \\&\quad \le \frac{6L^2}{\beta ^2\sigma _N}\theta _k. \end{aligned}$$
(3.38)

It remains to bound the terms in (3.4) and (3.7). Since the steps are almost the same as those in the proof of Theorem 3.12, we only provide the key inequalities below.

Setting 2 Under the conditions of Setting 2 in Table 1, the inequality (3.24) becomes

$$\begin{aligned}&\mathop \mathrm{dist}\left( -\nabla _i f(x_1^{k+1},\ldots , x^{k+1}_N) + A_i^\top {\lambda }^{k+1}, \partial r_i(x_i^{k+1})\right) \nonumber \\&\quad \le \left( L + \beta \sqrt{N}\max _{1\le i\le N}\left[ \Vert A_i \Vert _2^2\right] + \max _{1\le i\le N}\Vert H_i\Vert _2\right) \sqrt{\theta _k}. \end{aligned}$$
(3.39)

By combining (3.39), (3.37) and (3.38) we conclude that Algorithm 3 returns an \(\epsilon \)-stationary solution for (1.1) according to Definition 3.6 under the conditions of Setting 2 in Table 1.

Setting 1 Under the conditions of Setting 1 in Table 1, the inequality (3.25) becomes

$$\begin{aligned}&\left( x_i-x_i^{k+1}\right) ^\top \left[ g_i+\nabla _i f(x_1^{k+1},\ldots , x^{k+1}_N) - A_i^\top {\lambda }^{k+1} \right] \nonumber \\&\ge - \left( \beta \sqrt{N}\max _{1\le i\le N}\left[ \Vert A_i \Vert _2^2\right] + L + \max _{1\le i\le N}\Vert H_i\Vert _2\right) \max \limits _{1\le i\le N-1}\left[ \mathrm {diam}(\mathcal {X}_i)\right] \sqrt{\theta _k}.\nonumber \\ \end{aligned}$$
(3.40)

By combining (3.40), (3.37) and (3.38) we conclude that Algorithm 3 returns an \(\epsilon \)-stationary solution for (1.1) according to Definition 3.5 under the conditions of Setting 1 in Table 1.\(\square \)

Remark 3.19

In Step 1 of Algorithm 3, we can replace the function \(f(x_1^{k+1},\ldots ,x_{i-1}^{k+1},x_i,x_{i+1}^k,\ldots ,x_N^k)\) by its linearization

$$\begin{aligned}&f(x_1^{k+1},\ldots ,x_{i-1}^{k+1},x_i^k,x_{i+1}^k,\ldots ,x_N^k) \\&\quad + \left( x_i - x_i^k \right) ^\top \nabla _i f(x_1^{k+1},\ldots , x_{i-1}^{k+1},x_i^k,x_{i+1}^k, \ldots , x_{N}^k). \end{aligned}$$

Under the same conditions as in Remark 3.14, the same iteration bound follows by slightly modifying the analysis above.

4 Extensions

4.1 Relaxing the assumption on the last block variable \(x_N\)

It is noted that in (1.1) we impose some restrictions on the last block variable \(x_N\), namely \(r_N\equiv 0\) and \(A_N=I\) or \(A_N\) has full row rank. In this subsection, we show how to remove these restrictions and consider the more general problem

$$\begin{aligned} \begin{array}{ll} \min &{} f(x_1,x_2,\ldots , x_N) + \sum \limits _{i=1}^{N} r_i(x_i) \\ s.t. &{} \sum _{i=1}^N A_i x_i = b, \end{array} \end{aligned}$$
(4.1)

where \(x_i\in \mathbb {R}^{n_i}\) and \(A_i\in \mathbb {R}^{m\times n_i}\), \(i=1,\ldots ,N\).

Before proceeding, we make the following assumption on (4.1).

Assumption 4.1

Denote \(n = n_1+\cdots +n_N\). For any compact set \(S\subseteq \mathbb {R}^n\), and any sequence \(\lambda ^j\in \mathbb {R}^m\) with \(\Vert \lambda ^j\Vert \rightarrow \infty \), \(j=1,2,\ldots \), the following limit

$$\begin{aligned} \lim _{j \rightarrow \infty } \mathop \mathrm{dist}\left( -\nabla f(x_1,\ldots ,x_N) + A^\top \lambda ^j, \ \sum _{i=1}^N\partial r_i(x_i)\right) = +\infty \end{aligned}$$

holds uniformly for all \((x_1,\ldots ,x_N)\in S\), where \(A = [A_1,\ldots ,A_N]\).

Note that the above assumption implies that A has full row rank. Furthermore, if f is continuously differentiable, \(\partial r_i (S) := \bigcup _{x \in S} \partial r_i(x) \) is compact for any compact set S, and A has full row rank, then Assumption 4.1 trivially holds. On the other hand, for popular nonconvex regularization functions such as SCAD, MCP and the capped \(\ell _1\)-norm, the corresponding set \(\partial r_i (S)\) is indeed compact for any compact set S, and so Assumption 4.1 holds in all these cases.
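As a concrete illustration of the last claim (using the standard MCP formula with parameters \(\lambda _{\mathrm {MCP}}>0\) and \(\gamma >1\); this notation is ours and not used elsewhere in the paper), the scalar MCP penalty and its generalized subdifferential satisfy

$$\begin{aligned} r_{\lambda _{\mathrm {MCP}},\gamma }(t) = \left\{ \begin{array}{ll} \lambda _{\mathrm {MCP}} |t| - \frac{t^2}{2\gamma }, &{}\quad |t|\le \gamma \lambda _{\mathrm {MCP}}, \\ \frac{\gamma \lambda _{\mathrm {MCP}}^2}{2}, &{}\quad |t| > \gamma \lambda _{\mathrm {MCP}}, \end{array}\right. \qquad \partial r_{\lambda _{\mathrm {MCP}},\gamma }(t) \subseteq [-\lambda _{\mathrm {MCP}}, \lambda _{\mathrm {MCP}}] \ \ \forall \, t\in \mathbb {R}, \end{aligned}$$

so for the separable regularizer \(r_i(x_i)=\sum _j r_{\lambda _{\mathrm {MCP}},\gamma }((x_i)_j)\) every generalized subgradient has norm at most \(\lambda _{\mathrm {MCP}}\sqrt{n_i}\), and \(\partial r_i(S)\) is bounded on any compact set S.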

We introduce the following problem that is closely related to (4.1):

$$\begin{aligned} \begin{array}{ll} \min &{} f(x_1,x_2,\ldots , x_N) + \sum \limits _{i=1}^{N} r_i(x_i) + \frac{\mu (\epsilon )}{2} \Vert y \Vert ^2 \\ s.t. &{} \sum _{i=1}^{N}A_{i} x_{i} + y = b, \end{array} \end{aligned}$$
(4.2)

where \(\epsilon >0\) is the target tolerance, and \(\mu (\epsilon )\) is a function of \(\epsilon \) which will be specified later. Now, proximal ADMM-m is ready to be used for solving (4.2) because \(A_{N+1}=I\) and y is unconstrained. We have the following iteration complexity result for proximal ADMM-m to obtain an \(\epsilon \)-stationary solution of (4.1); proximal ADMM-g can be analyzed similarly.

Theorem 4.2

Consider problem (4.1) under Setting 2 in Table 1. Suppose that Assumption 4.1 holds, and the objective in (4.1), i.e., \(f+\sum _{i=1}^Nr_i\), has a bounded level set. Furthermore, suppose that f has a Lipschitz continuous gradient with Lipschitz constant L, and A is of full row rank. Now let the sequence \(\{(x_1^k,\ldots ,x_{N}^k, y^k, \lambda ^k)\}\) be generated by proximal ADMM-m for solving (4.2) with initial iterates \(y^0=\lambda ^0=0\), and \((x_1^0,\ldots ,x_N^0)\) such that \(\sum _{i=1}^NA_ix_i^0=b\). Assume that the target tolerance \(\epsilon \) satisfies

$$\begin{aligned} 0<\epsilon < \min \left\{ \frac{1}{L}, \frac{1}{6\bar{\tau }}\right\} , \text{ where } \bar{\tau }= \frac{1}{2}\min _{i=1,\ldots ,N}\{\sigma _{\min }(H_i)\}. \end{aligned}$$
(4.3)

Then in no more than \(O(1/\epsilon ^4)\) iterations we will reach an iterate \((x_1^{{\hat{K}}+1}, \ldots , x_{N}^{{\hat{K}}+1}, y^{{\hat{K}}+1})\) that is an \(\epsilon \)-stationary solution for (4.2) with Lagrange multiplier \(\lambda ^{{\hat{K}}+1}\). Moreover, \((x_1^{{\hat{K}}+1}, \ldots , x_{N}^{{\hat{K}}+1})\) is an \(\epsilon \)-stationary solution for (4.1) with Lagrange multiplier \(\lambda ^{\hat{K}+1}\).

Proof

Denote the penalty parameter as \(\beta (\epsilon )\). The augmented Lagrangian function of (4.2) is given by

$$\begin{aligned} \begin{array}{lll} &{}&{}\mathcal {L}_{\beta (\epsilon )}(x_1,\ldots ,x_N,y,\lambda )\\ &{}&{}\quad := f(x_1,\ldots ,x_N)+\sum _{i=1}^Nr_i(x_i)+\frac{\mu (\epsilon )}{2}\Vert y\Vert ^2-\langle \lambda ,\sum \limits _{i=1}^NA_ix_i+y-b\rangle \\ &{}&{}\qquad +\, \frac{\beta (\epsilon )}{2}\Vert \sum \limits _{i=1}^NA_ix_i+y-b\Vert ^2. \end{array} \end{aligned}$$

Now we set

$$\begin{aligned} \mu (\epsilon )=1/\epsilon , \text{ and } \beta (\epsilon )=3/\epsilon . \end{aligned}$$
(4.4)

From (4.3) we have \(\mu (\epsilon ) > L\). This implies that the gradient of \(f(x_1,x_2,\ldots , x_N) + \frac{\mu (\epsilon )}{2} \Vert y \Vert ^2\), the smooth part of the objective in (4.2), is Lipschitz continuous with constant \(\mu (\epsilon )\). Then from the optimality conditions of Step 2 of Algorithm 3, we have \(\mu (\epsilon ) y^{k-1} - \lambda ^k - \mu (\epsilon ) (y^{k-1}-y^k) = 0\), which further implies that \(\mu (\epsilon )y^k=\lambda ^k, \forall k\ge 1\).

Similar to Lemma 3.16, we can prove that \(\mathcal {L}_{\beta (\epsilon )}(x_1^k,\ldots ,x_N^k,y^k,\lambda ^k)\) monotonically decreases. Specifically, since \(\mu (\epsilon )y^k=\lambda ^k\), combining (3.28), (3.29) and the equality in (3.30) yields,

$$\begin{aligned}&\mathcal {L}_{\beta (\epsilon )}(x_1^{k+1}, \ldots , x_N^{k+1}, y^{k+1}, \lambda ^{k+1}) - \mathcal {L}_{\beta (\epsilon )}(x_1^k, \ldots , x_N^k, y^k,\lambda ^k) \nonumber \\&\quad \le -\frac{1}{2}\sum _{i=1}^N\Vert x_i^k-x_i^{k+1}\Vert _{H_i}^2 - \left( \frac{\mu (\epsilon )}{2}-\frac{\mu (\epsilon )^2}{\beta (\epsilon )}\right) \Vert y^k-y^{k+1}\Vert ^2 < 0, \end{aligned}$$
(4.5)

where the last inequality is due to (4.4).

Similar to Lemma 3.17, we can prove that \(\mathcal {L}_{\beta (\epsilon )}(x_1^k,\ldots ,x_N^k,y^k,\lambda ^k)\) is bounded from below, i.e., there exists a constant \(\mathcal {L}^*=f^* + \sum _{i=1}^N r_i^*\) such that

$$\begin{aligned} \mathcal {L}_{\beta (\epsilon )}(x_1^k,\ldots ,x_N^k,y^k,\lambda ^k) \ge \mathcal {L}^*, \quad \text{ for } \text{ all } k. \end{aligned}$$

In fact, the following inequalities establish the above bound:

$$\begin{aligned}&\mathcal {L}_{\beta (\epsilon )}(x_1^k,\ldots ,x_N^k,y^k,\lambda ^k) \nonumber \\&\quad = f(x_1^k,\ldots ,x_N^k) + \sum _{i=1}^N r_i(x_i^k) + \frac{\mu (\epsilon )}{2}\Vert y^k\Vert ^2-\left\langle \lambda ^k, \sum _{i=1}^NA_ix_i^k+y^k-b\right\rangle \nonumber \\&\qquad + \frac{\beta (\epsilon )}{2}\left\| \sum _{i=1}^NA_ix_i^k+y^k-b\right\| ^2 \nonumber \\&\quad = f(x_1^k,\ldots ,x_N^k) + \sum _{i=1}^N r_i(x_i^k) + \frac{\mu (\epsilon )}{2}\Vert y^k\Vert ^2-\left\langle \mu (\epsilon ) y^k, \sum _{i=1}^NA_ix_i^k+y^k-b\right\rangle \nonumber \\&\qquad + \frac{\beta (\epsilon )}{2}\left\| \sum _{i=1}^NA_ix_i^k+y^k-b\right\| ^2 \nonumber \\&\quad \ge \mathcal {L}^* + \mu (\epsilon )\left[ \frac{1}{2}\left\| \sum _{i=1}^NA_ix_i^k-b\right\| ^2 + \left( \frac{\beta (\epsilon )-\mu (\epsilon )}{2\mu (\epsilon )}\right) \left\| \sum _{i=1}^NA_ix_i^k+y^k-b\right\| ^2\right] \ge \mathcal {L}^*, \nonumber \\ \end{aligned}$$
(4.6)

where the second equality is from \(\mu (\epsilon )y^k=\lambda ^k\), and the last inequality is due to (4.4). Moreover, denote \(\mathcal {L}^0\equiv \mathcal {L}_{\beta (\epsilon )}(x_1^0,\ldots ,x_N^0,y^0,\lambda ^0)\), which is a constant independent of \(\epsilon \).

Furthermore, for any integer \(K\ge 1\), summing (4.5) over \(k=0,\ldots ,K\) yields

$$\begin{aligned} \mathcal {L}_{\beta (\epsilon )}(x_1^{K+1}, \ldots , x_N^{K+1}, y^{K+1}, \lambda ^{K+1}) - \mathcal {L}^0 \le - \bar{\tau }\sum _{k=0}^K\theta _k, \end{aligned}$$
(4.7)

where \(\theta _k:=\sum _{i=1}^N\Vert x_i^k-x_i^{k+1}\Vert ^2+ \Vert y^k-y^{k+1}\Vert ^2\). Note that (4.7) and (4.6) imply that

$$\begin{aligned} \min _{0\le k\le K}\theta _k \le \frac{1}{\bar{\tau } K}\left( \mathcal {L}^0-\mathcal {L}^*\right) . \end{aligned}$$
(4.8)

Similar to (3.24), it can be shown that for \(i=1,\ldots ,N\),

$$\begin{aligned} \begin{array}{ll} &{}\mathop \mathrm{dist}\left( -\nabla _i f(x_1^{k+1},\ldots , x^{k+1}_N) + A_i^\top {\lambda }^{k+1}, \partial r_i(x_i^{k+1})\right) \\ &{}\quad \le \left( L + \beta (\epsilon )\sqrt{N}\max _{1\le i\le N} \Vert A_i \Vert _2^2 + \max _{1\le i\le N} \Vert H_i\Vert _2 \right) \sqrt{\theta _k}. \end{array} \end{aligned}$$
(4.9)

Set \(K= \lceil 1/\epsilon ^4 \rceil \) and denote \(\hat{K} = \mathop \mathrm{argmin}_{0\le k\le K}\theta _k\). Then we know \(\theta _{\hat{K}} = O(\epsilon ^4)\). As a result,

$$\begin{aligned}&\left\| \sum _{i=1}^N A_ix_i^{\hat{K}+1}+y^{\hat{K}+1}-b\right\| ^2 \nonumber \\&\qquad = \frac{1}{\beta (\epsilon )^2}\Vert \lambda ^{\hat{K}+1}-\lambda ^{\hat{K}}\Vert ^2=\frac{\mu (\epsilon )^2}{\beta (\epsilon )^2}\Vert y^{\hat{K}+1}-y^{\hat{K}}\Vert ^2\le \frac{1}{9}\theta _{\hat{K}}=O(\epsilon ^4).\qquad \qquad \end{aligned}$$
(4.10)

Note that (4.6) also implies that \(f(x_1^k,\ldots ,x_N^k)+\sum _{i=1}^Nr_i(x_i^k)\) is upper-bounded by a constant. Thus, from the assumption that the level set of the objective is bounded, we know \((x_1^k,\ldots ,x_N^k)\) is bounded. Then Assumption 4.1 implies that \(\lambda ^k\) is bounded (otherwise the distance in Assumption 4.1 would diverge, contradicting (4.9) and the boundedness of \(\theta _k\)), which, since \(\lambda ^k = \mu (\epsilon ) y^k\) and \(\mu (\epsilon )=1/\epsilon \), results in \(\Vert y^k\Vert = O(\epsilon )\). Therefore, from (4.10) we have

$$\begin{aligned} \left\| \sum _{i=1}^N A_ix_i^{\hat{K}+1}-b\right\| \le \left\| \sum _{i=1}^N A_ix_i^{\hat{K}+1}+y^{\hat{K}+1}-b\right\| + \left\| y^{\hat{K}+1}\right\| = O(\epsilon ), \end{aligned}$$

which, combined with (4.9), yields that \((x_1^{\hat{K}+1}, \ldots , x_N^{\hat{K}+1})\) is an \(\epsilon \)-stationary solution for (4.1) with Lagrange multiplier \(\lambda ^{\hat{K}+1}\), according to Definition 3.6.\(\square \)

Remark 4.3

Without Assumption 4.1, we can still derive an iteration complexity bound for proximal ADMM-m, but the bound is worse than \(O(1/\epsilon ^4)\). To see this, note that because \(\mathcal {L}_{\beta (\epsilon )}(x_1^k,\ldots ,x_N^k,y^k,\lambda ^k)\) monotonically decreases, the first inequality in (4.6) implies that

$$\begin{aligned} \mu (\epsilon )\frac{1}{2}\left\| \sum _{i=1}^NA_ix_i^k-b\right\| ^2 \le \mathcal {L}^0 - \mathcal {L}^*, \forall k. \end{aligned}$$
(4.11)

Therefore, by setting \(K=1/\epsilon ^6\), \(\mu (\epsilon )=1/\epsilon ^2\) and \(\beta (\epsilon )=3/\epsilon ^2\) instead of (4.4), and combining (4.9) and (4.11), we conclude that \((x_1^{\hat{K}+1}, \ldots , x_N^{\hat{K}+1})\) is an \(\epsilon \)-stationary solution for (4.1) with Lagrange multiplier \(\lambda ^{\hat{K}+1}\), according to Definition 3.6.

4.2 Proximal BCD (block coordinate descent)

In this section, we apply a proximal block coordinate descent method to solve the following variant of (1.1) and present its iteration complexity:

$$\begin{aligned} \begin{array}{ll} \min &{} F(x_1,x_2,\ldots ,x_N):= f(x_1,x_2,\ldots , x_N)+\sum \limits _{i=1}^{N} r_i(x_i) \\ s.t. &{} \ x_i\in \mathcal {X}_i, \ i=1,\ldots ,N,\end{array} \end{aligned}$$
(4.12)

where f is differentiable, \(r_i\) is nonsmooth, and \(\mathcal {X}_i\subset \mathbb {R}^{n_i}\) is a closed convex set for \(i=1,2,\ldots ,N\). Note that f and \(r_i\) can be nonconvex functions. Our proximal BCD method for solving (4.12) is described in Algorithm 4.

[Algorithm 4: proximal BCD (pseudocode figure)]
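For illustration, the following is a minimal sketch of one cyclic pass of a convenient instance of proximal BCD in which f is linearized at the current point, \(r_i=\alpha \Vert \cdot \Vert _1\), \(\mathcal {X}_i=\mathbb {R}^{n_i}\) and \(H_i=\delta _i I\); every block update is then a single soft-thresholding step. This is one illustrative special case rather than Algorithm 4 in full generality, and all names are ours.

```python
import numpy as np

def proximal_bcd_pass(x_blocks, grad_f_block, delta, alpha):
    """One cyclic pass: for each block i, take a gradient step on the (linearized)
    smooth part and apply the prox of (alpha/delta_i)*||.||_1."""
    x = [xi.copy() for xi in x_blocks]
    for i in range(len(x)):
        g = grad_f_block(x, i)           # gradient of f w.r.t. block i at the current x
        v = x[i] - g / delta[i]
        x[i] = np.sign(v) * np.maximum(np.abs(v) - alpha / delta[i], 0.0)
    return x
```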

Similar to the settings in Table 1, depending on the properties of \(r_i\) and \(\mathcal {X}_i\), the \(\epsilon \)-stationary solution for (4.12) is defined as follows.

Definition 4.4

\((x_1^*,\ldots ,x_N^*)\) is called an \(\epsilon \)-stationary solution for (4.12), if

  1. (i)

    \(r_i\) is Lipschitz continuous, \(\mathcal {X}_i\) is convex and compact, and for any \(x_i\in \mathcal {X}_i\), \(i=1,\ldots ,N\), it holds that (where \(g_i\in {\partial } r_i(x_i^*)\) denotes a generalized subgradient of \(r_i\) at \(x_i^*\))

    $$\begin{aligned} \left( x_i-x_i^*\right) ^\top \left[ \nabla _i f(x_1^*,\ldots , x^*_N) + g_i \right] \ge - \epsilon ; \end{aligned}$$
  2. (ii)

    or, if \(r_i\) is lower semi-continuous and \(\mathcal {X}_i = \mathbb {R}^{n_i}\) for \(i=1,\ldots ,N\), it holds that

    $$\begin{aligned} \mathop \mathrm{dist}\left( -\nabla _i f(x_1^*,\ldots , x^*_N), {\partial } r_i(x_i^*)\right) \le \epsilon . \end{aligned}$$

We now show that the iteration complexity of Algorithm 4 can be obtained from that of proximal ADMM-g. By introducing an auxiliary variable \(x_{N+1}\) and an arbitrary vector \(b\in \mathbb {R}^m\), problem (4.12) can be equivalently rewritten as

$$\begin{aligned} \begin{array}{ll} \min &{} f(x_1,x_2,\ldots , x_N) + \sum \limits _{i=1}^{N} r_i(x_i) \\ s.t. &{} x_{N+1} = b, \ x_i\in \mathcal {X}_i, \ i=1,\ldots ,N. \end{array} \end{aligned}$$
(4.14)

It is easy to see that applying proximal ADMM-g to solve (4.14) (with \(x_{N+1}\) being the last block variable) reduces exactly to Algorithm 4. Hence, we have the following iteration complexity result of Algorithm 4 for obtaining an \(\epsilon \)-stationary solution of (4.12).

Theorem 4.5

Suppose the sequence \(\{(x_1^k,\ldots ,x_N^k)\}\) is generated by proximal BCD (Algorithm 4). Denote

$$\begin{aligned} \kappa _5 := ( L + \max \limits _{1\le i\le N} \Vert H_i\Vert _2 )^2,\,\, \kappa _6 := \max \limits _{1\le i\le N} (\mathrm {diam}(\mathcal {X}_i) )^2. \end{aligned}$$

Letting

$$\begin{aligned} K := \left\{ \begin{array}{ll} \left\lceil \frac{\kappa _5\cdot \kappa _6}{\tau \,\epsilon ^2}(\Psi _G (x_1^1, \ldots , x_N^1, \lambda ^1, x_N^0) - \sum _{i=1}^{N}r_i^* - f^*) \right\rceil &{}\text{ for } \text{ Setting } \text{1 } \\ \left\lceil \frac{ \kappa _5}{\tau \,\epsilon ^2}(\Psi _G (x_1^1, \ldots , x_N^1, \lambda ^1, x_N^0) - \sum _{i=1}^{N}r_i^* - f^*) \right\rceil &{}\text{ for } \text{ Setting } \text{2 }\\ \end{array}\right. \end{aligned}$$

with \(\tau \) being defined in (3.18), and \(\hat{K} := \mathop \mathrm{argmin}\nolimits _{1\le k \le K} \sum _{i=1}^N \Vert x_i^k - x_i^{k+1} \Vert ^2 \), we have that \((x_1^{\hat{K}}, \ldots , x_N^{\hat{K}})\) is an \(\epsilon \)-stationary solution for problem (4.12).

Proof

Note that \(A_1 = \cdots = A_N = 0\) and \(A_{N + 1} = I\) in problem (4.14). By applying proximal ADMM-g with \(\beta > \max \left\{ 18L, \; \max \nolimits _{1\le i\le N}\left\{ \frac{6L^2}{ \sigma _{\min }(H_i) }\right\} \right\} \), Theorem 3.12 holds. In particular, (3.24) and (3.25) are valid in different settings with \(\beta \sqrt{N} \max \nolimits _{i+1 \le j \le N+1} \left[ \Vert A_j\Vert _2\right] \Vert A_i\Vert _2 = 0\) for \(i = 1,\ldots , N\), which leads to the choices of \(\kappa _5\) and \(\kappa _6\) in the above. Moreover, we do not need to consider the optimality with respect to \(x_{N+1}\) and the violation of the affine constraints, thus \(\kappa _1\) and \(\kappa _2\) in Theorem 3.12 are excluded in the expression of K, and the conclusion follows.\(\square \)

5 Numerical experiments

5.1 Robust tensor PCA model

We consider the following nonconvex and nonsmooth model of robust tensor PCA with \(\ell _1\) norm regularization for a third-order tensor of dimension \(I_1 \times I_2 \times I_3\). Given an initial estimate R of the CP-rank, we aim to solve the following problem:

$$\begin{aligned} \begin{array}{cl} \min _{A,B,C,\mathcal {Z}, \mathcal {E}, \mathcal {B}}&{} \Vert \mathcal {Z}-\llbracket A,B,C \rrbracket \Vert ^{2}_F+\alpha \,\Vert \mathcal {E}\Vert _1 + \alpha _{\mathcal {N}}\Vert \mathcal {B}\Vert _F^2\\ \text{ s.t. } &{} \mathcal {Z}+ \mathcal {E}+ \mathcal {B}= \mathcal {T}, \end{array} \end{aligned}$$
(5.1)

where \(A \in \mathbb {R}^{I_1\times R}\), \(B \in \mathbb {R}^{I_2\times R}\), \(C \in \mathbb {R}^{I_3\times R}\). The augmented Lagrangian function of (5.1) is given by

$$\begin{aligned}&\mathcal {L}_\beta (A,B,C,\mathcal {Z}, \mathcal {E}, \mathcal {B}, \Lambda ) \\&\quad = \Vert \mathcal {Z}-\llbracket A,B,C \rrbracket \Vert ^{2}_F+\alpha \,\Vert \mathcal {E}\Vert _1 + \alpha _{\mathcal {N}}\Vert \mathcal {B}\Vert ^2_F - \langle \Lambda , \mathcal {Z}+ \mathcal {E}+ \mathcal {B}- \mathcal {T} \rangle \\&\qquad + \frac{\beta }{2} \Vert \mathcal {Z}+ \mathcal {E}+ \mathcal {B}- \mathcal {T} \Vert ^2_F. \end{aligned}$$

The following identities are useful for our presentation later:

$$\begin{aligned} \Vert \mathcal {Z}-\llbracket A,B,C \rrbracket \Vert ^{2}_F= & {} \Vert {Z}_{(1)}-A (C \odot B)^{\top } \Vert ^{2}_F \\= & {} \Vert {Z}_{(2)}-B (C \odot A)^{\top } \Vert ^{2}_F\\= & {} \Vert {Z}_{(3)}-C (B \odot A)^{\top } \Vert ^{2}_F, \end{aligned}$$

where \({Z}_{(i)}\) stands for the mode-i unfolding of tensor \(\mathcal {Z}\) and \(\odot \) stands for the Khatri-Rao product of matrices.
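The identities above can be checked numerically. The sketch below forms the mode-1 unfolding with the convention \({Z}_{(1)}[i_1,\, i_2 + i_3 I_2] = \mathcal {Z}[i_1,i_2,i_3]\) and the Khatri-Rao product; the helper names and the index convention used for the check are ours.

```python
import numpy as np

def khatri_rao(C, B):
    # columnwise Kronecker product: column r equals kron(C[:, r], B[:, r])
    I3, R = C.shape
    I2, _ = B.shape
    return (C[:, None, :] * B[None, :, :]).reshape(I3 * I2, R)

def check_mode1_identity(I1=4, I2=5, I3=6, R=3, seed=0):
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((I1, R))
    B = rng.standard_normal((I2, R))
    C = rng.standard_normal((I3, R))
    Z = np.einsum('ir,jr,kr->ijk', A, B, C)         # Z = [[A, B, C]]
    Z1 = Z.reshape(I1, I2 * I3, order='F')          # mode-1 unfolding Z_(1)
    return np.allclose(Z1, A @ khatri_rao(C, B).T)  # identity Z_(1) = A (C kr B)^T
```

In particular, \(\Vert \mathcal {Z}-\llbracket A,B,C \rrbracket \Vert _F\) can be evaluated on the unfolded matrices.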

Note that there are six block variables in (5.1), and we choose \(\mathcal {B}\) as the last block variable. A typical iteration of proximal ADMM-g for solving (5.1) can be described as follows (we choose \(H_i=\delta _i I\), with \(\delta _i>0\), \(i=1,\ldots ,5\)):

$$\begin{aligned}\left\{ \begin{array}{lll} A^{k+1} &{}=&{} \left( ({Z})^k_{(1)} (C^k \odot B^k) + \frac{\delta _1}{2}A^{k} \right) \left( ((C^k)^{\top }C^k)\circ ((B^k)^{\top }B^k) + \frac{\delta _1}{2} I_{R\times R} \right) ^{-1}\\ B^{k+1} &{}=&{} \left( ({Z})^k_{(2)} (C^k \odot A^{k+1}) + \frac{\delta _2}{2}B^{k} \right) \left( ((C^k)^{\top }C^k)\circ ((A^{k+1})^{\top }A^{k+1}) + \frac{\delta _2}{2} I_{R\times R} \right) ^{-1}\\ C^{k+1} &{}=&{} \left( ({Z})^k_{(3)} ( B^{k+1} \odot A^{k+1}) + \frac{\delta _3}{2}C^{k} \right) \left( ((B^{k+1})^{\top }B^{k+1}) \circ ((A^{k+1})^{\top }A^{k+1})+ \frac{\delta _3}{2} I_{R\times R} \right) ^{-1}\\ {E}_{(1)}^{k+1} &{} = &{} \mathcal {S}\left( \frac{\beta }{\beta + \delta _4}(T_{(1)} + \frac{1}{\beta }\Lambda ^{k}_{(1)}-{B}_{(1)}^{k} -{Z}_{(1)}^{k})+ \frac{\delta _4}{\beta + \delta _4}{E}_{(1)}^{k},\frac{\alpha }{\beta + \delta _4}\right) \\ {Z}_{(1)}^{k+1} &{}= &{} \frac{1}{2+ 2 \delta _5 + \beta } \left( 2 A^{k+1}(C^{k+1} \odot B^{k+1})^{\top } + 2\delta _5\,({Z}_{(1)})^k + \Lambda ^k_{(1)} - \beta ({E}_{(1)}^{k+1}+{B}_{(1)}^k - T_{(1)}) \right) \\ {B}_{(1)}^{k+1} &{}=&{} {B}_{(1)}^{k} - \gamma \left( 2 \alpha _{\mathcal {N}} {B}_{(1)}^{k} - \Lambda ^k_{(1)} + \beta ({E}_{(1)}^{k+1} + {Z}_{(1)}^{k+1} + {B}_{(1)}^{k} - T_{(1)} )\right) \\ \Lambda ^{k+1}_{(1)} &{}=&{} \Lambda ^k_{(1)} - \beta \left( {Z}_{(1)}^{k+1} + E_{(1)}^{k+1} + B_{(1)}^{k+1} - {T}_{(1)}\right) \end{array}\right. \end{aligned}$$

where \(\circ \) is the matrix Hadamard product and \(\mathcal {S}\) stands for the soft shrinkage operator. The updates in proximal ADMM-m are almost the same as those in proximal ADMM-g, except that \({B}_{(1)}\) is updated as

$$\begin{aligned} {B}_{(1)}^{k+1} = \frac{1}{L + \beta }\left( (L - 2 \alpha _{\mathcal {N}}){B}_{(1)}^{k} + \Lambda ^k_{(1)} - \beta ({E}_{(1)}^{k+1} + {Z}_{(1)}^{k+1} - T_{(1)} )\right) . \end{aligned}$$
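For concreteness, the soft shrinkage operator \(\mathcal {S}\) and, as an example, the \(E_{(1)}\)-update displayed above can be written as follows (a direct transcription of the formulas with our own variable names):

```python
import numpy as np

def soft_shrink(X, tau):
    # soft shrinkage S(X, tau) = sign(X) * max(|X| - tau, 0), applied entrywise
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def update_E1(T1, Lam1, B1, Z1, E1, beta, delta4, alpha):
    # E_(1)^{k+1} in the proximal ADMM-g iteration displayed above
    w = beta / (beta + delta4)
    point = w * (T1 + Lam1 / beta - B1 - Z1) + (1.0 - w) * E1
    return soft_shrink(point, alpha / (beta + delta4))
```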

On the other hand, note that (5.1) can be equivalently written as

$$\begin{aligned} \min _{A,B,C,\mathcal {Z}, \mathcal {E}} \ \Vert \mathcal {Z}-\llbracket A,B,C \rrbracket \Vert ^{2}_F+\alpha \,\Vert \mathcal {E}\Vert _1 + \alpha _{\mathcal {N}}\Vert \mathcal {Z}+ \mathcal {E}- \mathcal {T} \Vert _F^2, \end{aligned}$$
(5.2)

which can be solved by the classical BCD method as well as our proximal BCD (Algorithm 4). In addition, we can apply GCG (Algorithm 1) to solve a variant of (5.1). Note that GCG requires a compact constraint set and thus it does not apply to (5.1) directly. As a result, we consider the following variant of (5.1), where the new quadratic regularization terms in the objective are added to help construct the compact constraint sets.

$$\begin{aligned} \begin{array}{cl} \min &{} \Vert \mathcal {Z}-\llbracket A,B,C \rrbracket \Vert ^{2}_F +\alpha \,\Vert \mathcal {E}\Vert _1 + \alpha _{\mathcal {N}}\Vert \mathcal {Z}+ \mathcal {E}- \mathcal {T} \Vert _F^2 + \frac{\alpha _A}{2}\Vert A\Vert _F^2 + \frac{\alpha _B}{2}\Vert B\Vert _F^2\\ &{}\quad + \frac{\alpha _C}{2}\Vert C\Vert _F^2 + \frac{\alpha _{Z}}{2}\Vert \mathcal {Z}\Vert _F^2 \\ \text{ s.t. } &{} \Vert A\Vert _F \le \rho _1, \Vert B\Vert _F \le \rho _2, \Vert C\Vert _F \le \rho _3, \Vert \mathcal {Z}\Vert _F \le \rho _4, \Vert \mathcal {E}\Vert _{1} \le \rho _5. \end{array} \end{aligned}$$
(5.3)

The new parameter \(\rho _1\) can be identified by the following observation:

$$\begin{aligned} \frac{\alpha _A}{2} \Vert A^*\Vert _F^2 \le f(A^*, B^*, C^*, \mathcal {Z}^*, \mathcal {E}^*) \le f(0) = \alpha _{\mathcal {N}} \Vert \mathcal {T}\Vert _F^2, \end{aligned}$$

which implies that we can take \(\rho _1 = \sqrt{\frac{2\alpha _{\mathcal {N}}}{\alpha _A}} \Vert \mathcal {T}\Vert _F\). The other parameters \(\rho _2,\ldots ,\rho _5\) can be computed in the same manner.
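For example, the radii can be computed as in the following sketch; the bound for \(\rho _5\) uses the \(\alpha \Vert \mathcal {E}\Vert _1\) term in the same way, and all names are ours.

```python
import numpy as np

def feasible_radii(T, alpha_A, alpha_B, alpha_C, alpha_Z, alpha_N, alpha):
    """rho_1,...,rho_4 from (alpha_*/2)*||.||_F^2 <= f(0) = alpha_N*||T||_F^2,
    and rho_5 from alpha*||E||_1 <= f(0)."""
    tF2 = np.sum(np.asarray(T) ** 2)                 # ||T||_F^2
    rho = lambda a: np.sqrt(2.0 * alpha_N * tF2 / a)
    return rho(alpha_A), rho(alpha_B), rho(alpha_C), rho(alpha_Z), alpha_N * tF2 / alpha
```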

In the following we compare the numerical performance of GCG, BCD, proximal BCD, proximal ADMM-g and proximal ADMM-m for solving (5.1). We let \(\alpha = 2/ \max \{\sqrt{I_1}, \sqrt{I_2}, \sqrt{I_3} \}\) and \(\alpha _{\mathcal {N}}=1\) in model (5.1). We apply proximal ADMM-g and proximal ADMM-m to solve (5.1), apply BCD and proximal BCD to solve (5.2), and apply GCG to solve (5.3) with \(\alpha _A = \alpha _B = \alpha _C = 10\) and \(\alpha _{Z}=1\). In all the algorithms we set the maximum iteration number to be 2000, and the algorithms are terminated either when the maximum iteration number is reached or when \(\theta _k\) as defined in (3.20) is less than \(10^{-6}\). The parameters used in the two ADMM variants are specified in Table 2.

Table 2 Choices of parameters in the two ADMM variants

In the experiments, we randomly generate 20 instances for each fixed tensor dimension and CP-rank. The low-rank part \(\mathcal {Z}^0\), of rank \(R_{ CP}\), is generated by

$$\begin{aligned} \mathcal {Z}^0 = \sum \limits _{r=1}^{R_{ CP}}a^{1,r}\otimes a^{2,r} \otimes a^{3,r}, \end{aligned}$$

where the vectors \(a^{i,r}\) are drawn from the standard Gaussian distribution for \(i=1,2,3\), \(r = 1,\ldots ,R_{ CP}\). Moreover, a sparse tensor \(\mathcal {E}^0\) is generated with cardinality \(0.001\cdot I_1 I_2 I_3\), with each nonzero component drawn from the standard Gaussian distribution. Finally, we generate the noise \(\mathcal {B}^0 = 0.001\, \hat{\mathcal {B}}\), where \(\hat{\mathcal {B}}\) is a Gaussian tensor. Then we set \(\mathcal {T} = \mathcal {Z}^0 + \mathcal {E}^0 +\mathcal {B}^0\) as the observed data in (5.1). A proper initial guess R of the true rank \(R_{ CP}\) is essential for the success of our algorithms. We can borrow the strategy used in matrix completion [54]: start from a large R (\(R \ge R_{ CP}\)) and decrease it aggressively once a dramatic change in the recovered tensor \(\mathcal {Z}\) is observed. We report the average performance over 20 instances of the algorithms with initial guess \(R = R_{ CP}\), \(R = R_{ CP} + 1\) and \(R = R_{ CP} + \lceil 0.2*R_{ CP} \rceil \) in Tables 3, 4 and 5, respectively.
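The instance generation described above can be sketched as follows (our own code, not the authors'; the seed and helper names are illustrative).

```python
import numpy as np

def generate_instance(I1, I2, I3, R_cp, seed=0):
    """One synthetic instance T = Z0 + E0 + B0 as described above."""
    rng = np.random.default_rng(seed)
    # low-rank part: sum of R_cp rank-one terms with standard Gaussian factors
    a1 = rng.standard_normal((I1, R_cp))
    a2 = rng.standard_normal((I2, R_cp))
    a3 = rng.standard_normal((I3, R_cp))
    Z0 = np.einsum('ir,jr,kr->ijk', a1, a2, a3)
    # sparse part: about 0.001 * I1*I2*I3 nonzeros with standard Gaussian values
    E0 = np.zeros((I1, I2, I3))
    nnz = int(0.001 * I1 * I2 * I3)
    idx = rng.choice(I1 * I2 * I3, size=nnz, replace=False)
    E0.flat[idx] = rng.standard_normal(nnz)
    # small Gaussian noise
    B0 = 0.001 * rng.standard_normal((I1, I2, I3))
    return Z0 + E0 + B0, Z0, E0, B0
```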

Table 3 Numerical results for tensor robust PCA with initial guess \(R = R_{ CP}\)
Table 4 Numerical results for tensor robust PCA with initial guess \(R = R_{ CP} +1\)
Table 5 Numerical results for tensor robust PCA with initial guess \(R = R_{ CP} + \lceil 0.2*R_{ CP} \rceil \)
Table 6 Numerical results for sparse tensor PCA problem

In Tables 3, 4 and 5, “Err.” denotes the average relative error \(\frac{\Vert \mathcal {Z}^*-\mathcal {Z}^0 \Vert _F}{\Vert \mathcal {Z}^0 \Vert _F}\) of the low-rank tensor over 20 instances, where \(\mathcal {Z}^*\) is the solution returned by the corresponding algorithm; “Iter.” denotes the average number of iterations over 20 instances; “#” records the number of solutions (out of 20 instances) that have relative error less than 0.01.

Tables 3, 4 and 5 suggest that BCD mostly converges to a local solution rather than the global optimal solution, and that GCG easily gets stuck at a local solution within a few iterations for this particular problem, while the other three methods are much better at finding the global optimum.

It is interesting to note that the results presented in Table 5 are better than those in Tables 3 and 4, i.e., when a larger basis is allowed in the tensor factorization. Moreover, in this case, proximal BCD usually requires fewer iterations than the two ADMM variants.

5.2 Computing the leading sparse principal component of a tensor

In this subsection, we consider the problem (1.4) of finding the leading sparse principal component of a given tensor. To apply the GCG method (Algorithm 1), we adopt \(\Vert \cdot \Vert _1\) as the regularizer, and arrive at the following formulation

$$\begin{aligned} \begin{array}{ll} \min &{} -\mathcal {T} (x_1, x_2, \ldots , x_d) + \alpha \sum \limits _{i=1}^{d}\Vert x_i\Vert _1 \\ \text{ s.t. } &{} \Vert x_i\Vert _2 \le 1, i=1,2,\ldots ,d. \end{array} \end{aligned}$$
(5.4)

The subproblem in GCG is of the form \(\min _{\Vert y\Vert _2^2 \le 1} \{ -y^{\top }b + \rho \Vert y\Vert _1 \}\), which has the closed-form solution

$$\begin{aligned} y^* = \left\{ \begin{array}{ll} z/\Vert z\Vert _2, &{}\quad \text{ if }\; \Vert z\Vert _2 \ne 0 \\ 0, &{}\quad \text{ otherwise. } \end{array}\right. \end{aligned}$$

where \(z(j)= \mathrm{sign}(b(j))\max \{ |b(j)| - \rho , 0 \}\) for \(j=1,2,\ldots , n\).
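A direct transcription of this closed-form solution (variable names are ours):

```python
import numpy as np

def gcg_subproblem_solution(b, rho):
    # solves min_{||y||_2 <= 1} -y^T b + rho * ||y||_1 by soft-thresholding b and normalizing
    z = np.sign(b) * np.maximum(np.abs(b) - rho, 0.0)
    nz = np.linalg.norm(z)
    return z / nz if nz > 0 else z
```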

One undesirable property of the formulation (5.4) is that we may possibly get a zero solution, i.e. \(x_i=0\) for some i, which leads to \(\mathcal {T} (x_1, x_2, \ldots , x_d)=0\). To prevent this from happening, we also apply the BCD method and proximal BCD method to the following equality constrained problem:

$$\begin{aligned} \begin{array}{ll} \min &{} -\mathcal {T} (x_1, x_2, \ldots , x_d) + \alpha \sum \limits _{i=1}^{d}\Vert x_i\Vert _1 \\ \text{ s.t. } &{} \Vert x_i\Vert _2 = 1, i=1,2,\ldots ,d, \end{array} \end{aligned}$$
(5.5)

and compare the results with those returned by our proposed algorithms in Table 6.

In the tests, we let \(\alpha =0.85\), and set the maximum iteration number to be 2000. For each fixed dimension, we randomly generate 10 instances of fourth-order tensors, and the corresponding problems are solved by the three methods starting from the same initial point. In Table 6, ‘Val.’ refers to the value \(\mathcal {T} (x_1, x_2, \ldots , x_d)\). From this table, we see that GCG is capable of finding a nonzero local optimum within a few hundred steps in most cases, with reasonable sparsity. The three approaches in Table 6 are comparable to each other in terms of the value \(\mathcal {T} (x_1, x_2, \ldots , x_d)\), but BCD reaches the maximum of 2000 iterations in quite a few instances, while GCG finds the best local optimum in a few instances (e.g. instances 6 and 9 for \(n=8\), and instances 5 and 8 for \(n=20\)).