Abstract
Nonconvex and nonsmooth optimization problems are frequently encountered in much of statistics, business, science and engineering, but they are not yet widely recognized as a technology in the sense of scalability. A reason for this relatively low degree of popularity is the lack of a well-developed system of theory and algorithms to support the applications, as is the case for their convex counterparts. This paper aims to take one step in the direction of disciplined nonconvex and nonsmooth optimization. In particular, we consider in this paper some constrained nonconvex optimization models in block decision variables, with or without coupled affine constraints. In the absence of coupled constraints, we show a sublinear rate of convergence to an \(\epsilon \)-stationary solution in the form of a variational inequality for a generalized conditional gradient method, where the convergence rate depends on the Hölderian continuity of the gradient of the smooth part of the objective. For the model with coupled affine constraints, we introduce corresponding \(\epsilon \)-stationarity conditions, and apply two proximal-type variants of the ADMM to solve such a model, assuming the proximal ADMM updates can be implemented for all the block variables except for the last block, for which either a gradient step or a majorization–minimization step is implemented. We show an iteration complexity bound of \(O(1/\epsilon ^2)\) to reach an \(\epsilon \)-stationary solution for both algorithms. Moreover, we show that the same iteration complexity bound for a proximal BCD method follows immediately. Numerical results are provided to illustrate the efficacy of the proposed algorithms for tensor robust PCA and tensor sparse PCA problems.
1 Introduction
In this paper, we consider the following nonconvex and nonsmooth optimization problem with multiple block variables:
where f is differentiable and possibly nonconvex, and each \(r_i\) is possibly nonsmooth and nonconvex, \(i=1,\ldots , N-1\); \(A_i\in \mathbb {R}^{m\times n_i}\), \(b\in \mathbb {R}^m\), \(x_i\in \mathbb {R}^{n_i}\); and \(\mathcal {X}_i\subseteq \mathbb {R}^{n_i}\) are convex sets, \(i=1,2,\ldots ,N-1\). One restriction of model (1.1) is that the objective function is required to be smooth with respect to the last block variable \(x_N\). However, in Sect. 4 we shall extend the result to cover the general case where \(r_N(x_N)\) may be present and \(x_N\) may be constrained as well. A special case of (1.1) arises when the affine constraints are absent and there is no block structure of the variables (i.e., \(x = x_1\) and other block variables do not show up in (1.1)), which leads to the following more compact form
where S is a convex and compact set. In this paper, we propose several first-order algorithms for computing an \(\epsilon \)-stationary solution (to be defined later) for (1.1) and (1.2), and analyze their iteration complexities. Throughout, we assume the following condition.
Assumption 1.1
The sets of the stationary solutions for (1.1) and (1.2) are non-empty.
Problem (1.1) arises from a variety of interesting applications. For example, one of the nonconvex models for matrix robust PCA can be cast as follows (see, e.g., [51]), which seeks to decompose a given matrix \(M\in \mathbb {R}^{m\times n}\) into a superposition of a low-rank matrix Z, a sparse matrix E and a noise matrix B:
where \(X\in \mathbb {R}^{m\times r}\), \(Y\in \mathbb {R}^{n\times r}\), with \(r < \min (m,n)\) being the estimated rank of Z; \(\eta >0\) is the noise level, \(\alpha >0\) is a weighting parameter; \(\mathcal {R}(E)\) is a regularization function that can improve the sparsity of E. One of the widely used regularization functions is the \(\ell _1\) norm, which is convex and nonsmooth. However, there are also many nonconvex regularization functions that are widely used in statistical learning and information theory, such as smoothly clipped absolute deviation (SCAD) [23], log-sum penalty (LSP) [15], minimax concave penalty (MCP) [58], and capped-\(\ell _1\) penalty [59, 60], and they are nonsmooth at point 0 if composed with the absolute value function, which is usually the case in statistical learning. Clearly (1.3) is in the form of (1.1). Another example of the form (1.1) is the following nonconvex tensor robust PCA model (see, e.g., [55]), which seeks to decompose a given tensor \(\mathcal {T}\in \mathbb {R}^{n_1\times n_2\times \cdots \times n_d}\) into a superposition of a low-rank tensor \(\mathcal {Z}\), a sparse tensor \(\mathcal {E}\) and a noise tensor \(\mathcal {B}\):
where \(\mathcal {C}\) is the core tensor that has a smaller size than \(\mathcal {Z}\), and \(X_i\) are matrices with appropriate sizes, \(i=1,\ldots ,d\). In fact, the “low-rank” tensor in the above model corresponds to a tensor with a small core; however, a recent work [35] demonstrates that the CP-rank of the core, regardless of its size, could be as large as that of the original tensor. Therefore, if one wants to find a low CP-rank decomposition, then the following model is preferred:
for \(X_i = [a^{i,1},a^{i,2},\ldots , a^{i,R}]\in \mathbb {R}^{n_i \times R}\), \(1\le i \le d\) and \(\llbracket X_1,X_2,\ldots , X_d\rrbracket := \sum \nolimits _{r=1}^{R}a^{1,r}\otimes a^{2,r}\otimes \cdots \otimes a^{d,r},\) where “\(\otimes \)” denotes the outer product of vectors, and R is an estimate of the CP-rank. In addition, the so-called sparse tensor PCA problem [1], which seeks the best sparse rank-one approximation for a given d-th order tensor \(\mathcal {T}\), can also be formulated in the form of (1.1):
where \(\mathcal {T}(x_1, x_2, \ldots , x_d) = \sum _{i_1,\ldots , i_d}\mathcal {T}_{i_1,\ldots , i_d}(x_1)_{i_1}\ldots (x_d)_{i_d}\).
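The two tensor constructions above, the CP bracket \(\llbracket X_1,\ldots ,X_d\rrbracket \) and the multilinear form \(\mathcal {T}(x_1,\ldots ,x_d)\), can be computed directly from their definitions. A minimal numerical sketch (the toy data below are illustrative only):

```python
import numpy as np

def cp_bracket(factors):
    """[[X_1, ..., X_d]] = sum_{r=1}^R a^{1,r} o a^{2,r} o ... o a^{d,r},
    where X_i has shape (n_i, R) and 'o' denotes the vector outer product."""
    R = factors[0].shape[1]
    T = np.zeros(tuple(X.shape[0] for X in factors))
    for r in range(R):
        term = factors[0][:, r]
        for X in factors[1:]:
            term = np.multiply.outer(term, X[:, r])
        T += term
    return T

def multilinear_form(T, vecs):
    """T(x_1, ..., x_d) = sum_{i_1,...,i_d} T_{i_1...i_d} (x_1)_{i_1} ... (x_d)_{i_d},
    computed by contracting the last remaining mode at each step."""
    out = T
    for x in reversed(vecs):
        out = out @ x
    return float(out)

# Rank-1 sanity check: the bracket of single-column factors is their outer product,
# and T(a, b, c) then equals (a.a)(b.b)(c.c).
a, b, c = np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])
T = cp_bracket([a[:, None], b[:, None], c[:, None]])
val = multilinear_form(T, [a, b, c])
```

The contraction order in `multilinear_form` (last mode first) is just one convenient choice; any order gives the same value since the form is linear in each argument.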
The convergence and iteration complexity for various nonconvex and nonsmooth optimization problems have recently attracted considerable research attention; see e.g. [3, 6, 7, 10, 11, 19, 20, 27, 28, 41, 46]. In this paper, we study several solution methods that use only the first-order information of the objective function, including a generalized conditional gradient method, variants of the alternating direction method of multipliers, and a proximal block coordinate descent method, for solving (1.1) and (1.2). Specifically, we apply a generalized conditional gradient (GCG) method to solve (1.2). We prove that the GCG can find an \(\epsilon \)-stationary solution for (1.2) in \(O(\epsilon ^{-q})\) iterations under certain mild conditions, where q is a parameter in the Hölder condition that characterizes the degree of smoothness of f. In other words, the convergence rate of the algorithm depends on the degree of “smoothness” of the objective function. It should be noted that a similar iteration bound that depends on the parameter q was reported for convex problems in [13]; for general nonconvex problems, [14] analyzed convergence but provided no iteration complexity result. Furthermore, we show that if f is concave, then GCG finds an \(\epsilon \)-stationary solution for (1.2) in \(O(1/\epsilon )\) iterations. For the affinely constrained problem (1.1), we propose two algorithms (called proximal ADMM-g and proximal ADMM-m in this paper), both of which can be viewed as variants of the alternating direction method of multipliers (ADMM). Recently, there has been an emerging research interest in the ADMM for nonconvex problems (see, e.g., [2, 32, 33, 38, 52, 53, 56]). However, the results in [38, 52, 53, 56] only show that the iterates produced by the ADMM converge to a stationary solution, without providing an iteration complexity analysis.
Moreover, the objective function is required to satisfy the so-called Kurdyka–Łojasiewicz (KL) property [8, 9, 36, 42] to enable those convergence results. In [33], Hong et al. analyzed the convergence of the ADMM for solving nonconvex consensus and sharing problems. Note that they also analyzed the iteration complexity of the ADMM for the consensus problem. However, they require the nonconvex part of the objective function to be smooth, and nonsmooth part to be convex. In contrast, \(r_i\) in our model (1.1) can be nonconvex and nonsmooth at the same time. Moreover, we allow general constraints \(x_i\in \mathcal {X}_i,i=1,\ldots ,N-1\), while the consensus problem in [33] only allows such constraint for one block variable. A very recent work of Hong [32] discussed the iteration complexity of an augmented Lagrangian method for finding an \(\epsilon \)-stationary solution for the following problem:
under the assumption that f is differentiable. We will compare our results with [32] in more detail in Sect. 3.
Before proceeding, let us first summarize:
Our contributions
-
(i)
We provide definitions of \(\epsilon \)-stationary solution for (1.1) and (1.2) using the variational inequalities. For (1.1), our definition of the \(\epsilon \)-stationary solution allows each \(r_i\) to be nonsmooth and nonconvex.
-
(ii)
We study a generalized conditional gradient method with a suitable line search rule for solving (1.2). We assume that the gradient of f satisfies a Hölder condition, and analyze its iteration complexity for obtaining an \(\epsilon \)-stationary solution for (1.2). After we released the first version of this paper, we noticed several recent works that study the iteration complexity of conditional gradient methods for nonconvex problems; however, our results are different. For example, the convergence rate given in [57] is worse than ours, and [43, 44] only consider smooth nonconvex problems with Lipschitz continuous gradients, whereas our results cover nonsmooth models.
-
(iii)
We study two ADMM variants (proximal ADMM-g and proximal ADMM-m) for solving (1.1), and analyze their iteration complexities for obtaining an \(\epsilon \)-stationary solution for the nonconvex problem (1.1). In addition, the setup and the assumptions of our model are different from those in other recent works. For instance, [38] considers a two-block nonconvex problem with an identity coefficient matrix for one block variable in the linear constraint, and requires the coerciveness of the objective or the boundedness of the domain. [53] assumes that the objective function is coercive over the feasible set and that the nonsmooth part is restricted prox-regular or piecewise linear. In contrast, our algorithms only assume that the gradient of the smooth part of the objective function is Lipschitz continuous and that the nonsmooth part does not involve the last block variable, which is weaker than the assumptions on the objective functions in [38, 53].
-
(iv)
As an extension, we also show how to use proximal ADMM-g and proximal ADMM-m to find an \(\epsilon \)-stationary solution for (1.1) without assuming any condition on \(A_N\).
-
(v)
When the affine constraints are absent in model (1.1), as a by-product, we demonstrate that the iteration complexity of the proximal block coordinate descent (BCD) method with cyclic order can be obtained directly from that of proximal ADMM-g and proximal ADMM-m. Although [11] gives an iteration complexity result for nonconvex BCD, it requires the KL property, and the complexity depends on a parameter in the KL condition, which is typically unknown.
Notation \(\Vert x\Vert _2\) denotes the Euclidean norm of vector x, and \(\Vert x\Vert _{H}^2\) denotes \( x^{\top }H x\) for some positive definite matrix H. For a set S and scalar \(p>1\), we denote \(\mathrm {diam}_p(S) := \max _{x,y\, \in S}\Vert x -y \Vert _p\), where \(\Vert x \Vert _p = (\sum _{i=1}^{n}|x_i|^p)^{1/p}\). Unless otherwise specified, we denote \(\Vert x\Vert =\Vert x\Vert _2\) and \(\mathrm {diam}(S)=\mathrm {diam}_2(S)\) for short. We use \(\mathop \mathrm{dist}(x,S)\) to denote the Euclidean distance of vector x to set S. Given a matrix A, its spectral norm and smallest singular value are denoted by \(\Vert A\Vert _2\) and \(\sigma _{\min }(A)\) respectively. We use \(\lceil a\rceil \) to denote the ceiling of a.
Organization The rest of this paper is organized as follows. In Sect. 2 we introduce the notion of \(\epsilon \)-stationary solution for (1.2) and apply a generalized conditional gradient method to solve (1.2) and analyze its iteration complexity for obtaining an \(\epsilon \)-stationary solution for (1.2). In Sect. 3 we give two definitions of \(\epsilon \)-stationarity for (1.1) under different settings and propose two ADMM variants that solve (1.1) and analyze their iteration complexities to reach an \(\epsilon \)-stationary solution for (1.1). In Sect. 4 we provide some extensions of the results in Sect. 3. In particular, we first show how to remove some of the conditions that we assume in Sect. 3, and then we apply a proximal BCD method to solve (1.1) without affine constraints and provide an iteration complexity analysis. In Sect. 5, we present numerical results to illustrate the practical efficiency of the proposed algorithms.
2 A generalized conditional gradient method
In this section, we study a GCG method for solving (1.2) and analyze its iteration complexity. The conditional gradient (CG) method, also known as the Frank-Wolfe method, was originally proposed in [24], and has regained a lot of popularity recently due to its capability of solving large-scale problems (see, e.g., [4, 5, 25, 30, 34, 37, 47]). However, these works focus on solving convex problems. Bredies et al. [14] proved the convergence of a generalized conditional gradient method for solving nonconvex problems in Hilbert space. In this section, by introducing a suitable line search rule, we provide an iteration complexity analysis for this algorithm.
We make the following assumption in this section regarding (1.2).
Assumption 2.1
In (1.2), r(x) is convex and nonsmooth, and the constraint set S is convex and compact. Moreover, f is differentiable and there exist some \(p>1\) and \(\rho >0\) such that
The above inequality (2.1) is also known as the Hölder condition and was used in other works on first-order algorithms (e.g., [21]). It can be shown that (2.1) holds for a variety of functions. For instance, (2.1) holds for any p when f is concave, and is valid for \(p=2\) when \(\nabla f\) is Lipschitz continuous.
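One standard way to state such a Hölder condition, consistent with the roles that \(p\) and \(\rho \) play in the line search and in Theorem 2.7 (stated here as an assumption, not a quotation of the original display), is the descent-type inequality

```latex
f(y) \;\le\; f(x) + \nabla f(x)^{\top}(y - x) + \frac{\rho}{p}\,\Vert y - x \Vert_p^p,
\qquad \forall\, x, y \in S.
```

For \(p=2\) and \(\rho = L\) this is the usual descent lemma for functions with \(L\)-Lipschitz gradients; for concave \(f\) the first two terms alone already dominate \(f(y)\), so the inequality holds for any \(p\), matching the remark above.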
2.1 An \(\epsilon \)-stationary solution for problem (1.2)
For the smooth unconstrained problem \( \min _x f(x)\), it is natural to define an \(\epsilon \)-stationary solution using the criterion \(\Vert \nabla f(x)\Vert _2\le \epsilon .\) Nesterov [48] and Cartis et al. [17] showed that gradient descent type methods with properly chosen step sizes need \(O({1}/{\epsilon ^2})\) iterations to find such a solution. Moreover, Cartis et al. [16] constructed an example showing that the \(O({1}/{\epsilon ^2})\) iteration complexity is tight for steepest descent type algorithms. However, the case of constrained nonsmooth nonconvex optimization is subtler. There exist some works on how to define \(\epsilon \)-optimality conditions for the local minimizers of various constrained nonconvex problems [18, 22, 28, 32, 49]. Cartis et al. [18] proposed an approximate measure for smooth problems with a convex set constraint. [49] discussed general nonsmooth nonconvex problems in Banach spaces by using the tool of the limiting Fréchet \(\epsilon \)-subdifferential. Ngai et al. [22] showed that under certain conditions \(\epsilon \)-KKT solutions converge to a stationary solution as \(\epsilon \rightarrow 0\). Here the \(\epsilon \)-KKT solution is defined by relaxing the complementary slackness and equilibrium equations of the KKT conditions. Ghadimi et al. [28] considered the following notion of \(\epsilon \)-stationary solution for (1.2):
where \(\gamma >0\) and V is a prox-function. They proposed a projected gradient algorithm to solve (1.2) and proved that it takes no more than \(O({1}/{\epsilon ^2})\) iterations to find an x satisfying
Our definition of an \(\epsilon \)-stationary solution for (1.2) is as follows.
Definition 2.2
We call x an \(\epsilon \)-stationary solution (\(\epsilon \ge 0\)) for (1.2) if the following holds:
If \(\epsilon =0\), then x is called a stationary solution for (1.2).
Observe that if \(r(\cdot )\) is continuous then any cluster point of \(\epsilon \)-stationary solutions defined above is a stationary solution for (1.2) as \(\epsilon \rightarrow 0\). Moreover, the stationarity condition is weaker than the usual KKT optimality condition. To see this, we first rewrite (1.2) as the following equivalent unconstrained problem
where \(\iota _S(x)\) is the indicator function of S. Suppose that x is any local minimizer of this problem and thus also a local minimizer of (1.2). Since f is differentiable and r and \(\iota _S\) are convex, Fermat’s rule [50] yields
which further implies that there exists some \(z \in \partial r (x)\) such that
Using the convexity of \(r(\cdot )\), it is equivalent to
Therefore, (2.6) is a necessary condition for local minimum of (1.2) as well.
Furthermore, we claim that \(\psi _S(x) \ge -\epsilon \) implies \(\Vert P_{S}(x,\gamma )\Vert _2^2 \le \epsilon /\gamma \) with the prox-function \(V(y,x)=\Vert y-x\Vert _2^2/2\). In fact, (2.2) guarantees that
for some \(z \in \partial r(x^+)\). By choosing \(y = x\) in (2.7) one obtains
Therefore, if \(\psi _S(x) \ge -\epsilon \), then \(\Vert P_{S}(x,\gamma ) \Vert _2^2 \le \frac{\epsilon }{\gamma } \) holds.
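In implementation terms, the quantity \(P_{S}(x,\gamma )\) above is a scaled difference between \(x\) and one prox-gradient step. A small sketch with \(r=0\), \(V(y,x)=\Vert y-x\Vert _2^2/2\), and \(S\) the unit Euclidean ball (the concrete data are hypothetical):

```python
import numpy as np

def prox_grad_map(x, grad, gamma, project):
    """Gradient mapping P_S(x, gamma) = (x - x_plus) / gamma, where x_plus is
    the minimizer of the linearized model with prox term V(y,x) = ||y-x||^2/2
    and r = 0, i.e. a projected gradient step onto S."""
    x_plus = project(x - gamma * grad)
    return (x - x_plus) / gamma

def proj_ball(z):
    """Euclidean projection onto the unit ball."""
    n = np.linalg.norm(z)
    return z if n <= 1.0 else z / n

x = np.array([0.6, 0.0])
g = np.array([2.0, 0.0])    # stand-in for the gradient of f at x
P = prox_grad_map(x, g, 0.25, proj_ball)
```

A point is (approximately) stationary in the sense of (2.3) precisely when this mapping is (approximately) zero.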
2.2 The algorithm
For given point z, we define an approximation of the objective function of (1.2) to be:
which is obtained by linearizing the smooth part (function f) of \(\Phi \) in (1.2). Our GCG method for solving (1.2) is described in Algorithm 1, where \(\rho \) and p are from Assumption 2.1.
In each iteration of Algorithm 1, we first perform an exact minimization of the approximated objective function \(\ell (y;x)\) to form a direction \(d_k\). Then the step size \(\alpha _k\) is obtained by an exact line search (which differentiates the GCG from a normal CG method) along the direction \(d_k\), where f is approximated by a p-th power function and the nonsmooth part is replaced by its upper bound. Finally, the iterate is updated by moving along the direction \(d_k\) with step size \(\alpha _k\).
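A compact sketch of one GCG iteration in this spirit is given below. The linear oracle, the toy data, and the grid-based line search are illustrative stand-ins (the line search in Algorithm 1 is exact); the step size here minimizes the model \(\alpha \nabla f(x^k)^{\top }d_k + \frac{\rho }{p}\alpha ^p\Vert d_k\Vert _p^p + \alpha (r(y^k)-r(x^k))\) over \(\alpha \in [0,1]\), which is one plausible reading of the description above.

```python
import numpy as np

def gcg_step(x, grad_f, r, lin_oracle, rho=1.0, p=2.0):
    """One generalized conditional gradient iteration: linearize f, call the
    linear minimization oracle over S, then line-search on an upper model."""
    g = grad_f(x)
    y = lin_oracle(g)            # argmin over S of the linearized objective
    d = y - x
    # approximate the exact line search by a fine grid on [0, 1]
    alphas = np.linspace(0.0, 1.0, 1001)
    model = (alphas * g.dot(d)
             + (rho / p) * alphas**p * np.sum(np.abs(d)**p)
             + alphas * (r(y) - r(x)))
    a = alphas[np.argmin(model)]
    return x + a * d

# Toy instance: f(x) = 0.5||x||^2, r = 0, S = l1 ball (vertex oracle).
grad_f = lambda x: x
r = lambda x: 0.0

def lin_oracle(g):
    """Minimize g^T y over the l1 ball: pick the best signed vertex."""
    i = np.argmax(np.abs(g))
    e = np.zeros_like(g)
    e[i] = -np.sign(g[i])
    return e

x0 = np.array([0.8, 0.2])
x1 = gcg_step(x0, grad_f, r, lin_oracle, rho=1.0, p=2.0)
```

Since each iterate is a convex combination of the previous iterate and an oracle point, feasibility with respect to \(S\) is preserved automatically, and for this toy instance the objective decreases.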
Note that here we assume that the subproblem in Step 1 of Algorithm 1 is relatively easy to solve. That is, we make the following assumption.
Assumption 2.3
All subproblems in Step 1 of Algorithm 1 can be solved relatively easily.
Remark 2.4
Assumption 2.3 is quite common in conditional gradient methods. For a list of functions r and sets S such that Assumption 2.3 is satisfied, see [34].
Remark 2.5
It is easy to see that the sequence \(\{ \Phi (x^k) \}\) generated by GCG is monotonically nonincreasing [14], which implies that any cluster point of \(\{ x^k \}\) cannot be a strict local maximizer.
2.3 An iteration complexity analysis
Before we proceed to the main result on iteration complexity of GCG, we need the following lemma that gives a sufficient condition for an \(\epsilon \)-stationary solution for (1.2). This lemma is inspired by [27], and it indicates that if the progress gained by minimizing (2.9) is small, then z must already be close to a stationary solution for (1.2).
Lemma 2.6
Define \(z_{\ell } := \mathop \mathrm{argmin}_{x\in S} \ \ell (x;z)\). The improvement of the linearization at point z is defined as
Given \(\epsilon \ge 0\), for any \(z \in S\), if \(\triangle \ell _z \le \epsilon \), then z is an \(\epsilon \)-stationary solution for (1.2) as defined in Definition 2.2.
Proof
From the definition of \({z}_{\ell }\), we have
which implies that
It then follows immediately that if \(\triangle \ell _z \le \epsilon \), then \(\nabla f(z)^{\top }(y-z)+ r(y) - r(z) \ge - \triangle \ell _z \ge - \epsilon \).\(\square \)
Denoting \(\Phi ^*\) to be the optimal value of (1.2), we are now ready to give the main result of the iteration complexity of GCG (Algorithm 1) for obtaining an \(\epsilon \)-stationary solution for (1.2).
Theorem 2.7
For any \(\epsilon \in (0, \mathrm {diam}^p_p(S) \rho ) \), GCG finds an \(\epsilon \)-stationary solution for (1.2) within \(\left\lceil \frac{2(\Phi (x^{0}) - \Phi ^*)(\mathrm {diam}^p_p(S)\rho )^{q-1}}{\epsilon ^q} \right\rceil \) iterations, where \(\frac{1}{p} + \frac{1}{q} =1\).
Proof
For ease of presentation, we denote \(D:= \mathrm {diam}_p(S)\) and \(\triangle \ell ^k := \triangle \ell _{x^k}\). By Assumption 2.1, using the fact that \(\frac{\epsilon }{D^p\rho } < 1\), and by the definition of \(\alpha _k\) in Algorithm 1, we have
where the third inequality is due to the convexity of function r and the fact that \(x^{k+1} - x^{k} = \alpha _k(y^{k}-x^{k})\), and the last inequality is due to (2.1). Furthermore, (2.10) immediately yields
For any integer \(K>0\), summing (2.11) over \(k=0,1,\ldots ,K-1\), yields
where \(\Phi ^*\) is the optimal value of (1.2). It is easy to see that by setting \(K = \left\lceil \frac{2(\Phi (x^{0}) - \Phi ^*)(D^p\rho )^{q-1}}{\epsilon ^q} \right\rceil \), the above inequality implies \(\triangle \ell _{x^{k^*}} \le \epsilon \), where \(k^*\in \mathop \mathrm{argmin}_{k\in \{0,\ldots ,K-1\}}\triangle \ell ^k\). According to Lemma 2.6, \(x^{k^*}\) is an \(\epsilon \)-stationary solution for (1.2) as defined in Definition 2.2.\(\square \)
Finally, if f is concave, then the iteration complexity can be improved to \(O(1/\epsilon )\).
Proposition 2.8
Suppose that f is a concave function. If we set \(\alpha _k = 1\) for all k in GCG (Algorithm 1), then it returns an \(\epsilon \)-stationary solution for (1.2) within \(\left\lceil \frac{\Phi (x^{0}) - \Phi ^*}{\epsilon } \right\rceil \) iterations.
Proof
By setting \(\alpha _k = 1\) in Algorithm 1 we have \(x^{k+1}=y^k\) for all k. Since f is concave, it holds that
Summing this inequality over \(k=0,1,\ldots ,K-1\) yields \(K \min _{k\in \{0,1,\ldots ,K-1\}}\triangle \ell ^k \le \Phi (x^{0}) - \Phi ^*,\) which leads to the desired result immediately.\(\square \)
3 Variants of ADMM for solving nonconvex problems with affine constraints
In this section, we study two variants of the ADMM (Alternating Direction Method of Multipliers) for solving the general problem (1.1), and analyze their iteration complexities for obtaining an \(\epsilon \)-stationary solution (to be defined later) under certain conditions. Throughout this section, the following two assumptions regarding problem (1.1) are assumed.
Assumption 3.1
The gradient of the function f is Lipschitz continuous with Lipschitz constant \(L>0\), i.e., for any \((x_1^1,\ldots ,x_N^1)\) and \((x_1^2,\ldots ,x_N^2)\in \mathcal {X}_1\times \cdots \times \mathcal {X}_{N-1}\times \mathbb {R}^{n_N}\), it holds that
which implies that for any \((x_1,\ldots ,x_{N-1})\in \mathcal {X}_1\times \cdots \times \mathcal {X}_{N-1}\) and \(x_N\), \(\hat{x}_N\in \mathbb {R}^{n_N}\), we have
Assumption 3.2
f and \(r_i, i=1,\ldots ,N-1\) are all lower bounded over the appropriate domains defined via the sets \(\mathcal {X}_1, \mathcal {X}_2, \ldots , \mathcal {X}_{N-1}, \mathbb {R}^{n_N}\), and we denote
and \(r_i^* = \inf \limits _{x_i\in \mathcal {X}_i} \ \{r_i(x_i)\}\) for \(i=1,2,\ldots ,N-1\).
3.1 Preliminaries
To characterize the optimality conditions for (1.1) when \(r_i\) is nonsmooth and nonconvex, we need to recall the notion of the generalized gradient (see, e.g., [50]).
Definition 3.3
Let \(h:\mathbb {R}^{n} \rightarrow \mathbb {R}\cup \{+\infty \}\) be a proper lower semi-continuous function. Suppose \(h(\bar{x})\) is finite for a given \(\bar{x}\). For \(v \in \mathbb {R}^n\), we say that
-
(i).
v is a regular subgradient (also called Fréchet subdifferential) of h at \(\bar{x}\), written \(v \in \hat{\partial }h(\bar{x})\), if
$$\begin{aligned} \liminf _{x \rightarrow \bar{x},\, x \ne \bar{x}}\frac{h(x) - h(\bar{x}) - \langle v, x - \bar{x} \rangle }{\Vert x - \bar{x}\Vert } \ge 0; \end{aligned}$$ -
(ii).
v is a general subgradient of h at \(\bar{x}\), written \(v \in \partial h(\bar{x})\), if there exist sequences \(\{x^k\}\) and \(\{v^k\}\) such that \(x^{k} \rightarrow \bar{x}\) with \(h(x^{k}) \rightarrow h(\bar{x})\), and \(v^k \in \hat{\partial }h(x^k)\) with \(v^k \rightarrow v\) when \(k \rightarrow \infty \).
The following proposition lists some well-known facts about the lower semi-continuous functions.
Proposition 3.4
Let \(h:\mathbb {R}^{n} \rightarrow \mathbb {R}\cup \{+\infty \}\) and \(g:\mathbb {R}^{n} \rightarrow \mathbb {R}\cup \{+\infty \}\) be proper lower semi-continuous functions. Then it holds that:
-
(i)
(Theorem 10.1 in [50]) Fermat’s rule remains true: if \(\bar{x}\) is a local minimum of h, then \(0 \in \partial h(\bar{x})\).
-
(ii)
If \(h(\cdot )\) is continuously differentiable at x, then \(\partial (h + g) (x) = \nabla h(x) + \partial g(x)\).
-
(iii)
(Exercise 10.10 in [50]) If h is locally Lipschitz continuous at x, then \(\partial (h + g) (x) \subset \partial h(x) + \partial g(x)\).
-
(iv)
Suppose h(x) is locally Lipschitz continuous, X is a closed and convex set, and \(\bar{x}\) is a local minimum of h on X. Then there exists \(v \in \partial h(\bar{x})\) such that \((x - \bar{x})^{\top } v \ge 0,\forall x\in X\).
In our analysis, we frequently use the following identity that holds for any vectors a, b, c, d,
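Presumably this is the well-known expansion \(2(a-b)^{\top }(c-d) = \Vert a-d\Vert ^2 - \Vert a-c\Vert ^2 + \Vert b-c\Vert ^2 - \Vert b-d\Vert ^2\), which can be checked numerically:

```python
import numpy as np

# Verify the four-vector identity on random data (any vectors of equal length).
rng = np.random.default_rng(0)
a, b, c, d = (rng.standard_normal(5) for _ in range(4))

lhs = 2 * (a - b) @ (c - d)
rhs = (np.sum((a - d)**2) - np.sum((a - c)**2)
       + np.sum((b - c)**2) - np.sum((b - d)**2))
```

Expanding both squared norms shows every quadratic term cancels, leaving exactly the cross terms \(2a^{\top }c - 2a^{\top }d - 2b^{\top }c + 2b^{\top }d\).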
3.2 An \(\epsilon \)-stationary solution for problem (1.1)
We now introduce notions of \(\epsilon \)-stationarity for (1.1) under the following two settings: (i) Setting 1: \(r_i\) is Lipschitz continuous, and \(\mathcal {X}_i\) is a compact set, for \(i=1,\ldots ,N-1\); (ii) Setting 2: \(r_i\) is lower semi-continuous, and \(\mathcal {X}_i = \mathbb {R}^{n_i}\), for \(i=1,\ldots ,N-1\).
Definition 3.5
(\(\epsilon \)-stationary solution for (1.1) in Setting 1) Under the conditions in Setting 1, for \(\epsilon \ge 0\), we call \({\left( x_1^*,\ldots , x_N^* \right) }\) an \(\epsilon \)-stationary solution for (1.1) if there exists a Lagrange multiplier \(\lambda ^*\) such that the following holds for any \(\left( x_1,\ldots ,x_N\right) \in \mathcal {X}_1\times \cdots \times \mathcal {X}_{N-1}\times \mathbb {R}^{n_N}\):
where \({g}^*_i\) is a general subgradient of \(r_i\) at point \(x_i^*\). If \(\epsilon =0\), we call \({ \left( x_1^*,\ldots , x_N^* \right) }\) a stationary solution for (1.1).
If \(\mathcal {X}_i = \mathbb {R}^{n_i}\) for \(i=1,\ldots , N-1\), then the VI style conditions in Definition 3.5 reduce to the following.
Definition 3.6
Under the conditions in Setting 2, for \(\epsilon \ge 0\), we call \({ \left( x_1^*,\ldots , x_N^*\right) }\) an \(\epsilon \)-stationary solution for (1.1) if there exists a Lagrange multiplier \(\lambda ^*\) such that (3.5), (3.6) and the following hold for any \(\left( x_1,\ldots ,x_N\right) \in \mathcal {X}_1\times \cdots \times \mathcal {X}_{N-1}\times \mathbb {R}^{n_N}\):
where \({\partial } r_i(x_i^*)\) is the general subgradient of \(r_i\) at \(x_i^*\), \(i=1,2,\ldots ,N-1\). If \(\epsilon =0\), we call \({ \left( x_1^*,\ldots , x_N^*\right) }\) a stationary solution for (1.1).
The two settings of problem (1.1) considered in this section and their corresponding definitions of \(\epsilon \)-stationary solution, are summarized in Table 1.
A very recent work of Hong [32] proposes a definition of an \(\epsilon \)-stationary solution for problem (1.5), and analyzes the iteration complexity of a proximal augmented Lagrangian method for obtaining such a solution. Specifically, \((x^*,\lambda ^*)\) is called an \(\epsilon \)-stationary solution for (1.5) in [32] if \(Q(x^*,\lambda ^*) \le \epsilon \), where
and \(\mathcal {L}_\beta (x,\lambda ) := f(x) - \lambda ^\top \left( Ax - b\right) + \frac{\beta }{2}\left\| Ax - b\right\| ^2\) is the augmented Lagrangian function of (1.5). Note that [32] assumes that f is differentiable and has bounded gradient in (1.5). It is easy to show that an \(\epsilon \)-stationary solution in [32] is equivalent to an \(O(\sqrt{\epsilon })\)-stationary solution for (1.1) according to Definition 3.6 with \(r_i=0\) and f being differentiable. Note that there is no set constraint in (1.5), and so the notion of the \(\epsilon \)-stationarity in [32] is not applicable in the case of Definition 3.5.
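Proposition 3.7 below controls \(Q\) through two quantities: the squared norm of the gradient of the augmented Lagrangian in x, and the squared feasibility gap. A sketch evaluating both (how exactly they are combined into \(Q\) is not restated here, so the function returns them separately):

```python
import numpy as np

def stationarity_terms(grad_fx, A, b, x, lam, beta):
    """The two ingredients entering the bounds of Proposition 3.7:
    ||grad f(x) - A' lam + beta A'(Ax - b)||^2  and  ||Ax - b||^2."""
    g = grad_fx - A.T @ lam + beta * A.T @ (A @ x - b)
    return float(g @ g), float((A @ x - b) @ (A @ x - b))

# Toy check: for f(x) = 0.5||x||^2 (so grad f(x) = x) with A = I and b = 0,
# the point x = 0 with lam = 0 makes both terms vanish.
A = np.eye(2)
b = np.zeros(2)
x = np.zeros(2)
lam = np.zeros(2)
terms = stationarity_terms(x, A, b, x, lam, beta=1.0)
```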
Proposition 3.7
Consider the \(\epsilon \)-stationary solution in Definition 3.6 applied to problem (1.5), i.e., one block variable and \(r_i(x)=0\). If \(x^*\) is a \(\gamma _1\sqrt{{\epsilon }}\)-stationary solution in the sense of Definition 3.6, with Lagrange multiplier \(\lambda ^*\) and \(\gamma _1 = 1/(\sqrt{2\beta ^2\Vert A\Vert _2^2+3})\), then \(Q(x^*,\lambda ^*) \le \epsilon \). Conversely, if \(Q(x^*,\lambda ^*) \le \epsilon \), then \(x^*\) is a \(\gamma _2\sqrt{{\epsilon }}\)-stationary solution in the sense of Definition 3.6 with Lagrange multiplier \(\lambda ^*\), where \(\gamma _2 = \sqrt{2 (1 + \beta ^2 \Vert A\Vert _2^2)}\).
Proof
Suppose \(x^*\) is a \(\gamma _1\sqrt{{\epsilon }}\)-stationary solution as defined in Definition 3.6. We have \(\Vert \nabla f(x^*) - A^{\top }\lambda ^* \Vert \le \gamma _1\sqrt{\epsilon }\) and \(\Vert Ax^* - b\Vert \le \gamma _1\sqrt{\epsilon }\), which implies that
On the other hand, if \(Q(x^*,\lambda ^*)\le \epsilon \), then we have \(\Vert \nabla f(x^*) - A^{\top }\lambda ^* + \beta A^{\top } (Ax^* - b)\Vert ^2 \le \epsilon \) and \(\Vert Ax^* - b\Vert ^2 \le \epsilon \). Therefore,
The desired result then follows immediately.\(\square \)
In the following, we introduce two variants of ADMM, to be called proximal ADMM-g and proximal ADMM-m, that solve (1.1) under some additional assumptions on \(A_N\). In particular, proximal ADMM-g assumes \(A_N=I\), and proximal ADMM-m assumes \(A_N\) to have full row rank.
3.3 Proximal gradient-based ADMM (proximal ADMM-g)
Our proximal ADMM-g solves (1.1) under the condition that \(A_N=I\). In this case, the problem reduces to the so-called sharing problem in the literature, which has the following form
For applications of the sharing problem, see [12, 33, 39, 40]. Our proximal ADMM-g for solving (1.1) with \(A_N=I\) is described in Algorithm 2. It can be seen from Algorithm 2 that proximal ADMM-g is based on the framework of augmented Lagrangian method, and can be viewed as a variant of the ADMM. The augmented Lagrangian function of (1.1) is defined as
where \(\lambda \) is the Lagrange multiplier associated with the affine constraint, and \(\beta >0\) is a penalty parameter. In each iteration, proximal ADMM-g minimizes the augmented Lagrangian function plus a proximal term for block variables \(x_1,\ldots ,x_{N-1}\), with the other variables being fixed; then a gradient descent step is conducted for \(x_N\), and finally the Lagrange multiplier \(\lambda \) is updated. The interested reader is referred to [26] for gradient-based ADMM and its various stochastic variants for convex optimization.
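Schematically, one iteration of proximal ADMM-g might be organized as below. The subproblem solver, the quadratic toy instance, and the parameter values are illustrative assumptions only; admissible \(\beta \) and \(\gamma \) are those prescribed by (3.8) and (3.9), not the ad hoc values used here.

```python
import numpy as np

def proximal_admm_g_step(x, lam, grad_f_last, solve_block, A, b, beta, gamma):
    """One pass of a proximal gradient-based ADMM for
        min f(x_1,...,x_N) + sum_i r_i(x_i)  s.t.  sum_{i<N} A_i x_i + x_N = b
    (i.e. A_N = I).  'solve_block(i, x, lam)' is a placeholder returning the
    minimizer of the i-th proximal augmented-Lagrangian subproblem."""
    N = len(x)
    for i in range(N - 1):                       # prox updates, blocks 1..N-1
        x[i] = solve_block(i, x, lam)
    resid = sum(A[i] @ x[i] for i in range(N - 1)) + x[-1] - b
    x[-1] = x[-1] - gamma * (grad_f_last(x) - lam + beta * resid)  # gradient step
    resid = sum(A[i] @ x[i] for i in range(N - 1)) + x[-1] - b
    lam = lam - beta * resid                     # dual update
    return x, lam

# Toy instance: N = 2, f = 0.5||x_1||^2 + 0.5||x_2||^2, r_1 = 0, A_1 = I.
beta, gamma, h = 2.0, 0.2, 1.0
A = [np.eye(2)]
b = np.array([1.0, 1.0])

def solve_block(i, x, lam):
    # exact minimizer of 0.5||x_1||^2 - lam'(x_1 + x_2 - b)
    # + (beta/2)||x_1 + x_2 - b||^2 + (h/2)||x_1 - x_1^k||^2
    return (lam - beta * (x[1] - b) + h * x[0]) / (1.0 + beta + h)

x = [np.zeros(2), np.zeros(2)]
lam = np.zeros(2)
for _ in range(500):
    x, lam = proximal_admm_g_step(x, lam, lambda x: x[-1], solve_block,
                                  A, b, beta, gamma)
```

For this convex toy problem the iterates approach the solution \(x_1 = x_2 = b/2\), which makes the structure of the three update stages easy to check numerically.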
Remark 3.8
Note that here we actually assume that all subproblems in Step 1 of Algorithm 2, though possibly nonconvex, can be solved to global optimality. Many important problems arising from statistics satisfy this assumption. In fact, when the coupled objective is absent or can be linearized, with a properly chosen matrix \(H_i\), the solution of the corresponding subproblem is given by the proximal mapping of \(r_i\). As we mentioned earlier, many nonconvex regularization functions such as SCAD, LSP, MCP and capped-\(\ell _1\) admit closed-form proximal mappings. Moreover, in Algorithm 2, we can choose
and
which guarantee the convergence rate of the algorithm as shown in Lemma 3.9 and Theorem 3.12.
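As an illustration of such closed-form proximal mappings, the \(\ell _1\) case is the classical soft-thresholding operator, and the (nonconvex) capped-\(\ell _1\) prox can be obtained by comparing the minimizers on its two pieces. The sketch below uses generic scalar weights, not tied to a specific \(H_i\):

```python
import numpy as np

def prox_l1(v, t):
    """Soft-thresholding: argmin_x  t*||x||_1 + 0.5*||x - v||^2, componentwise."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_capped_l1(v, lam, theta):
    """Componentwise prox of r(x) = lam * min(|x|, theta): the objective is
    piecewise quadratic, so compare the minimizer on |x| <= theta with the
    minimizer on |x| >= theta and keep the better one."""
    v = np.asarray(v, dtype=float)
    c1 = np.clip(prox_l1(v, lam), -theta, theta)      # piece |x| <= theta
    c2 = np.sign(v) * np.maximum(np.abs(v), theta)    # piece |x| >= theta
    def h(x):
        return lam * np.minimum(np.abs(x), theta) + 0.5 * (x - v)**2
    return np.where(h(c1) <= h(c2), c1, c2)

z = prox_l1(np.array([3.0, -0.5, 1.2]), 1.0)
w = prox_capped_l1(np.array([3.0, 0.5]), 1.0, 1.0)
```

Note how the capped-\(\ell _1\) prox leaves large entries unshrunk (the cap removes the bias of \(\ell _1\) on large coefficients) while still zeroing out small ones.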
Before presenting the main result on the iteration complexity of proximal ADMM-g, we need some lemmas.
Lemma 3.9
Suppose the sequence \(\{(x_1^k,\ldots ,x_N^k, \lambda ^k)\}\) is generated by Algorithm 2. The following inequality holds
Proof
Note that Steps 2 and 3 of Algorithm 2 yield that
Combining (3.11) and (3.1) yields that
\(\square \)
We now define the following function, which will play a crucial role in our analysis:
Lemma 3.10
Suppose the sequence \(\{(x_1^k,\ldots ,x_N^k, \lambda ^k)\}\) is generated by Algorithm 2, where the parameters \(\beta \) and \(\gamma \) are taken according to (3.8) and (3.9) respectively. Then \(\Psi _G(x_1^{k+1}, \ldots , x_N^{k+1}, \lambda ^{k+1}, x_N^k)\) monotonically decreases over \(k\ge 0\).
Proof
From Step 1 of Algorithm 2 it is easy to see that
From Step 2 of Algorithm 2 we get that
where the inequality follows from (3.2) and (3.3). Moreover, the following equality holds trivially
Combining (3.13), (3.14), (3.15) and (3.10) yields that
which further implies that
It is easy to verify that when \(\beta > \frac{18\sqrt{3}+6}{13}L\), then \(\gamma \) defined as in (3.9) ensures that \(\gamma >0\) and
Therefore, choosing \(\beta > \max \left( \frac{18\sqrt{3}+6}{13}L,\; \max \limits _{i=1,2,\ldots ,N-1}\frac{6 L^2}{\sigma _{\min }(H_i)}\right) \) and \(\gamma \) as in (3.9) guarantees that \(\Psi _G(x_1^{k+1}, \ldots , x_N^{k+1}, \lambda ^{k+1}, x_N^k)\) monotonically decreases over \(k\ge 0\). In fact, (3.17) can be verified as follows. By denoting \(z=\beta -\frac{1}{\gamma }\), (3.17) is equivalent to
which holds when \(\beta > \frac{18\sqrt{3}+6}{13}L\) and \(\frac{-\beta - \sqrt{13\beta ^2 - 12\beta L - 72L^2}}{12}< z < \frac{-\beta + \sqrt{13\beta ^2 - 12\beta L - 72L^2}}{12}\), i.e.,
which holds when \(\gamma \) is chosen as in (3.9).\(\square \)
Lemma 3.11
Suppose the sequence \(\{(x_1^k,\ldots ,x_N^k, \lambda ^k)\}\) is generated by Algorithm 2. Under the same conditions as in Lemma 3.10, for any \(k\ge 0\), we have
where \(r_i^*\) and \(f^*\) are defined in Assumption 3.2.
Proof
Note that from (3.11), we have
where the first inequality follows from (3.2), and the second inequality is due to \(\beta \ge 3L/2\). The desired result follows from the definition of \(\Psi _G\) in (3.12).\(\square \)
Now we are ready to give the iteration complexity of Algorithm 2 for finding an \(\epsilon \)-stationary solution of (1.1).
Theorem 3.12
Suppose the sequence \(\{(x_1^k,\ldots ,x_N^k, \lambda ^k)\}\) is generated by Algorithm 2. Furthermore, suppose that \(\beta \) satisfies (3.8) and \(\gamma \) satisfies (3.9). Denote
and
Then to get an \(\epsilon \)-stationary solution, the number of iterations that the algorithm runs can be upper bounded by:
and we can further identify one iteration \(\hat{k} \in \mathop \mathrm{argmin}\limits _{2\le k \le K+1} \sum _{i=1}^N \left( \Vert x_i^k - x_i^{k+1} \Vert ^2 + \Vert x_i^{k-1} - x_i^k \Vert ^2\right) \) such that \((x_1^{\hat{k}}, \ldots , x_N^{\hat{k}} )\) is an \(\epsilon \)-stationary solution for optimization problem (1.1) with Lagrange multiplier \(\lambda ^{\hat{k}}\) and \(A_N=I\), for Settings 1 and 2 respectively.
Proof
For ease of presentation, denote
By summing (3.16) over \(k=1,\ldots ,K\), we obtain that
where \(\tau \) is defined in (3.18). By invoking Lemmas 3.10 and 3.11, we get
We now derive upper bounds on the terms in (3.5) and (3.6) through \(\theta _k\). Note that (3.11) implies that
which yields
From Step 3 of Algorithm 2 and (3.10) it is easy to see that
We now derive upper bounds on the terms in (3.4) and (3.7) under the two settings in Table 1, respectively.
Setting 2 Because \(r_i\) is lower semi-continuous and \(\mathcal {X}_i = \mathbb {R}^{n_i}\), \(i=1,\ldots ,N-1\), it follows from Step 1 of Algorithm 2 that there exists a general subgradient \(g_i\in \partial r_i(x_i^{k+1})\) such that
By combining (3.24), (3.22) and (3.23) we conclude that Algorithm 2 returns an \(\epsilon \)-stationary solution for (1.1) according to Definition 3.6 under the conditions of Setting 2 in Table 1.
Setting 1 Under this setting, we know \(r_i\) is Lipschitz continuous and \(\mathcal {X}_i \subset \mathbb {R}^{n_i}\) is convex and compact. From Assumption 3.1 and the fact that \(\mathcal {X}_i\) is compact, we know \(r_i(x_i) + f(x_1,\ldots , x_N)\) is locally Lipschitz continuous with respect to \(x_i\) for \(i=1,2,\ldots , N-1\). Similar to (3.24), for any \(x_i \in \mathcal {X}_i\), Step 1 of Algorithm 2 yields that
where \(g_i\in \partial r_i(x_i^{k+1})\) is a general subgradient of \(r_i\) at \(x_i^{k+1}\). By combining (3.25), (3.22) and (3.23) we conclude that Algorithm 2 returns an \(\epsilon \)-stationary solution for (1.1) according to Definition 3.5 under the conditions of Setting 1 in Table 1.\(\square \)
Remark 3.13
Note that the potential function \(\Psi _G\) defined in (3.12) is related to the augmented Lagrangian function. The augmented Lagrangian function has been used as a potential function in analyzing the convergence of nonconvex splitting and ADMM methods in [2, 31,32,33, 38]. See [32] for a more detailed discussion on this.
Remark 3.14
In Step 1 of Algorithm 2, we can also replace the function
by its linearization
so that the subproblem can be solved by computing the proximal mappings of \(r_i\), with some properly chosen matrix \(H_i\) for \(i =1, \ldots , N-1\), and the same iteration bound still holds.
3.4 Proximal majorization ADMM (proximal ADMM-m)
Our proximal ADMM-m solves (1.1) under the condition that \(A_N\) has full row rank. In this section, we use \(\sigma _N\) to denote the smallest eigenvalue of \(A_NA_N^\top \). Note that \(\sigma _N >0\) because \(A_N\) has full row rank. Our proximal ADMM-m can be described as follows
In Algorithm 3, \(U(x_1,\ldots ,x_{N-1},x_N,\lambda ,\bar{x})\) is defined as
Moreover, \(\beta \) can be chosen as
to guarantee the convergence rate of the algorithm shown in Lemma 3.16 and Theorem 3.18.
It is worth noting that proximal ADMM-m and proximal ADMM-g differ only in Step 2: Step 2 of proximal ADMM-g takes a gradient step of the augmented Lagrangian function with respect to \(x_N\), while Step 2 of proximal ADMM-m minimizes a quadratic function of \(x_N\) exactly.
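This contrast can be made concrete with a small Python sketch. This is our own schematic, not the paper's pseudo-code: we assume \(A_N=I\) in the gradient step, and write the ADMM-m quadratic with the Lipschitz constant \(L\) as the majorization weight; the exact updates are those stated in Algorithms 2 and 3.

```python
import numpy as np

def xN_step_admm_g(xN, grad_fN, lam, residual, gamma, beta):
    # proximal ADMM-g: one gradient step on the augmented Lagrangian in x_N,
    # assuming A_N = I; `residual` is the current affine-constraint residual.
    return xN - gamma * (grad_fN - lam + beta * residual)

def xN_step_admm_m(xN, grad_fN, lam, A_N, partial_res, L, beta):
    # proximal ADMM-m: exact minimizer of a quadratic majorization in x_N,
    #   min_x <grad_fN, x - xN> + (L/2)||x - xN||^2 - lam^T A_N x
    #         + (beta/2)||A_N x + partial_res||^2,
    # where partial_res collects sum_{i<N} A_i x_i - b.
    n = xN.size
    H = L * np.eye(n) + beta * A_N.T @ A_N
    rhs = L * xN - grad_fN + A_N.T @ lam - beta * A_N.T @ partial_res
    return np.linalg.solve(H, rhs)
```

The gradient step costs one matrix-vector product per iteration, whereas the exact minimization requires solving an \(n_N\times n_N\) linear system whose matrix is fixed across iterations and can be factorized once.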
We provide some lemmas that are useful in analyzing the iteration complexity of proximal ADMM-m for solving (1.1).
Lemma 3.15
Suppose the sequence \(\{(x_1^k,\ldots ,x_N^k,\lambda ^k)\}\) is generated by Algorithm 3. The following inequality holds
Proof
From the optimality conditions of Step 2 of Algorithm 3, we have
where the second equality is due to Step 3 of Algorithm 3. Therefore, we have
\(\square \)
We define the following function that will be used in the analysis of proximal ADMM-m:
Similar to the function used in proximal ADMM-g, we can prove the monotonicity and boundedness of function \(\Psi _L\).
Lemma 3.16
Suppose the sequence \(\{(x_1^k,\ldots ,x_N^k,\lambda ^k)\}\) is generated by Algorithm 3, where \(\beta \) is chosen according to (3.26). Then \(\Psi _L(x_1^{k+1},\ldots ,x_N^{k+1},\lambda ^{k+1},x_N^k)\) monotonically decreases over \(k>0\).
Proof
By Step 1 of Algorithm 3 one observes that
while by Step 2 of Algorithm 3 we have
where the first inequality is due to (3.2) and (3.3). Moreover, from (3.27) we have
Combining (3.28), (3.29) and (3.30) yields that
which further implies that
where the second inequality is due to (3.26). This completes the proof.\(\square \)
The following lemma shows that the function \(\Psi _L\) is lower bounded.
Lemma 3.17
Suppose the sequence \(\{(x_1^k,\ldots ,x_N^k,\lambda ^k)\}\) is generated by Algorithm 3. Under the same conditions as in Lemma 3.16, the sequence \(\{\Psi _L(x_1^{k+1},\ldots ,x_N^{k+1}, \lambda ^{k+1}, x_N^k)\}\) is bounded from below.
Proof
From Step 3 of Algorithm 3 we have
where the third equality follows from (3.3). Summing this inequality over \(k=0,1,\ldots ,K-1\) for any integer \(K\ge 1\) yields that
Lemma 3.16 stipulates that \(\{ \Psi _L(x_1^{k+1}, \ldots , x_N^{k+1}, \lambda ^{k+1}, x_N^k) \}\) is a monotonically decreasing sequence; the above inequality thus further implies that the entire sequence is bounded from below.\(\square \)
We are now ready to give the iteration complexity of proximal ADMM-m, whose proof is similar to that of Theorem 3.12.
Theorem 3.18
Suppose the sequence \(\{(x_1^k,\ldots ,x_N^k,\lambda ^k) \}\) is generated by proximal ADMM-m (Algorithm 3), and \(\beta \) satisfies (3.26). Denote
and
Then to get an \(\epsilon \)-stationary solution, the number of iterations that the algorithm runs can be upper bounded by:
and we can further identify one iteration \(\hat{k} \in \mathop \mathrm{argmin}\limits _{2\le k \le K+1} \sum _{i=1}^N \left( \Vert x_i^k - x_i^{k+1} \Vert ^2 + \Vert x_i^{k-1} - x_i^k \Vert ^2\right) \), such that \((x_1^{\hat{k}}, \ldots , x_N^{\hat{k}})\) is an \(\epsilon \)-stationary solution for (1.1) with Lagrange multiplier \(\lambda ^{\hat{k}}\) and \(A_N\) being full row rank, for Settings 1 and 2 respectively.
Proof
By summing (3.31) over \(k=1,\ldots ,K\), we obtain that
where \(\tau \) is defined in (3.33). From Lemma 3.17 we know that there exists a constant \(\Psi _L^*\) such that \(\Psi _L(x_1^{k+1}, \ldots , x_N^{k+1}, \lambda ^{k+1}, x_N^k) \ge \Psi _L^*\) holds for any \(k\ge 1\). Therefore,
where \(\theta _k\) is defined in (3.20), i.e., for K defined as in (3.34), \(\theta _{\hat{k}}=O(\epsilon ^2)\).
We now give upper bounds to the terms in (3.5) and (3.6) through \(\theta _k\). Note that Step 2 of Algorithm 3 implies that
which implies that
By Step 3 of Algorithm 3 and (3.27) we have
It remains to bound the terms in (3.4) and (3.7). Since the steps are almost identical to those in the proof of Theorem 3.12, we only provide the key inequalities below.
Setting 2 Under conditions in Setting 2 in Table 1, the inequality (3.24) becomes
By combining (3.39), (3.37) and (3.38) we conclude that Algorithm 3 returns an \(\epsilon \)-stationary solution for (1.1) according to Definition 3.6 under the conditions of Setting 2 in Table 1.
Setting 1 Under conditions in Setting 1 in Table 1, the inequality (3.25) becomes
By combining (3.40), (3.37) and (3.38) we conclude that Algorithm 3 returns an \(\epsilon \)-stationary solution for (1.1) according to Definition 3.5 under the conditions of Setting 1 in Table 1.\(\square \)
Remark 3.19
In Step 1 of Algorithm 3, we can replace the function \(f(x_1^{k+1},\ldots ,x_{i-1}^{k+1},x_i,x_{i+1}^k,\ldots ,x_N^k)\) by its linearization
Under the same conditions as in Remark 3.14, the same iteration bound follows by slightly modifying the analysis above.
4 Extensions
4.1 Relaxing the assumption on the last block variable \(x_N\)
It is noted that in (1.1), we have some restrictions on the last block variable \(x_N\), i.e., \(r_N\equiv 0\) and \(A_N=I\) or is full row rank. In this subsection, we show how to remove these restrictions and consider the more general problem
where \(x_i\in \mathbb {R}^{n_i}\) and \(A_i\in \mathbb {R}^{m\times n_i}\), \(i=1,\ldots ,N\).
Before proceeding, we make the following assumption on (4.1).
Assumption 4.1
Denote \(n = n_1+\cdots +n_N\). For any compact set \(S\subseteq \mathbb {R}^n\), and any sequence \(\lambda ^j\in \mathbb {R}^m\) with \(\Vert \lambda ^j\Vert \rightarrow \infty \), \(j=1,2,\ldots \), the following limit
holds uniformly for all \((x_1,\ldots ,x_N)\in S\), where \(A = [A_1,\ldots ,A_N]\).
Note that the above assumption implies that A has full row rank. Furthermore, if f is continuously differentiable, A has full row rank, and \(\partial r_i (S) := \bigcup _{x \in S} \partial r_i(x) \) is compact for any compact set S, then Assumption 4.1 trivially holds. On the other hand, for popular nonconvex regularization functions, such as SCAD, MCP and the Capped \(\ell _1\)-norm, it can be shown that the corresponding set \(\partial r_i (S)\) is indeed compact for any compact set S, and so Assumption 4.1 holds in all these cases.
We introduce the following problem that is closely related to (4.1):
where \(\epsilon >0\) is the target tolerance, and \(\mu (\epsilon )\) is a function of \(\epsilon \) which will be specified later. Now, proximal ADMM-m is ready to be used for solving (4.2) because \(A_{N+1}=I\) and y is unconstrained. We have the following iteration complexity result for proximal ADMM-m to obtain an \(\epsilon \)-stationary solution of (4.1); proximal ADMM-g can be analyzed similarly.
Theorem 4.2
Consider problem (4.1) under Setting 2 in Table 1. Suppose that Assumption 4.1 holds, and the objective in (4.1), i.e., \(f+\sum _{i=1}^Nr_i\), has a bounded level set. Furthermore, suppose that f has a Lipschitz continuous gradient with Lipschitz constant L, and A is of full row rank. Now let the sequence \(\{(x_1^k,\ldots ,x_{N}^k, y^k, \lambda ^k)\}\) be generated by proximal ADMM-m for solving (4.2) with initial iterates \(y^0=\lambda ^0=0\), and \((x_1^0,\ldots ,x_N^0)\) such that \(\sum _{i=1}^NA_ix_i^0=b\). Assume that the target tolerance \(\epsilon \) satisfies
Then in no more than \(O(1/\epsilon ^4)\) iterations we will reach an iterate \((x_1^{{\hat{K}}+1}, \ldots , x_{N}^{{\hat{K}}+1}, y^{{\hat{K}}+1})\) that is an \(\epsilon \)-stationary solution for (4.2) with Lagrange multiplier \(\lambda ^{{\hat{K}}+1}\). Moreover, \((x_1^{{\hat{K}}+1}, \ldots , x_{N}^{{\hat{K}}+1})\) is an \(\epsilon \)-stationary solution for (4.1) with Lagrange multiplier \(\lambda ^{\hat{K}+1}\).
Proof
Denote the penalty parameter as \(\beta (\epsilon )\). The augmented Lagrangian function of (4.2) is given by
Now we set
From (4.3) we have \(\mu (\epsilon ) > L\). This implies that the Lipschitz constant of \(f(x_1,x_2,\ldots , x_N) + \frac{\mu (\epsilon )}{2} \Vert y \Vert ^2\), which is the smooth part of the objective in (4.2), is equal to \(\mu (\epsilon )\). Then from the optimality conditions of Step 2 of Algorithm 3, we have \(\mu (\epsilon ) y^{k-1} - \lambda ^k - \mu (\epsilon ) (y^{k-1}-y^k) = 0\), which further implies that \(\mu (\epsilon )y^k=\lambda ^k, \forall k\ge 1\).
Similar to Lemma 3.16, we can prove that \(\mathcal {L}_{\beta (\epsilon )}(x_1^k,\ldots ,x_N^k,y^k,\lambda ^k)\) monotonically decreases. Specifically, since \(\mu (\epsilon )y^k=\lambda ^k\), combining (3.28), (3.29) and the equality in (3.30) yields,
where the last inequality is due to (4.4).
Similar to Lemma 3.17, we can prove that \(\mathcal {L}_{\beta (\epsilon )}(x_1^k,\ldots ,x_N^k,y^k,\lambda ^k)\) is bounded from below, i.e., there exists a constant \(\mathcal {L}^*=f^* + \sum _{i=1}^N r_i^*\) such that
Actually the following inequalities lead to the above fact:
where the second equality is from \(\mu (\epsilon )y^k=\lambda ^k\), and the last inequality is due to (4.4). Moreover, denote \(\mathcal {L}^0\equiv \mathcal {L}_{\beta (\epsilon )}(x_1^0,\ldots ,x_N^0,y^0,\lambda ^0)\), which is a constant independent of \(\epsilon \).
Furthermore, for any integer \(K\ge 1\), summing (4.5) over \(k=0,\ldots ,K\) yields
where \(\theta _k:=\sum _{i=1}^N\Vert x_i^k-x_i^{k+1}\Vert ^2+ \Vert y^k-y^{k+1}\Vert ^2\). Note that (4.7) and (4.6) imply that
Similar to (3.24), it can be shown that for \(i=1,\ldots ,N\),
Set \(K= 1/\epsilon ^4\) and denote \(\hat{K} = \mathop \mathrm{argmin}_{0\le k\le K}\theta _k\). Then we know \(\theta _{\hat{K}} = O(\epsilon ^4)\). As a result,
Note that (4.6) also implies that \(f(x_1^k,\ldots ,x_N^k)+\sum _{i=1}^Nr_i(x_i^k)\) is upper-bounded by a constant. Thus, from the assumption that the level set of the objective is bounded, we know \((x_1^k,\ldots ,x_N^k)\) is bounded. Then Assumption 4.1 implies that \(\lambda ^k\) is bounded, which results in \(\Vert y^k\Vert = O(\epsilon )\). Therefore, from (4.10) we have
which, combined with (4.9), yields that \((x_1^{\hat{K}+1}, \ldots , x_N^{\hat{K}+1})\) is an \(\epsilon \)-stationary solution for (4.1) with Lagrange multiplier \(\lambda ^{\hat{K}+1}\), according to Definition 3.6.\(\square \)
Remark 4.3
Without Assumption 4.1, we can still provide an iteration complexity of proximal ADMM-m, but the complexity bound is worse than \(O(1/\epsilon ^4)\). To see this, note that because \(\mathcal {L}_{\beta (\epsilon )}(x_1^k,\ldots ,x_N^k,y^k,\lambda ^k)\) monotonically decreases, the first inequality in (4.6) implies that
Therefore, by setting \(K=1/\epsilon ^6\), \(\mu (\epsilon )=1/\epsilon ^2\) and \(\beta (\epsilon )=3/\epsilon ^2\) instead of (4.4), and combining (4.9) and (4.11), we conclude that \((x_1^{\hat{K}+1}, \ldots , x_N^{\hat{K}+1})\) is an \(\epsilon \)-stationary solution for (4.1) with Lagrange multiplier \(\lambda ^{\hat{K}+1}\), according to Definition 3.6.
4.2 Proximal BCD (block coordinate descent)
In this section, we apply a proximal block coordinate descent method to solve the following variant of (1.1) and present its iteration complexity:
where f is differentiable, \(r_i\) is nonsmooth, and \(\mathcal {X}_i\subset \mathbb {R}^{n_i}\) is a closed convex set for \(i=1,2,\ldots ,N\). Note that f and \(r_i\) can be nonconvex functions. Our proximal BCD method for solving (4.12) is described in Algorithm 4.
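To illustrate the flavor of one block update of Algorithm 4, consider the linearized variant (cf. Remark 3.14) with the illustrative choices \(r_i=\lambda \Vert \cdot \Vert _1\), \(\mathcal {X}_i=\mathbb {R}^{n_i}\), and a diagonal \(H_i\); these choices are our own assumptions, under which the update reduces to a soft-thresholding step:

```python
import numpy as np

def prox_bcd_step(x, grad_f, H_diag, lam_l1):
    # One linearized proximal BCD update for block x_i with r_i = lam*||.||_1
    # and H_i = diag(H_diag): solves, componentwise,
    #   min_u  grad_f*(u - x) + (H_diag/2)*(u - x)^2 + lam_l1*|u|
    t = x - grad_f / H_diag                      # gradient step in the H_i metric
    return np.sign(t) * np.maximum(np.abs(t) - lam_l1 / H_diag, 0.0)
```

Each block update is thus as cheap as one partial gradient evaluation followed by a componentwise shrinkage.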
Similar to the settings in Table 1, depending on the properties of \(r_i\) and \(\mathcal {X}_i\), the \(\epsilon \)-stationary solution for (4.12) is as follows.
Definition 4.4
\((x_1^*,\ldots ,x_N^*,\lambda ^*)\) is called an \(\epsilon \)-stationary solution for (4.12), if
-
(i)
\(r_i\) is Lipschitz continuous, \(\mathcal {X}_i\) is convex and compact, and for any \(x_i\in \mathcal {X}_i\), \(i=1,\ldots ,N\), it holds that (here \(g_i\in {\partial } r_i(x_i^*)\) denotes a generalized subgradient of \(r_i\))
$$\begin{aligned} \left( x_i-x_i^*\right) ^\top \left[ \nabla _i f(x_1^*,\ldots , x^*_N) + g_i \right] \ge - \epsilon ; \end{aligned}$$ -
(ii)
or, if \(r_i\) is lower semi-continuous, \(\mathcal {X}_i = \mathbb {R}^{n_i}\) for \(i=1,\ldots ,N\), it holds that
$$\begin{aligned} \mathop \mathrm{dist}\left( -\nabla _i f(x_1^*,\ldots , x^*_N), {\partial } r_i(x_i^*)\right) \le \epsilon . \end{aligned}$$
We now show that the iteration complexity of Algorithm 4 can be obtained from that of proximal ADMM-g. By introducing an auxiliary variable \(x_{N+1}\) and an arbitrary vector \(b\in \mathbb {R}^m\), problem (4.12) can be equivalently rewritten as
It is easy to see that applying proximal ADMM-g to solve (4.14) (with \(x_{N+1}\) being the last block variable) reduces exactly to Algorithm 4. Hence, we have the following iteration complexity result of Algorithm 4 for obtaining an \(\epsilon \)-stationary solution of (4.12).
Theorem 4.5
Suppose the sequence \(\{(x_1^k,\ldots ,x_N^k)\}\) is generated by proximal BCD (Algorithm 4). Denote
Letting
with \(\tau \) being defined in (3.18), and \(\hat{K} \in \mathop \mathrm{argmin}\nolimits _{1\le k \le K} \sum _{i=1}^N \Vert x_i^k - x_i^{k+1} \Vert ^2 \), we have that \((x_1^{\hat{K}}, \ldots , x_N^{\hat{K}})\) is an \(\epsilon \)-stationary solution for problem (4.12).
Proof
Note that \(A_1 = \cdots = A_N = 0\) and \(A_{N + 1} = I\) in problem (4.14). By applying proximal ADMM-g with \(\beta > \max \left\{ 18L, \; \max \nolimits _{1\le i\le N}\left\{ \frac{6L^2}{ \sigma _{\min }(H_i) }\right\} \right\} \), Theorem 3.12 holds. In particular, (3.24) and (3.25) are valid in different settings with \(\beta \sqrt{N} \max \nolimits _{i+1 \le j \le N+1} \left[ \Vert A_j\Vert _2\right] \Vert A_i\Vert _2 = 0\) for \(i = 1,\ldots , N\), which leads to the choices of \(\kappa _5\) and \(\kappa _6\) in the above. Moreover, we do not need to consider the optimality with respect to \(x_{N+1}\) and the violation of the affine constraints, thus \(\kappa _1\) and \(\kappa _2\) in Theorem 3.12 are excluded in the expression of K, and the conclusion follows.\(\square \)
5 Numerical experiments
5.1 Robust tensor PCA model
We consider the following nonconvex and nonsmooth robust tensor PCA model with \(\ell _1\)-norm regularization for a third-order tensor of dimension \(I_1 \times I_2 \times I_3\). Given an initial estimate R of the CP-rank, we aim to solve the following problem:
where \(A \in \mathbb {R}^{I_1\times R}\), \(B \in \mathbb {R}^{I_2\times R}\), \(C \in \mathbb {R}^{I_3\times R}\). The augmented Lagrangian function of (5.1) is given by
The following identities are useful for our presentation later:
where \({Z}_{(i)}\) stands for the mode-i unfolding of tensor \(\mathcal {Z}\) and \(\odot \) stands for the Khatri-Rao product of matrices.
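These identities can be checked numerically. Below is a small NumPy sketch; the Fortran-order unfolding convention is our own choice, made so that \(Z_{(1)} = A(C\odot B)^\top \) holds for \(\mathcal {Z}=\sum _r a_r\circ b_r\circ c_r\):

```python
import numpy as np

def khatri_rao(C, B):
    # Column-wise Kronecker product: result has shape (I3*I2, R)
    R = B.shape[1]
    return np.stack([np.kron(C[:, r], B[:, r]) for r in range(R)], axis=1)

def unfold(Z, mode):
    # Mode-n unfolding: move the chosen mode first, flatten the rest
    # in Fortran order (first remaining index varies fastest)
    return np.reshape(np.moveaxis(Z, mode, 0), (Z.shape[mode], -1), order='F')

def cp_tensor(A, B, C):
    # Build Z = sum_r a_r ∘ b_r ∘ c_r via the identity Z_(1) = A (C ⊙ B)^T
    I1, I2, I3 = A.shape[0], B.shape[0], C.shape[0]
    Z1 = A @ khatri_rao(C, B).T
    return np.reshape(Z1, (I1, I2, I3), order='F')
```

With these conventions, `unfold(cp_tensor(A, B, C), 0)` coincides with `A @ khatri_rao(C, B).T`, which is exactly the identity used in the block updates.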
Note that there are six block variables in (5.1), and we choose \(\mathcal {B}\) as the last block variable. A typical iteration of proximal ADMM-g for solving (5.1) can be described as follows (we choose \(H_i=\delta _i I\), with \(\delta _i>0\), \(i=1,\ldots ,5\)):
where \(\circ \) is the matrix Hadamard product and \(\mathcal {S}\) stands for the soft shrinkage operator. The updates in proximal ADMM-m are almost the same as those in proximal ADMM-g, except that \({B}_{(1)}\) is updated as
On the other hand, note that (5.1) can be equivalently written as
which can be solved by the classical BCD method as well as our proximal BCD method (Algorithm 4). In addition, we can apply GCG (Algorithm 1) to a variant of (5.1). Note that GCG requires a compact constraint set and thus does not apply to (5.1) directly. As a result, we consider the following variant of (5.1), in which new quadratic regularization terms are added to the objective to help construct compact constraint sets.
The new parameter \(\rho _1\) can be identified by the following observation:
which implies that \(\rho _1 = \sqrt{\frac{2\alpha _{\mathcal {N}}}{\alpha _A}} \Vert \mathcal {T}\Vert _F\). Other parameters \(\rho _2,\ldots ,\rho _5\) can be computed in the same manner.
In the following we shall compare the numerical performance of GCG, BCD, proximal BCD, proximal ADMM-g and proximal ADMM-m for solving (5.1). We let \(\alpha = 2/ \max \{\sqrt{I_1}, \sqrt{I_2}, \sqrt{I_3} \}\) and \(\alpha _{\mathcal {N}}=1\) in model (5.1). We apply proximal ADMM-g and proximal ADMM-m to solve (5.1), apply BCD and proximal BCD to solve (5.2), and apply GCG to solve (5.3) with \(\alpha _A = \alpha _B = \alpha _C = 10\) and \(\alpha _{Z}=1\). In all algorithms we set the maximum iteration number to 2000, and the algorithms are terminated either when the maximum iteration number is reached or when \(\theta _k\) as defined in (3.20) falls below \(10^{-6}\). The parameters used in the two ADMM variants are specified in Table 2.
In the experiment, we randomly generate 20 instances for fixed tensor dimension and CP-rank. Suppose the low-rank part \(\mathcal {Z}^0\) is of rank \(R_{ CP}\). It is generated by
where the vectors \(a^{i,r}\) are drawn from the standard Gaussian distribution for \(i=1,2,3\), \(r = 1,\ldots ,R_{ CP}\). Moreover, a sparse tensor \(\mathcal {E}^0\) is generated with cardinality \(0.001\cdot I_1 I_2 I_3\) such that each nonzero component follows the standard Gaussian distribution. Finally, we generate the noise \(\mathcal {B}^0 = 0.001* \hat{\mathcal {B}}\), where \(\hat{\mathcal {B}}\) is a Gaussian tensor. We then set \(\mathcal {T} = \mathcal {Z}^0 + \mathcal {E}^0 +\mathcal {B}^0\) as the observed data in (5.1). A proper initial guess R of the true rank \(R_{ CP}\) is essential for the success of our algorithms. We can borrow the strategy used in matrix completion [54]: start from a large R (\(R \ge R_{ CP}\)) and decrease it aggressively once a dramatic change in the recovered tensor \(\mathcal {Z}\) is observed. We report the average performance over 20 instances of the algorithms with initial guesses \(R = R_{ CP}\), \(R = R_{ CP} + 1\) and \(R = R_{ CP} + \lceil 0.2*R_{ CP} \rceil \) in Tables 3, 4 and 5, respectively.
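The instance generation just described can be sketched in NumPy as follows (the function name, the seed argument, and the decision to also return \(\mathcal {E}^0\) are ours):

```python
import numpy as np

def generate_instance(I1, I2, I3, R_cp, seed=0):
    rng = np.random.default_rng(seed)
    # Low-rank part: Z0 = sum_r a_r ∘ b_r ∘ c_r with Gaussian factors
    Z0 = np.zeros((I1, I2, I3))
    for _ in range(R_cp):
        a = rng.standard_normal(I1)
        b = rng.standard_normal(I2)
        c = rng.standard_normal(I3)
        Z0 += np.einsum('i,j,k->ijk', a, b, c)
    # Sparse part: cardinality 0.001 * I1*I2*I3, Gaussian nonzeros
    E0 = np.zeros(I1 * I2 * I3)
    nnz = int(0.001 * E0.size)
    idx = rng.choice(E0.size, size=nnz, replace=False)
    E0[idx] = rng.standard_normal(nnz)
    E0 = E0.reshape(I1, I2, I3)
    # Dense Gaussian noise scaled by 0.001
    B0 = 0.001 * rng.standard_normal((I1, I2, I3))
    return Z0 + E0 + B0, Z0, E0
```

The observed tensor is then the first return value, against which the relative recovery error \(\Vert \mathcal {Z}^*-\mathcal {Z}^0\Vert _F/\Vert \mathcal {Z}^0\Vert _F\) is measured.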
In Tables 3, 4 and 5, “Err.” denotes the averaged relative error \(\frac{\Vert \mathcal {Z}^*-\mathcal {Z}^0 \Vert _F}{\Vert \mathcal {Z}^0 \Vert _F}\) of the low-rank tensor over 20 instances, where \(\mathcal {Z}^*\) is the solution returned by the corresponding algorithm; “Iter.” denotes the averaged number of iterations over 20 instances; “#” records the number of solutions (out of 20 instances) that have relative error less than 0.01.
Tables 3, 4 and 5 suggest that BCD mostly converges to a local solution rather than the global optimal solution, GCG easily gets stuck at a local solution in a few iterations for this particular problem, while the other three methods are much better in finding the global optimum.
It is interesting to note that the results presented in Table 5 are better than those in Tables 3 and 4, where a larger basis is allowed in the tensor factorization. Moreover, in this case, proximal BCD usually requires fewer iterations than the two ADMM variants.
5.2 Computing the leading sparse principal component of tensor
In this subsection, we consider the problem (1.4) of finding the leading sparse principal component of a given tensor. To apply the GCG method in the previous section, we adopt \(\Vert \cdot \Vert _1\) as regularizer, and arrive at the following formulation
The subproblem in GCG takes the form \(\min _{\Vert y\Vert _2^2 \le 1} \{ -y^{\top }b + \rho \Vert y\Vert _1 \}\), which has a closed-form solution
where \(z(j)= \mathrm{sign}(b(j))\max \{ |b(j)| - \rho , 0 \}\;\forall \; j=1,2,\ldots , n\).
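A direct implementation of this closed-form solution might look as follows; the normalization \(y^*=z/\Vert z\Vert _2\) when \(z\ne 0\) (and \(y^*=0\) otherwise) is our reading of the elided display:

```python
import numpy as np

def gcg_subproblem(b, rho):
    """Solve min_{||y||_2 <= 1} -y^T b + rho*||y||_1 in closed form.

    Soft-threshold b at level rho; if the result z is nonzero, the
    minimizer is z / ||z||_2, otherwise y = 0 is optimal.
    """
    z = np.sign(b) * np.maximum(np.abs(b) - rho, 0.0)
    nz = np.linalg.norm(z)
    return z / nz if nz > 0 else z
```

This follows from \(\max _{\Vert y\Vert _2\le 1} \{ b^{\top }y - \rho \Vert y\Vert _1 \} = \Vert z\Vert _2\) by the Cauchy–Schwarz inequality applied componentwise.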
One undesirable property of the formulation (5.4) is that we may possibly obtain a zero solution, i.e., \(x_i=0\) for some i, which leads to \(\mathcal {T} (x_1, x_2, \ldots , x_d)=0\). To prevent this from happening, we also apply the BCD method and the proximal BCD method to the following equality constrained problem:
and compare the results with those returned by our proposed algorithms in Table 6.
In the tests, we let \(\alpha =0.85\) and set the maximum iteration number to 2000. For each fixed dimension, we randomly generate 10 instances of fourth-order tensors, and the corresponding problems are solved by the three methods starting from the same initial point. In Table 6, 'Val.' refers to the value \(\mathcal {T} (x_1, x_2, \ldots , x_d)\). From this table, we see that GCG is capable of finding a nonzero local optimum within a few hundred steps in most cases, with reasonable sparsity. The three approaches in Table 6 are comparable to each other in terms of the value \(\mathcal {T} (x_1, x_2, \ldots , x_d)\), but BCD consumes the maximum 2000 iterations in quite a few instances, while GCG finds the best local optimum in a few instances (e.g. instances 6 and 9 for \(n=8\), and instances 5 and 8 for \(n=20\)).
References
Allen, G.: Sparse higher-order principal components analysis. In: The 15th International Conference on Artificial Intelligence and Statistics (2012)
Ames, B., Hong, M.: Alternating direction method of multipliers for penalized zero-variance discriminant analysis. Comput. Optim. Appl. 64(3), 725–754 (2016). https://doi.org/10.1007/s10589-016-9828-y
Attouch, H., Bolte, J., Redont, P., Soubeyran, A.: Proximal alternating minimization and projection methods for nonconvex problems: an approach based on the Kurdyka–Łojasiewicz inequality. Math. Oper. Res. 35(2), 438–457 (2010)
Bach, F.: Duality between subgradient and conditional gradient methods. SIAM J. Optim. 25(1), 115–129 (2015)
Beck, A., Shtern, S.: Linearly convergent away-step conditional gradient for nonstrongly convex functions. Math. Program. 164(1–2), 1–27 (2017)
Bian, W., Chen, X.: Worst-case complexity of smoothing quadratic regularization methods for non-Lipschitzian optimization. SIAM J. Optim. 23, 1718–1741 (2013)
Bian, W., Chen, X., Ye, Y.: Complexity analysis of interior point algorithms for non-Lipschitz and nonconvex minimization. Math. Program. 149, 301–327 (2015)
Bolte, J., Daniilidis, A., Lewis, A.: The Łojasiewicz inequality for nonsmooth subanalytic functions with applications to subgradient dynamical systems. SIAM J. Optim. 17, 1205–1223 (2006)
Bolte, J., Daniilidis, A., Lewis, A., Shiota, M.: Clarke subgradients of stratifiable functions. SIAM J. Optim. 18, 556–572 (2007)
Bolte, J., Daniilidis, A., Ley, O., Mazet, L.: Characterizations of Łojasiewicz inequalities: subgradient flows, talweg, convexity. Trans. Am. Math. Soc. 362(6), 3319–3363 (2010)
Bolte, J., Sabach, S., Teboulle, M.: Proximal alternating linearized minimization for nonconvex and nonsmooth problems. Math. Program. 146, 459–494 (2014)
Boyd, S., Parikh, N., Chu, E., Peleato, B., Eckstein, J.: Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 3(1), 1–122 (2011)
Bredies, K.: A forward-backward splitting algorithm for the minimization of non-smooth convex functionals in Banach space. Inverse Probl. 25(1), 711–723 (2009)
Bredies, K., Lorenz, D.A., Maass, P.: A generalized conditional gradient method and its connection to an iterative shrinkage method. Comput. Optim. Appl. 42(2), 173–193 (2009)
Candès, E.J., Wakin, M.B., Boyd, S.P.: Enhancing sparsity by reweighted \(\ell _1\) minimization. J. Fourier Anal. Appl. 14(5–6), 877–905 (2008)
Cartis, C., Gould, N.I.M., Toint, P.L.: On the complexity of steepest descent, Newton's and regularized Newton's methods for nonconvex unconstrained optimization. SIAM J. Optim. 20(6), 2833–2852 (2010)
Cartis, C., Gould, N.I.M., Toint, P.L.: Adaptive cubic overestimation methods for unconstrained optimization. Part II: worst-case function-evaluation complexity. Math. Program. Ser. A 130(2), 295–319 (2011)
Cartis, C., Gould, N.I.M., Toint, P.L.: An adaptive cubic regularization algorithm for nonconvex optimization with convex constraints and its function-evaluation complexity. IMA J. Numer. Anal. 32, 1662–1695 (2012)
Chen, X., Ge, D., Wang, Z., Ye, Y.: Complexity of unconstrained \(l_2\)-\(l_p\) minimization. Math. Program. 143, 371–383 (2014)
Curtis, F., Robinson, D.P., Samadi, M.: A trust region algorithm with a worst-case iteration complexity of \(\mathcal {O}(\epsilon ^{-3/2})\) for nonconvex optimization. Math. Program. 162, 1–32 (2017)
Devolder, O., François, G., Nesterov, Yu.: First-order methods of smooth convex optimization with inexact oracle. Math. Program. Ser. A 146, 37–75 (2014)
Dutta, J., Deb, K., Tulshyan, R., Arora, R.: Approximate KKT points and a proximity measure for termination. J. Glob. Optim. 56, 1463–1499 (2013)
Fan, J., Li, R.: Variable selection via nonconcave penalized likelihood and its oracle properties. J. Am. Stat. Assoc. 96(456), 1348–1360 (2001)
Frank, M., Wolfe, P.: An algorithm for quadratic programming. Nav. Res. Logist. Q. 3, 95–110 (1956)
Freund, R.M., Grigas, P.: New analysis and results for the Frank–Wolfe method. Math. Program. 155, 199–230 (2016)
Gao, X., Jiang, B., Zhang, S.: On the information-adaptive variants of the ADMM: an iteration complexity perspective. J. Sci. Comput. 76, 327–363 (2018)
Ge, D., He, R., He, S.: A three criteria algorithm for \(l_2-l_p\) minimization problem with linear constraints. Math. Program. 166(1), 131–158 (2017)
Ghadimi, S., Lan, G., Zhang, H.: Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization. Math. Program. 155(1), 1–39 (2016)
Gong, P., Zhang, C., Lu, Z., Huang, J., Ye, J.: A general iterative shrinkage and thresholding algorithm for non-convex regularized optimization problems. In: ICML, pp. 37–45 (2013)
Harchaoui, Z., Juditsky, A., Nemirovski, A.: Conditional gradient algorithms for norm-regularized smooth convex optimization. Math. Program. 152, 75–112 (2015)
Hong, M.: A distributed, asynchronous and incremental algorithm for nonconvex optimization: an ADMM based approach. IEEE Trans. Control Netw. Syst. 5(3), 935–945 (2018)
Hong, M.: Decomposing linearly constrained nonconvex problems by a proximal primal dual approach: algorithms, convergence, and applications. arXiv:1604.00543 (2016)
Hong, M., Luo, Z.-Q., Razaviyayn, M.M.: Convergence analysis of alternating direction method of multipliers for a family of nonconvex problems. SIAM J. Optim. 26(1), 337–364 (2016)
Jaggi, M.: Revisiting Frank–Wolfe: projection-free sparse convex optimization. In: ICML (2013)
Jiang, B., Yang, F., Zhang, S.: Tensor and its Tucker core: the invariance relationships. Numer. Linear Algebra Appl. 24(3), e2086 (2017)
Kurdyka, K.: On gradients of functions definable in o-minimal structures. Ann. Inst. Fourier 146, 769–783 (1998)
Lan, G., Zhou, Y.: Conditional gradient sliding for convex optimization. SIAM J. Optim. 26(2), 1379–1409 (2016)
Li, G., Pong, T.K.: Global convergence of splitting methods for nonconvex composite optimization. SIAM J. Optim. 25(4), 2434–2460 (2015)
Lin, T., Ma, S., Zhang, S.: Global convergence of unmodified 3-block ADMM for a class of convex minimization problems. J. Sci. Comput 76, 69–88 (2018)
Lin, T., Ma, S., Zhang, S.: Iteration complexity analysis of multi-block ADMM for a family of convex minimization without strong convexity. J. Sci. Comput. 69(1), 52–81 (2016)
Liu, Y., Ma, S., Dai, Y., Zhang, S.: A smoothing SQP framework for a class of composite \(\ell _q\) minimization over polyhedron. Math. Program. 158(1), 467–500 (2016)
Łojasiewicz, S.: Une propriété topologique des sous-ensembles analytiques réels, Les Équations aux Dérivées Partielles. Éditions du Centre National de la Recherche Scientifique, Paris (1963)
Lacoste-Julien, S.: Convergence rate of Frank–Wolfe for non-convex objectives. Preprint arXiv:1607.00345 (2016)
Lafond, J., Wai, H.-T., Moulines, E.: On the Online Frank–Wolfe algorithms for convex and non-convex optimizations. Preprint arXiv:1510.01171 (2015)
Luss, R., Teboulle, M.: Conditional gradient algorithms for rank one matrix approximations with a sparsity constraint. SIAM Rev. 55, 65–98 (2013)
Martínez, J.M., Raydan, M.: Cubic-regularization counterpart of a variable-norm trust-region method for unconstrained minimization. J. Glob. Optim. 68, 367–385 (2017)
Mu, C., Zhang, Y., Wright, J., Goldfarb, D.: Scalable robust matrix recovery: Frank–Wolfe meets proximal methods. SIAM J. Sci. Comput. 38(5), 3291–3317 (2016)
Nesterov, Y.: Introductory Lectures on Convex Optimization. Applied Optimization. Kluwer Academic Publishers, Boston, MA (2004)
Ngai, H.V., Luc, D.T., Théra, M.: Extensions of Fréchet \(\epsilon \)-subdifferential calculus and applications. J. Math. Anal. Appl. 268, 266–290 (2002)
Rockafellar, R.T., Wets, R.: Variational Analysis. Volume 317 of Grundlehren der Mathematischen Wissenschaften. Springer, Berlin (1998)
Shen, Y., Wen, Z., Zhang, Y.: Augmented Lagrangian alternating direction method for matrix separation based on low-rank factorization. Optim. Methods Softw. 29(2), 239–263 (2014)
Wang, F., Cao, W., Xu, Z.: Convergence of multiblock Bregman ADMM for nonconvex composite problems. Preprint arXiv:1505.03063 (2015)
Wang, Y., Yin, W., Zeng, J.: Global convergence of ADMM in nonconvex nonsmooth optimization. J. Sci. Comput. 1–35 (2018)
Wen, Z., Yin, W., Zhang, Y.: Solving a low-rank factorization model for matrix completion by a nonlinear successive over-relaxation algorithm. Math. Program. Comput. 4(4), 333–361 (2012)
Xu, Y.: Alternating proximal gradient method for sparse nonnegative Tucker decomposition. Math. Program. Comput. 7(1), 39–70 (2015)
Yang, L., Pong, T.K., Chen, X.: Alternating direction method of multipliers for a class of nonconvex and nonsmooth problems with applications to background/foreground extraction. SIAM J. Imaging Sci. 10, 74–110 (2017)
Yu, Y., Zhang, X., Schuurmans, D.: Generalized conditional gradient for sparse estimation. Preprint arXiv:1410.4828v1 (2014)
Zhang, C.-H.: Nearly unbiased variable selection under minimax concave penalty. Ann. Stat. 38(2), 894–942 (2010)
Zhang, T.: Analysis of multi-stage convex relaxation for sparse regularization. J. Mach. Learn. Res. 11, 1081–1107 (2010)
Zhang, T.: Multi-stage convex relaxation for feature selection. Bernoulli 19(5B), 2277–2293 (2013)
Acknowledgements
We would like to thank Professor Renato D. C. Monteiro and two anonymous referees for their insightful comments, which helped improve this paper significantly.
Additional information
Bo Jiang: Research of this author was supported in part by NSFC Grants 11771269 and 11831002, and Program for Innovative Research Team of Shanghai University of Finance and Economics. Shiqian Ma: Research of this author was supported in part by a startup package in Department of Mathematics at UC Davis. Shuzhong Zhang: Research of this author was supported in part by the National Science Foundation (Grant CMMI-1462408), and in part by Shenzhen Fundamental Research Fund under Grant No. KQTD2015033114415450.
Cite this article
Jiang, B., Lin, T., Ma, S. et al. Structured nonconvex and nonsmooth optimization: algorithms and iteration complexity analysis. Comput Optim Appl 72, 115–157 (2019). https://doi.org/10.1007/s10589-018-0034-y
Keywords
- Structured nonconvex optimization
- \(\epsilon \)-Stationary
- Iteration complexity
- Conditional gradient method
- Alternating direction method of multipliers
- Block coordinate descent method