Abstract
The goal of this paper is to study approaches to bridge the gap between first-order and second-order type methods for composite convex programs. Our key observations are: (1) Many well-known operator splitting methods, such as forward–backward splitting and Douglas–Rachford splitting, actually define a fixed-point mapping; (2) The optimal solutions of the composite convex program and the solutions of a system of nonlinear equations derived from the fixed-point mapping are equivalent. Solving this kind of system of nonlinear equations enables us to develop second-order type methods. These nonlinear equations may be non-differentiable, but they are often semi-smooth and their generalized Jacobian matrix is positive semidefinite due to monotonicity. By combining a regularization approach with a known hyperplane projection technique, we propose an adaptive semi-smooth Newton method and establish its convergence to global optimality. Preliminary numerical results on \(\ell _1\)-minimization problems demonstrate that our second-order type algorithms are able to achieve superlinear or quadratic convergence.
1 Introduction
This paper aims to solve a composite convex optimization problem in the form
$$\begin{aligned} \min _{x\in \mathbb {R}^n}\ f(x)+h(x), \end{aligned}$$(1.1)
where f and h are extended real-valued convex functions. Problem (1.1) arises in a wide variety of applications, such as signal recovery, image processing, machine learning, and data analysis. For example, it becomes a sparse optimization problem when f or h equals the \(\ell _1\)-norm, which has attracted significant interest in signal and image processing in recent years. If f is a loss function associated with linear predictors and h is a regularization function, problem (1.1) is often referred to as the regularized risk minimization problem in machine learning and statistics. When f or h is the indicator function of a convex set, problem (1.1) represents a general convex constrained optimization problem.
Recently, a series of first-order methods, including the forward–backward splitting (FBS) (also known as proximal gradient) methods, Nesterov’s accelerated methods, the alternating direction method of multipliers (ADMM), and the Douglas–Rachford splitting (DRS) and Peaceman–Rachford splitting (PRS) methods, have been extensively studied and widely used for solving problem (1.1). The readers are referred to, for example, [3, 6] and the references therein for a review of some of these first-order methods. One main feature of these methods is that they first exploit the underlying problem structures and then construct subproblems that can be solved relatively efficiently. These algorithms are rather simple yet powerful: they are easy to implement in many interesting applications, and they often converge quickly to a solution of moderate accuracy. However, a notorious drawback is that they may suffer from slow tail convergence, and hence a significantly larger number of iterations is needed to achieve high accuracy.
A few Newton-type methods for some special instances of problem (1.1) have been investigated to alleviate this inherent weakness of first-order type methods. Most existing Newton-type methods for problem (1.1), which assume a differentiable function f and a simple function h whose proximal mapping can be evaluated cheaply, are to some extent based on the FBS method. The proximal Newton method [18, 26] can be interpreted as a generalization of the proximal gradient method: in each iteration, it composes the proximal mapping with a Newton or quasi-Newton step. The semi-smooth Newton methods proposed in [4, 16, 22] solve the nonsmooth formulation of the optimality conditions corresponding to the FBS method. In [43], the augmented Lagrangian method is applied to solve the dual formulation of general linear semidefinite programming problems, where each augmented Lagrangian function is minimized by the semi-smooth Newton-CG method. Similarly, a proximal point algorithm is developed in [5] to solve the dual problems of a class of matrix spectral norm approximation problems, where the subproblems are again handled by the semi-smooth Newton-CG method.
In this paper, we study a few second-order type methods for problem (1.1) in a general setting, even when f is nonsmooth and h is an indicator function. Our key observations are that many first-order methods, such as the FBS and DRS methods, can be written as fixed-point iterations, and that the optimal solutions of (1.1) are also the solutions of a system of nonlinear equations defined by the corresponding fixed-point mapping. Consequently, our idea is to develop second-order type algorithms based on solving this system of nonlinear equations. These nonlinear equations are often non-differentiable, but they are semi-smooth due to the properties of the proximal mappings. Hence, we propose a regularized semi-smooth Newton method to solve the system of nonlinear equations. Although the semi-smooth Newton method was extensively studied in the 1990s [9, 25, 27, 28, 34, 37], our usage differs in several aspects. Firstly, our regularization parameter is updated by a self-adaptive strategy similar to that of trust region algorithms. The regularization term is important since the generalized Jacobian matrix corresponding to monotone equations may only be positive semidefinite. Secondly, our approach keeps using the semi-smooth Newton step as much as possible, as long as the residual is reduced sufficiently compared with the last Newton step. Otherwise, a hyperplane projection technique proposed in [32, 33] is invoked to ensure global convergence to an optimal solution of problem (1.1). When certain conditions are satisfied, we prove that the semi-smooth Newton steps are always performed close to the optimal solutions. Consequently, a fast local convergence rate is established. Of course, the projection step may be replaced by other globalization techniques for certain problems. Thirdly, the computational cost can be further reduced in some cases if the Jacobian matrix is approximated by the limited memory BFGS (L-BFGS) method.
Our main contribution is the study of some relationships between the first-order and second-order type methods. Our semi-smooth Newton methods are able to solve the general convex composite problem (1.1) as long as a fixed-point mapping is well defined. In particular, our methods are applicable to constrained convex programs, such as the constrained \(\ell _1\)-minimization problem. In contrast, the Newton-type methods in [4, 16, 18, 22, 26] are designed for unconstrained problems. Unlike the methods in [5, 43], which apply the semi-smooth Newton method to a sequence of subproblems, our target is a single system of nonlinear equations. Although solving the Newton system is a major challenge, the computational cost can usually be controlled reasonably well when certain structures are utilized. Since the hyperplane projection technique in [32, 33] often slows down the numerical convergence of the Newton steps, we activate this step only when necessary, so the numerical efficiency is still determined by the Newton steps. Our preliminary numerical results show that the proposed methods are able to reach superlinear or quadratic convergence rates on typical \(\ell _1\)-minimization problems.
The rest of this paper is organized as follows. In Sect. 2, we review a few popular operator splitting methods, derive their equivalent fixed-point iterations and discuss their semi-smooth and monotone properties. We propose a semi-smooth Newton method and establish its convergence results in Sect. 3. Numerical results on a number of applications are presented in Sect. 4. Finally, we conclude this paper in Sect. 5.
1.1 Notations
Let I be the identity operator or identity matrix of suitable size. Given a convex function \(f:\mathbb {R}^n\rightarrow (-\infty ,+\infty ]\) and a scalar \(t>0\), the proximal mapping of f is defined by
$$\begin{aligned} \mathbf {prox}_{tf}(x):=\mathop {\mathrm {arg\,min}}\limits _{u\in \mathbb {R}^n}\left\{ f(u)+\frac{1}{2t}\Vert u-x\Vert _2^2\right\} . \end{aligned}$$
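For instance, for \(f=\Vert \cdot \Vert _1\) the proximal mapping reduces to the well-known soft-thresholding operator, which is available in closed form. A minimal Python sketch (the name prox_l1 is ours):

```python
import numpy as np

def prox_l1(x, t):
    """Proximal mapping of t*||.||_1, i.e. the soft-thresholding operator:
    argmin_u ||u||_1 + (1/(2t))*||u - x||^2 = sign(x) * max(|x| - t, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# soft-thresholding shrinks large entries toward zero by t and zeroes out small ones
print(prox_l1(np.array([3.0, -0.5, 1.0]), 1.0))  # -> [ 2. -0.  0.]
```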
The Fenchel conjugate function \(f^*\) of f is
$$\begin{aligned} f^*(y):=\sup _{x\in \mathbb {R}^n}\left\{ \left\langle {y},{x}\right\rangle -f(x)\right\} . \end{aligned}$$
A function f is said to be closed if its epigraph is closed, or equivalently, if f is lower semicontinuous. A mapping \(F:\mathbb {R}^n\rightarrow \mathbb {R}^n\) is said to be monotone if
$$\begin{aligned} \left\langle {F(x)-F(y)},{x-y}\right\rangle \ge 0,\quad \forall x,y\in \mathbb {R}^n. \end{aligned}$$
The mapping F is called nonexpansive (contractive) if F is Lipschitz continuous with constant \(L\le 1\) (\(L<1\)), and firmly nonexpansive if
$$\begin{aligned} \Vert F(x)-F(y)\Vert _2^2\le \left\langle {F(x)-F(y)},{x-y}\right\rangle ,\quad \forall x,y\in \mathbb {R}^n. \end{aligned}$$
The mapping F is called strongly monotone with modulus \(c>0\) if
$$\begin{aligned} \left\langle {F(x)-F(y)},{x-y}\right\rangle \ge c\Vert x-y\Vert _2^2,\quad \forall x,y\in \mathbb {R}^n. \end{aligned}$$
It is said that F is cocoercive with modulus \(\beta >0\) if
$$\begin{aligned} \left\langle {F(x)-F(y)},{x-y}\right\rangle \ge \beta \Vert F(x)-F(y)\Vert _2^2,\quad \forall x,y\in \mathbb {R}^n. \end{aligned}$$
The operator T is called \(\alpha \)-averaged \((\alpha \in (0,1])\) if there exists a nonexpansive operator R such that \(T = (1-\alpha )I + \alpha R\).
2 Preliminaries
In this section, we review the fixed-point characterizations of some operator splitting algorithms and discuss some important properties of the associated fixed-point mappings. This section consists of three subsections. Section 2.1 reviews some operator splitting algorithms for problem (1.1), including FBS, DRS, and ADMM. These algorithms are well studied in the literature; see [2, 6, 7, 12] for example. Most of the operator splitting algorithms can also be interpreted as fixed-point algorithms derived from certain optimality conditions. We then show in Sects. 2.2 and 2.3, respectively, that the fixed-point mappings derived from the operator splitting algorithms are semi-smooth and monotone.
2.1 Operator Splitting and Fixed-Point Algorithms
In problem (1.1), let h be a continuously differentiable function. The FBS algorithm is the iteration
$$\begin{aligned} x^{k+1}=\mathbf {prox}_{tf}\left( x^k-t\nabla h(x^k)\right) , \end{aligned}$$
where \(t>0\) is the step size. Define the following operator
$$\begin{aligned} T_{\text {FBS}}:=\mathbf {prox}_{tf}\circ \left( I-t\nabla h\right) . \end{aligned}$$
Then FBS can be viewed as the fixed-point iteration
$$\begin{aligned} x^{k+1}=T_{\text {FBS}}(x^k). \end{aligned}$$
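To make the fixed-point view concrete, the following Python sketch iterates \(T_{\text {FBS}}\) on a small \(\ell _1\)-regularized least-squares instance \(\min _x \mu \Vert x\Vert _1+\frac{1}{2}\Vert Ax-b\Vert _2^2\) and monitors the residual \(x-T_{\text {FBS}}(x)\), whose zeros are exactly the optimal solutions. The problem data and all names are our illustrative assumptions:

```python
import numpy as np

def prox_l1(x, t):
    # soft-thresholding: proximal mapping of t*||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20); x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true
mu = 0.1
t = 1.0 / np.linalg.norm(A, 2) ** 2          # t <= 2*beta keeps T_FBS nonexpansive

def T_FBS(x):
    # forward (gradient) step on h(x) = 0.5*||Ax-b||^2, backward step on mu*||x||_1
    return prox_l1(x - t * A.T @ (A @ x - b), t * mu)

F_FBS = lambda x: x - T_FBS(x)               # optimal solutions solve F_FBS(x) = 0

x = np.zeros(20)
for _ in range(5000):
    x = T_FBS(x)
print(np.linalg.norm(F_FBS(x)))              # residual is driven to (near) zero
```

Note the typical first-order behavior: the residual decreases monotonically but with a long geometric tail, which motivates the second-order methods of Sect. 3.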
The DRS algorithm solves (1.1) by the following update:
The algorithm can be traced back to [10, 11, 20], where it was used to solve partial differential equations (PDEs). The fixed-point iteration characterization of DRS is of the form
where
Consider a linearly constrained program:
where \(A_1\in \mathbb {R}^{m\times n_1}\) and \(A_2\in \mathbb {R}^{m\times n_2}\). The dual problem of (2.9) is given by
where
Assume that \(f_1\) and \(f_2\) are convex. It is well known that the DRS iteration applied to the dual problem (2.10) is the ADMM [14, 15]. The ADMM is regarded as a variant of the augmented Lagrangian method and has attracted much attention in numerous fields. A recent survey paper [3] describes applications of the ADMM in statistics and machine learning. The ADMM is equivalent to the following fixed-point iteration
where \(T_{\text {DRS}}\) is the DRS fixed-point mapping for problem (2.10).
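As an illustration of the DRS fixed-point view, one standard form of the mapping is \(T_{\text {DRS}}=I+\mathbf {prox}_{tf}\circ (2\,\mathbf {prox}_{th}-I)-\mathbf {prox}_{th}\). The Python sketch below iterates this form on \(\min _x \mu \Vert x\Vert _1+\frac{1}{2}\Vert x-b\Vert _2^2\), whose solution is known in closed form; the splitting order and all names are our assumptions, not necessarily the exact form used in this paper:

```python
import numpy as np

def prox_l1(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

mu, t = 0.5, 1.0
b = np.array([2.0, -0.3, 0.7])

prox_tf = lambda z: prox_l1(z, t * mu)        # f = mu*||.||_1
prox_th = lambda z: (z + t * b) / (1.0 + t)   # h = 0.5*||. - b||^2

def T_DRS(z):
    x = prox_th(z)
    y = prox_tf(2.0 * x - z)                  # reflect through x, then prox of f
    return z + y - x

z = np.zeros(3)
for _ in range(200):
    z = T_DRS(z)
x = prox_th(z)                                # recover the solution from the fixed point
print(x)                                      # close to soft-threshold(b, mu)
```

At a fixed point \(z^*\) of \(T_{\text {DRS}}\), the point \(x=\mathbf {prox}_{th}(z^*)\) satisfies \(0\in \partial f(x)+\partial h(x)\), so here x recovers the analytic solution, the soft-thresholding of b at level \(\mu \).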
2.2 Semi-Smoothness of Proximal Mapping
We now discuss the semi-smoothness of proximal mappings. This property often implies that the fixed-point mappings corresponding to operator splitting algorithms are semi-smooth or strongly semi-smooth.
Let \({\mathcal {O}}\subseteq \mathbb {R}^n\) be an open set and \(F:{\mathcal {O}}\rightarrow \mathbb {R}^m\) be a locally Lipschitz continuous function. Rademacher’s theorem says that F is almost everywhere differentiable. Let \(D_F\) be the set of differentiable points of F in \({\mathcal {O}}\). We next introduce the concept of the generalized differential.
Definition 2.1
Let \(F:{\mathcal {O}}\rightarrow \mathbb {R}^m\) be locally Lipschitz continuous at \(x\in {\mathcal {O}}\). The B-subdifferential of F at x is defined by
$$\begin{aligned} \partial _BF(x):=\left\{ \lim _{k\rightarrow \infty }F'(x^k)\ :\ x^k\in D_F,\ x^k\rightarrow x\right\} . \end{aligned}$$
The set \(\partial F(x)=\text {co}(\partial _BF(x))\) is called Clarke’s generalized Jacobian, where \(\text {co}\) denotes the convex hull.
The notion of semi-smoothness plays a key role in establishing locally superlinear convergence of nonsmooth Newton-type methods. Semi-smoothness was originally introduced by Mifflin [21] for real-valued functions and extended to vector-valued mappings by Qi and Sun [28].
Definition 2.2
Let \(F:{\mathcal {O}}\rightarrow \mathbb {R}^m\) be a locally Lipschitz continuous function. We say that F is semi-smooth at \(x\in {\mathcal {O}}\) if
-
(a)
F is directionally differentiable at x; and
-
(b)
for any \(d\in {\mathcal {O}}\) and \(J\in \partial F(x+d)\),
$$\begin{aligned} \Vert F(x+d)-F(x)-Jd\Vert _2=o(\Vert d\Vert _2)\quad \text{ as }\ d\rightarrow 0. \end{aligned}$$
Furthermore, F is said to be strongly semi-smooth at \(x\in {\mathcal {O}}\) if F is semi-smooth and for any \(d\in {\mathcal {O}}\) and \(J\in \partial F(x+d)\),
$$\begin{aligned} \Vert F(x+d)-F(x)-Jd\Vert _2=O(\Vert d\Vert _2^2)\quad \text{ as }\ d\rightarrow 0. \end{aligned}$$
Examples of semi-smooth functions include smooth functions, all convex functions (hence norms), and piecewise differentiable functions. Differentiable functions with Lipschitz gradients are strongly semi-smooth. For every \(p\in [1,\infty ]\), the norm \(\Vert \cdot \Vert _{p}\) is strongly semi-smooth. Piecewise affine functions are strongly semi-smooth. Examples of semi-smooth functions are studied thoroughly in [12, 37].
The basic properties of proximal mappings are well documented in textbooks such as [2, 29]. The proximal mapping \(\mathbf {prox}_{f}\), corresponding to a proper, closed and convex function \(f:\mathbb {R}^n\rightarrow \mathbb {R}\), is single-valued, maximal monotone and nonexpansive. Moreover, the proximal mappings of many interesting functions are (strongly) semi-smooth. It is worth mentioning, however, that proximal mappings are not semi-smooth in general [31].
We next review some existing results on the semi-smoothness of proximal mappings of various interesting functions. The proximal mapping of the \(\ell _1\)-norm \(\Vert x\Vert _1\), which is the well-known soft-thresholding operator, is component-wise separable and piecewise affine. Hence, the operator \(\mathbf {prox}_{\Vert \cdot \Vert _1}\) is strongly semi-smooth. According to Moreau’s decomposition, the proximal mapping of the \(\ell _{\infty }\)-norm (the conjugate of the \(\ell _1\)-norm) is also strongly semi-smooth. For \(k\in \mathbb {N}\), a function with k continuous derivatives is called a \(\mathcal {C}^k\) function. A function \(f:{\mathcal {O}}\rightarrow \mathbb {R}^m\) defined on the open set \({\mathcal {O}}\subseteq \mathbb {R}^n\) is called a piecewise \(\mathcal {C}^k\) function, \(k\in [1,\infty ]\), if f is continuous and if at every point \(\bar{x}\in {\mathcal {O}}\) there exist a neighborhood \(V\subset {\mathcal {O}}\) and a finite collection of \(\mathcal {C}^k\) functions \(f_i:V\rightarrow \mathbb {R}^m, i=1,\ldots ,N\), such that
$$\begin{aligned} f(x)\in \{f_1(x),\ldots ,f_N(x)\}\quad \text{ for } \text{ all }\ x\in V. \end{aligned}$$
For a comprehensive study of piecewise \(\mathcal {C}^k\) functions, the readers are referred to [30]. From [37, Proposition 2.26], if f is a piecewise \(\mathcal {C}^1\) (piecewise smooth) function, then f is semi-smooth; if f is a piecewise \(\mathcal {C}^2\) function, then f is strongly semi-smooth. As described in [26, Section 5], in many applications the proximal mappings are piecewise \(\mathcal {C}^1\) and thus semi-smooth. Metric projection, i.e., the proximal mapping of an indicator function, plays an important role in the analysis of constrained programs. The projection onto a polyhedral set is piecewise linear [29, Example 12.31] and hence strongly semi-smooth. Projections onto symmetric cones are proved to be strongly semi-smooth in [35].
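The piecewise affine case can be verified numerically: for the soft-thresholding operator, the error \(\Vert F(x+d)-F(x)-Jd\Vert _2\) with \(J\in \partial _BF(x+d)\) vanishes (up to rounding) for all small d, consistent with strong semi-smoothness. A small Python check under our own construction of a B-subdifferential element:

```python
import numpy as np

def prox_l1(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def jac_prox_l1(x, t):
    # an element of the B-subdifferential of soft-thresholding: a 0/1
    # diagonal matrix (at kinks |x_i| = t both branches are valid; we pick 0)
    return np.diag((np.abs(x) > t).astype(float))

t = 1.0
x = np.array([2.0, -0.5, 1.0])        # the third entry sits exactly at a kink
rng = np.random.default_rng(1)
for _ in range(100):
    d = 1e-4 * rng.standard_normal(3)
    J = jac_prox_l1(x + d, t)         # J is evaluated at x + d, as in Definition 2.2
    err = np.linalg.norm(prox_l1(x + d, t) - prox_l1(x, t) - J @ d)
    assert err < 1e-12                # piecewise affine: the error vanishes locally
print("semi-smoothness check passed")
```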
2.3 Monotonicity of Fixed-Point Mappings
This subsection discusses the monotonicity of the fixed-point mapping \(F:=I-T\), where \(T:\mathbb {R}^n\rightarrow \mathbb {R}^n\) is a fixed-point operator. Later, we will show that the monotonicity and nonexpansiveness of F play critical roles in our proposed method.
Proposition 2.3
-
(i)
Suppose that \(\nabla h\) is cocoercive with modulus \(\beta >0\), then \(F_{\text {FBS}}=I-T_{\text {FBS}}\) is monotone and \(\frac{2\beta }{4\beta -t}\)-averaged if \(0<t\le 2\beta \).
-
(ii)
Suppose that \(\nabla h\) is strongly monotone with modulus \(c>0\) and Lipschitz continuous with constant \(L>0\), then \(F_{\text {FBS}}\) is strongly monotone if \(0<t<2c/L^2\).
-
(iii)
Suppose that \(h\in C^2\), \(H(x):=\nabla ^2h(x)\) is positive semidefinite for any \(x\in \mathbb {R}^n\) and \(\bar{\lambda }=\max _x\lambda _{\text {max}}(H(x))<\infty \). Then, \(F_{\text {FBS}}\) is monotone if \(0<t\le 2/\bar{\lambda }\).
-
(iv)
The fixed-point mapping \(F_{\text {DRS}}:=I-T_{\text {DRS}}\) is monotone and firmly nonexpansive hence \(\frac{1}{2}\)-averaged.
-
(v)
If T is \(\alpha \)-averaged \((\alpha \in (0,1])\), then \(F = I - T\) is \(\frac{1}{2\alpha }\)-cocoercive.
Proof
Items (i) and (ii) are well known in the literature, see [8, 44] for example.
(iii) From the mean value theorem, for any \(x,y\in \mathbb {R}^n\), one has that
where \(\bar{H}(x,y)=\int _0^1H(y+s(x-y))\text{ d }s\) and thus \(\lambda _{max}(\bar{H}(x,y))\le \bar{\lambda }\). Hence, we obtain
which implies that \(\nabla h\) is cocoercive with modulus \(1/\bar{\lambda }\). Hence, the monotonicity follows from item (i).
(iv) It has been shown that the operator \(T_{\text {DRS}}\) is firmly nonexpansive, see [20]. Therefore, \(F_{\text {DRS}}\) is firmly nonexpansive and hence monotone [2, Proposition 4.2].
(v) The proof simply follows from the definition of the \(\alpha \)-average of T. \(\square \)
Items (i) and (ii) demonstrate that \(F_{\text {FBS}}\) is monotone as long as the step size t is properly selected. It is also shown in [44] that, when f is an indicator function of a closed convex set, the step size intervals in items (i) and (ii) can be enlarged to \((0,4\beta ]\) and \((0,4c/L^2)\), respectively. Finally, we introduce a useful lemma on the positive semidefiniteness of the subdifferential of a monotone mapping, see [17, Proposition 2.3].
Lemma 2.4
For a Lipschitz continuous mapping \(F:\mathbb {R}^n\rightarrow \mathbb {R}^n\), F is monotone if and only if each element of \(\partial _BF(x)\) is positive semidefinite for any \(x\in \mathbb {R}^n\).
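Lemma 2.4 can be sanity-checked numerically. For the FBS mapping of an \(\ell _1\)-regularized least-squares instance with \(t\le 2\beta \), an element of \(\partial _BF_{\text {FBS}}(x)\) is \(J=I-D(I-tA^TA)\), where D is the 0/1 diagonal B-subdifferential element of soft-thresholding; positive semidefiniteness here means \(\left\langle {d},{Jd}\right\rangle \ge 0\), i.e., the symmetric part of J is positive semidefinite. A Python sketch under these assumptions (all names are ours):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 12))
b = rng.standard_normal(30)
Q = A.T @ A
mu = 0.1
t = 1.0 / np.linalg.norm(A, 2) ** 2   # ensures t <= 2*beta, so F_FBS is monotone

def jac_F_FBS(x):
    # an element of the B-subdifferential of F_FBS: J = I - D (I - t Q),
    # with D the 0/1 diagonal B-subdifferential of the soft-thresholding prox
    g = x - t * (Q @ x - A.T @ b)
    D = (np.abs(g) > t * mu).astype(float)
    return np.eye(12) - D[:, None] * (np.eye(12) - t * Q)

for _ in range(20):
    J = jac_F_FBS(rng.standard_normal(12))
    sym_eigs = np.linalg.eigvalsh(0.5 * (J + J.T))
    assert sym_eigs.min() >= -1e-10   # <d, Jd> >= 0: J is positive semidefinite
print("PSD check passed")
```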
3 Semi-Smooth Newton Method for Nonlinear Monotone Equations
The purpose of this section is to design a Newton-type method for solving the system of nonlinear equations
$$\begin{aligned} F(z)=0, \end{aligned}$$(3.1)
where \(F:\mathbb {R}^n\rightarrow \mathbb {R}^n\) is semi-smooth and monotone. In particular, we are interested in \(F(z)=z-T(z)\), where T(z) is a fixed-point mapping corresponding to certain first-order type algorithms. Let \(Z^*:=\{z\in \mathbb {R}^n:F(z)=0\}\) denote the solution set to nonlinear equations (3.1). It is known that \(Z^*\) is a closed convex set if the mapping F is maximal monotone.
3.1 A Regularized Semi-Smooth Newton Method with Projection Steps
The system of monotone equations has various applications [1, 19, 24, 32, 46]. Inspired by the pioneering work [32], a class of iterative methods for solving nonlinear (smooth) monotone equations has been proposed in recent years [1, 19, 46]. In [32], the authors proposed a globally convergent Newton method by exploiting the structure of monotonicity, whose primary advantage is that the distances from the iterates to the solution set are decreasing. The method is extended in [45] to solve monotone equations without a nonsingularity assumption.
The main idea of [32] is as follows. Starting from an initial point \(z^0\), for an iterate \(z^k\), let \(d^k\) be a direction such that
where \(u^k=z^k+d^k\) is an intermediate iterate. By the monotonicity of F, for any \(z^*\in Z^*\) one has
$$\begin{aligned} \left\langle {F(u^k)},{z^*-u^k}\right\rangle \le 0. \end{aligned}$$
Therefore, the hyperplane
$$\begin{aligned} H_k:=\left\{ z\in \mathbb {R}^n:\left\langle {F(u^k)},{z-u^k}\right\rangle =0\right\} \end{aligned}$$
strictly separates \(z^k\) from the solution set \(Z^*\). Based on this fact, it was proposed in [32] to set the next iterate as
$$\begin{aligned} z^{k+1}=z^k-\frac{\left\langle {F(u^k)},{z^k-u^k}\right\rangle }{\Vert F(u^k)\Vert _2^2}F(u^k). \end{aligned}$$
It is easy to show that the point \(z^{k+1}\) is the projection of \(z^k\) onto the hyperplane \(H_k\). The hyperplane projection step is critical to construct a globally convergent method for solving the system of nonlinear monotone equations. By applying the same technique, we develop a globally convergent method for solving semi-smooth monotone equations (3.1).
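In this Python sketch (our naming), given an iterate \(z^k\), the trial point \(u^k\) and the value \(F(u^k)\), the next iterate is computed as the orthogonal projection of \(z^k\) onto the hyperplane \(H_k\):

```python
import numpy as np

def hyperplane_projection(z, u, Fu):
    """Project z onto H = {w : <F(u), w - u> = 0}.

    When <F(u), z - u> > 0, H strictly separates z from the solution
    set Z*, so this step never increases dist(z, Z*)."""
    step = np.dot(Fu, z - u) / np.dot(Fu, Fu)
    return z - step * Fu

z = np.array([1.0, 2.0, 0.0])
u = np.array([0.5, 1.0, 0.0])
Fu = np.array([0.2, 0.1, 0.4])
z_new = hyperplane_projection(z, u, Fu)
print(np.dot(Fu, z_new - u))   # ~0: the projected point lies on the hyperplane
```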
It has been demonstrated in Lemma 2.4 that each element of the B-subdifferential of a monotone and semi-smooth mapping is positive semidefinite. Hence, for an iterate \(z^k\), choosing an element \(J_k\in \partial _BF(z^k)\), it is natural to apply a regularized Newton method, which computes \(d^k\) from
$$\begin{aligned} (J_k+\mu _kI)d^k=-F^k, \end{aligned}$$(3.2)
where \(F^k=F(z^k)\), \(\mu _k=\lambda _k\Vert F^k\Vert _2\) and \(\lambda _k>0\) is a regularization parameter. The regularization term \(\mu _k I\) is chosen such that \(J_k+\mu _k I\) is invertible. From a computational point of view, it is practical to solve the linear system (3.2) inexactly. Define the residual
$$\begin{aligned} r^k:=(J_k+\mu _kI)d^k+F^k. \end{aligned}$$(3.3)
At each iteration, we seek a step \(d^k\) by solving (3.2) approximately such that
$$\begin{aligned} \Vert r^k\Vert _2\le \tau \min \{1,\lambda _k\Vert F^k\Vert _2\Vert d^k\Vert _2\}, \end{aligned}$$(3.4)
where \(0<\tau <1\) is some positive constant. Then a trial point is obtained as
$$\begin{aligned} u^k=z^k+d^k. \end{aligned}$$
If \(\Vert F(u^k)\Vert _2\) is sufficiently decreased, we take a Newton step. Specifically, let \(\xi _0=\Vert F(z^0)\Vert _2\). When \(\Vert F(u^k)\Vert _2 \le \nu \xi _{k}\) with \(0<\nu <1\), we call the Newton step successful and set
Otherwise, we take a safeguard step as follows. Define a ratio
Select some parameters \(0<\eta _1\le \eta _2<1\) and \(1<\gamma _1 <\gamma _2\). If \(\rho _k\ge \eta _1\), the iteration is said to be successful. Otherwise, the iteration is unsuccessful. Moreover, for a successful iteration, we take a hyperplane projection step when the residual of the projection step is non-increasing and take a fixed-point iteration when it is increasing. In summary, we set
where
The parameters \(\xi _{k+1}\) and \(\lambda _{k+1}\) are updated as
where \(\underline{\lambda }>0\) is a small positive constant. These parameters determine how aggressively the regularization parameter is increased after an unsuccessful iteration and how it is decreased when \(\rho _k\ge \eta _2\). The complete approach for solving (3.1) is summarized in Algorithm 1.
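To make the overall scheme concrete, the following Python sketch implements a simplified variant of Algorithm 1 on the FBS residual of an \(\ell _1\)-regularized least-squares problem. It is only a sketch under our own assumptions, not the exact Algorithm 1: the linear system is solved directly rather than inexactly, the \((\xi _k,\gamma _1,\gamma _2)\) bookkeeping is collapsed into a simple halve/quadruple rule for \(\lambda _k\), and all problem data, names and parameter values are ours.

```python
import numpy as np

def prox_l1(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ssn_fbs(A, b, mu, nu=0.9, eta1=0.1, eta2=0.9, lam=1.0,
            lam_min=1e-6, tol=1e-10, max_iter=500):
    """Simplified regularized semi-smooth Newton sketch for
    min_x mu*||x||_1 + 0.5*||Ax-b||^2, via the FBS residual F = I - T_FBS."""
    n = A.shape[1]
    Q, Atb = A.T @ A, A.T @ b
    t = 1.0 / np.linalg.norm(A, 2) ** 2

    def F(z):
        return z - prox_l1(z - t * (Q @ z - Atb), t * mu)

    z = np.zeros(n)
    Fz = F(z)
    for _ in range(max_iter):
        if np.linalg.norm(Fz) <= tol:
            break
        # generalized Jacobian J_k = I - D (I - t Q), D the 0/1 prox derivative
        g = z - t * (Q @ z - Atb)
        D = (np.abs(g) > t * mu).astype(float)
        J = np.eye(n) - D[:, None] * (np.eye(n) - t * Q)
        reg = lam * np.linalg.norm(Fz)             # mu_k = lambda_k * ||F^k||_2
        d = np.linalg.solve(J + reg * np.eye(n), -Fz)
        u = z + d
        Fu = F(u)
        if np.linalg.norm(Fu) <= nu * np.linalg.norm(Fz):
            z, Fz = u, Fu                          # successful Newton step
            lam = max(lam_min, 0.5 * lam)
            continue
        rho = -(Fu @ d) / (d @ d)                  # trust-region-like ratio
        if rho >= eta2:
            lam = max(lam_min, 0.5 * lam)          # model is accurate: relax
        elif rho < eta1:
            lam = min(1e12, 4.0 * lam)             # unsuccessful: strengthen
        if rho >= eta1:                            # successful safeguard step
            if Fu @ (z - u) > 0:                   # hyperplane projection step
                z = z - (Fu @ (z - u)) / (Fu @ Fu) * Fu
            else:                                  # fixed-point (FBS) step
                z = z - Fz
            Fz = F(z)
    return z, np.linalg.norm(Fz)

rng = np.random.default_rng(3)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20); x_true[:4] = [1.0, -2.0, 0.5, 3.0]
b = A @ x_true + 0.01 * rng.standard_normal(50)
z, res = ssn_fbs(A, b, mu=0.1)
print(res)    # final residual ||F(z)||_2
```

On well-conditioned instances the Newton branch dominates after a few iterations, which is the behavior the local analysis in Sect. 3.3 predicts.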
3.2 Global Convergence
We make the following assumption.
Assumption 3.1
The set \(Z^*\) of optimal solutions is nonempty. The function \(F:\mathbb {R}^n\rightarrow \mathbb {R}^n\) is monotone and the corresponding fixed point operator \(T = I -F\) is \(\alpha \)-averaged with \(\alpha \in (0,1]\).
The assumption that T is \(\alpha \)-averaged implies that F is Lipschitz continuous with modulus \(L\le 2\alpha \). Hence, we have \(\Vert J_k\Vert \le L\) for any \(k\ge 0\) and any \(J_k\in \partial _{B}F(z^k)\). Since the Lipschitz continuity of F is sufficient for global convergence, the semi-smoothness of F is actually not required in this subsection.
The following lemma demonstrates that the distance from \(z^k\) to \(Z^*\) decreases in a projection step. The proof follows directly from [32, Lemma 2.1], and it is omitted.
Lemma 3.2
For any \(z^*\in Z^*\) and any projection step, indexed by say k, we have that
Denote the index sets of Newton steps, projection steps, fixed-point steps and successful iterations, respectively, by
and
We next show that if there are only finitely many successful iterations, then all later iterates are optimal solutions.
Lemma 3.3
Suppose that Assumption 3.1 holds and the index set \(\mathcal {K}_{S}\) is finite. Then \(z^k=z^*\) for all sufficiently large k and \(F(z^*)=0\).
Proof
Denote the index of the last successful iteration by \(k_0\). By the definition of “successful” and “unsuccessful”, we have that \(\rho _k<\eta _1\) for all \(k>k_0\). Therefore, from (3.7) and (3.9), one has that \(z^{k_0+i}=z^{k_0+1}:=z^*\), for all \(i\ge 1\) and additionally \(\lambda _k\rightarrow \infty \). We proceed by contradiction. Suppose that \(\phi :=\Vert F(z^*)\Vert _2>0\). For all \(k> k_0\), it follows from (3.3) that
which, together with \(\lambda _k\rightarrow \infty \), \(\Vert r^k\Vert _2\le \tau \) and the fact that \(J_k\) is positive semidefinite, imply that \(d^k\rightarrow 0\), and hence \(u^k\rightarrow z^*\).
We now show that when \(\lambda _k\) is large enough, the ratio \(\rho _k\) is not smaller than \(\eta _2\). For this purpose, we consider an iteration with index \(k> k_0\) sufficiently large such that
Then, it yields that
The second line above follows from the fact that \(J_k\) is positive semidefinite and from the residual condition (3.4). The last inequality is obtained from \(\Vert F^k\Vert _2=\Vert F(z^*)\Vert _2=\phi \) for \(k>k_0\). Further, we obtain
where the first inequality is derived by (3.11) and the second inequality comes from L-Lipschitz continuity of F. Hence, we have \(\rho _k\ge \eta _2\), which generates a successful iteration and yields a contradiction. This completes the proof. \(\square \)
We are now ready to prove the main global convergence result.
Theorem 3.4
If Assumption 3.1 holds, then we have \(\lim _{k\rightarrow \infty } \Vert F(z^k)\Vert _2=0\).
Proof
If the index set \(\mathcal {K}_S\) is finite, the convergence is directly from Lemma 3.3. Thus, we only need to consider the situation that the index set \(\mathcal {K}_S\) is infinite. The main task is to prove that the full sequence \(\{\Vert F(z^k)\Vert _2\}_{k \ge 0}\) converges to zero in the following two cases.
Case (i): \(\mathcal {K}_N\) is finite.
Without loss of generality, we ignore \(\mathcal {K}_N\) and assume that \(\mathcal {K}_S=\mathcal {K}_P \cup \mathcal {K}_{F}\) in this case. Let \(z^*\) be any point in solution set \(Z^*\). For any \(k \in \mathcal {K}_{F}\), we have that \(z^{k+1}=z^k-\beta F(z^k)\) and
where the first inequality is due to the fact that F is \(\frac{1}{2\alpha }\)-cocoercive by item (v) of Proposition 2.3, and the last inequality follows from the choice of \(\beta \). By Lemma 3.2, for any \(k\in \mathcal {K}_{P}\), it yields that
Therefore, the whole sequence \(\{\Vert z^k-z^*\Vert _2\}_{k \ge 0}\) is non-increasing and convergent, and the sequence \(\{z^k\}_{k \ge 0}\) is bounded. By (3.3) and (3.4), it follows that
which implies that \(\Vert d^k\Vert _2\le 1/[(1-\tau )\underline{\lambda }]\). This inequality shows that \(\{d^k\}_{k \ge 0}\) is bounded, and \(\{u^k\}_{k \ge 0}\) is also bounded. By using the continuity of F, there exists a constant \(c_1>0\) such that
Using (3.7) and (3.8), for any \(k\in \mathcal {K}_{P}\), we obtain that
We next consider two cases: either \(\liminf \nolimits _{k\rightarrow \infty }\Vert F^k\Vert _2=0\) or \(\liminf \nolimits _{k\rightarrow \infty }\Vert F^k\Vert _2>0\).
If \(\liminf \nolimits _{k\rightarrow \infty }\Vert F^k\Vert _2=0\), the continuity of F and the boundedness of \(\{z^k\}_{k \ge 0}\) imply that the sequence \(\{z^k\}_{k \ge 0}\) has some accumulation point \(\hat{z}\) such that \(F(\hat{z})=0\). Since \(z^*\) is an arbitrary point in \(Z^*\), we can choose \(z^*=\hat{z}\) in (3.12) and (3.13). Then \(\{z^k\}_{k \ge 0}\) converges to \(\hat{z}\) and \(\{\Vert F(z^k)\Vert _2\}_{k \ge 0}\) converges to zero.
If \(\liminf \nolimits _{k\rightarrow \infty }\Vert F^k\Vert _2=c_2>0\), by using the continuity of F and the boundedness of \(\{z^k\}_{k \ge 0}\) again, there exist constants \(c_3>c_4>0\) such that
We now show that \(\{\lambda _k\}_{k \ge 0}\) is bounded above. If \(\lambda _k\) is large enough such that \(\Vert d^k\Vert _2\le 1\) and
then by a similar proof as in Lemma 3.3 we have that \(\rho _k\ge \eta _2\) and consequently \(\lambda _{k+1}<\lambda _k\). Hence, it turns out that \(\{\lambda _k\}_{k \ge 0}\) is bounded from above, by say \(\bar{\lambda }>0\). Using (3.3), (3.4), \(\Vert J_k\Vert \le L\) and the upper bound of \(\{\lambda _k\}\), we have
Hence, it follows from (3.13), (3.14) and (3.15) that there exists a constant \(c_5>0\) such that for any \(k \in \mathcal {K}_{P}\),
Then, by (3.12) and (3.16), we conclude that \(\{\Vert F(z^k)\Vert _2\}_{k \ge 0}\) converges to zero, which yields a contradiction to the assumption \(c_2 > 0\). The convergence of \(\{z^k\}_{k\ge 0}\) and \(\{\Vert F(z^k)\Vert _2\}_{k \ge 0}\) is established in the case (i).
Case (ii): \(\mathcal {K}_N\) is infinite.
Let \((k_i)_{i \ge 0}\) enumerate all elements of the set \(\{k + 1 : k \in \mathcal {K}_N \}\) in increasing order. Since \(\Vert F(z^{k_i})\Vert _2\le \nu \Vert F(z^{k_{i-1}})\Vert _2\) and \(0<\nu <1\) for any \(i\ge 1\), we have that the sequence \(\{\Vert F(z^{k_i})\Vert _2\}\) converges to zero. For any \(k\in \mathcal {K}_{P}\), we have \(\Vert F(z^{k+1})\Vert _2\le \Vert F(z^{k})\Vert _2\) from the updating rule (3.7). For any \(k \in \mathcal {K}_F\), we have that \(z^{k+1}=(I-\beta F)(z^k)=[(1-\alpha \beta )I+\alpha \beta R](z^k)\) with \(\alpha \beta <1\). Since R is nonexpansive, from [7, Theorem 1], we have that \(\Vert (R-I)(z^{k+1})\Vert _2\le \Vert (R-I)(z^k)\Vert _2\). By noticing that \(F(z^k)=\alpha (I-R)(z^k)\), it follows that \(\Vert F(z^{k+1})\Vert _2 \le \Vert F(z^k)\Vert _2\), for any \(k \in \mathcal {K}_F\). Moreover, for any \(k\notin \mathcal {K}_{N}\), there exists an index i such that \(k_i<k+1<k_{i+1}\), and hence \(\Vert F(z^{k+1})\Vert _2 \le \Vert F(z^{k_i})\Vert _2\). Therefore, the whole sequence \(\{\Vert F(z^k)\Vert _2\}_{k \ge 0}\) converges to 0, which completes the proof. \(\square \)
Some comments to Theorem 3.4 are given as follows.
-
(i)
In our arguments, we do not assume that \(\{z^k\}\) is bounded. In fact, the subsequence \(\{z^k\}_{k \in \mathcal {K}_P \cup \mathcal {K}_F}\) is bounded because of (3.12) and (3.13). Hence, the existence of accumulation points depends on the boundedness of the subsequence \(\{z^k\}_{k \in \mathcal {K}_N}\). Any accumulation point \(\bar{z}\) of \(\{z^k\}\) then satisfies \(F(\bar{z})=0\).
-
(ii)
If the global error bound condition is satisfied at \(\bar{z}\), i.e., there exists a constant \(c>0\) such that
$$\begin{aligned} c\Vert z-\bar{z}\Vert \le \Vert F(z)\Vert _2,\quad \forall z\in \mathbb {R}^n, \end{aligned}$$(3.17)
then the whole sequence \(\{z^k\}\) converges to \(\bar{z}\). Consider the case where the mapping F is derived from the fixed-point mapping with respect to the FBS and the function h in (2.2) is strongly convex. Then the optimal solution \(z^*\) is unique and the global error bound condition is satisfied at \(z^*\) [36, Theorem 4].
As already shown, the global convergence of our algorithm is essentially guaranteed by the projection step and the fixed-point step. However, noticing that (3.8) is of the form \(v^k=z^k-\alpha _kF(u^k)\) with \(\alpha _k=\left\langle {F(u^k)},{z^k-u^k}\right\rangle /\Vert F(u^k)\Vert _2^2>0\), the projection step is indeed an extragradient step [12]. Since the asymptotic convergence rate of the extragradient step is often not faster than that of the Newton step, slow convergence may be observed if the projection and fixed-point steps are always performed. Hence, our modification (3.7) is practically meaningful. Moreover, we will prove next that, under some generalized nonsingularity conditions, the projection and fixed-point steps are never performed once the iterate is close enough to a solution.
3.3 Fast Local Convergence
By the global convergence analysis, we can assume that \(z^*\) is an accumulation point of the sequence \(\{z^k\}\) generated by Algorithm 1 such that \(F(z^*)=0\). We also make the following assumption.
Assumption 3.5
For the mapping F defined in (3.1), the mapping F is semi-smooth and BD-regular at \(z^*\), which means that all elements in \(\partial _BF(z^*)\) are nonsingular.
The semi-smoothness of the mapping F often holds, since the proximal mapping is (strongly) semi-smooth in many interesting applications. The BD-regularity is a common assumption in the analysis of the local convergence of nonsmooth methods. When F is BD-regular at \(z^*\), from [27, Proposition 2.5] and [25, Proposition 3], there exist constants \(c_6>0,\kappa >0\) and a neighborhood \(N(z^*,\varepsilon _0)\) such that for any \(y\in N(z^*,\varepsilon _0)\) and \(J\in \partial _BF(y)\), it holds that:
-
(1)
\(z^*\) is an isolated solution;
-
(2)
J is nonsingular and \(\Vert J^{-1}\Vert \le c_6\);
-
(3)
the local error bound condition holds for F(z) on \(N(z^*,\varepsilon _0)\), that is \(\Vert y-z^*\Vert _2\le \kappa \Vert F(y)\Vert _2\).
Note that the conditions that \(z^*\) is isolated and that \(\Vert z^k-z^*\Vert _2\le \kappa \Vert F^k\Vert _2\) (for k large enough) imply that the whole sequence \(\{z^k\}\) converges to \(z^*\).
We next show that Algorithm 1 turns into a Newton-type method in a neighborhood of \(z^*\) and achieves a locally fast convergence.
Theorem 3.6
Suppose that Assumption 3.5 holds and the regularization parameter \(\lambda _k\) is bounded above by \(\bar{\lambda }\) for any \(z^k\in N(z^*,\varepsilon _1)\) with \(\varepsilon _1:=\min \{\varepsilon _0,1/(2Lc_6\tau \bar{\lambda })\}\). Then for sufficiently large k, we have \(\Vert F(u^{k})\Vert _2\le \nu \Vert F(z^k)\Vert _2\) and \(z^{k+1}=u^k\), and \(\{z^k\}\) converges superlinearly to \(z^*\). If F is strongly semi-smooth at \(z^*\), then \(\{z^k\}\) converges quadratically to \(z^*\).
Proof
It follows from the L-Lipschitz continuity of F that \(\Vert F^k\Vert _2\le L\Vert z^k-z^*\Vert _2\le L\varepsilon _1\) for any \(z^k\in N(z^*,\varepsilon _1)\). Hence the definition of \(\varepsilon _1\) implies that
From (3.2), (3.4), (3.18) and \(\Vert J_k^{-1}\Vert \le c_6\), we obtain for a Newton step that
where the last inequality is derived from a rearrangement of the second inequality. A direct calculation gives
Using \(\Vert (J_k+\mu _k I)^{-1}\Vert \le c_6\), \(\mu _k=\lambda _k\Vert F^k\Vert _2\) and \(\Vert r^k\Vert _2\le \tau \lambda _k\Vert F^k\Vert _2\Vert d^k\Vert _2\), we have
By the L-Lipschitz continuity of F and (3.19), we obtain
The semi-smoothness of F at \(z^*\) implies
which, together with (3.20) and (3.21) shows that \(\Vert u^k-z^*\Vert _2=o(\Vert z^k-z^*\Vert _2)\). Hence, we have \(L\Vert u^{k}-z^*\Vert _2\le \frac{\nu }{\kappa }\Vert z^k-z^*\Vert _2\) for sufficiently large k. The local error bound condition implies that \(\Vert z^k-z^*\Vert _2\le \kappa \Vert F(z^k)\Vert _2\) for sufficiently large k. Together with the Lipschitz continuity of F, we obtain for sufficiently large k that
Then the updating rule (3.5) yields \(\Vert F(u^{k})\Vert _2\le \nu \Vert F(z^k)\Vert _2\) and \(z^{k+1}=u^k\).
When F is strongly semi-smooth, the quadratic convergence \(\Vert u^{k}-z^*\Vert _2=O(\Vert z^k-z^*\Vert _2^2)\) is established because of the property \(\Vert F^k-F(z^*)-J_k(z^k-z^*)\Vert _2=O(\Vert z^k-z^*\Vert ^2_2)\). The proof is completed. \(\square \)
It is clear that the BD-regular condition plays a key role in the above discussion. Although the BD-regular condition is strong and may fail in some situations, there are possible ways to resolve this issue. As shown in [22, Section 4.2], suppose that \(\partial _BF(z^*)\) contains a nonsingular element while its other elements may be singular. By exploiting the structure of \(\partial _BF(z)\), one can then carefully choose a nonsingular generalized Jacobian when z is close enough to \(z^*\). Hence, if \(z^*\) is isolated, one can still obtain the fast local convergence results by a proof similar to the one above. Another approach is inspired by the literature on the Levenberg–Marquardt (LM) method, a regularized Gauss–Newton method designed to deal with possibly singular systems. It has been shown in [13] that the LM method preserves a superlinear or quadratic local convergence rate under a local error bound condition, which is weaker than the nonsingularity condition. Therefore, investigating the local convergence of our algorithm under the local error bound condition remains a topic for future research.
3.4 Regularized L-BFGS Method with Projection Steps
In this subsection, we propose a regularized L-BFGS method with projection steps by simply replacing the Newton step in Algorithm 1 with a regularized L-BFGS step, so as to avoid solving the linear system (3.2). The L-BFGS method is an adaptation of the classical BFGS method designed to use minimal storage. A globally convergent BFGS method with projection steps was proposed in [46] for solving smooth monotone equations. The convergence of our regularized L-BFGS method can be analyzed analogously to that of our regularized Newton method by incorporating the convergence analysis in [46]. We only describe the L-BFGS update in the following and omit the convergence analysis.
For an iterate \(z^k\), we compute the direction by
where \(H_k\) is the L-BFGS approximation to the Jacobian matrix.
Choosing an initial matrix \(H_k^{0}\) and setting \(\delta F^k = F^{k+1} - F^{k}\), the Jacobian matrix can be approximated from the most recent m pairs \(\{\delta F^i, d^i\}, i = k-1, k -2, \ldots , k-m\), i.e., using the standard formula [23] as
where \(\mathcal {D}_k = [d^{k-m}, \ldots , d^{k-1} ]\), \(\mathcal {F}_k = [\delta F^{k-m}, \ldots , \delta F^{k-1} ]\), \(L_k\) is a lower-triangular matrix with entries
and \(S_k\) is a diagonal matrix with entries
Then we can compute the inverse regularized Jacobian matrix
where \(\bar{H}_k = H_k^0 + \mu _k I\), \(C_k = [H_k^0\mathcal {D}_k \quad \mathcal {F}_k]\), \(R_k\) is defined by \(R_k = V_k - C_k^T \bar{H}_k^{-1} C_k\) and
Specifically, if k is smaller than m, we use the classical BFGS method to approximate the inverse regularized Jacobian matrix, which amounts to setting \(d^j = \delta F^j= 0\) for \(j < 0\) in formula (3.23).
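To make the compact representation concrete, the following NumPy sketch assembles the (unregularized) limited-memory approximation \(H_k\) from the most recent m pairs, following the standard compact BFGS formula of Byrd, Nocedal and Schnabel as presented in [23]. This is our own minimal sketch: the function name and the dense assembly of the middle block \(V_k\) are our choices, and the regularization \(\mu _k I\) together with the Woodbury-based inversion in (3.23) is omitted for brevity.

```python
import numpy as np

def compact_bfgs_matrix(H0_diag, D, dF):
    """Compact (L-)BFGS approximation to the Jacobian from the m most
    recent pairs {d^i, delta F^i}.  D and dF are n-by-m arrays whose
    columns are d^{k-m}, ..., d^{k-1} (oldest first) and the matching
    delta F's; H0_diag is the diagonal of the initial matrix H_k^0."""
    H0D = H0_diag[:, None] * D            # H_k^0 * D_k
    DtF = D.T @ dF
    L = np.tril(DtF, k=-1)                # (L_k)_{ij} = (d^i)^T dF^j, i > j
    S = np.diag(np.diag(DtF))             # S_k = diag((d^i)^T dF^i)
    # middle block of the compact representation and C_k = [H_k^0 D_k, dF_k]
    V = np.block([[D.T @ H0D, L], [L.T, -S]])
    C = np.hstack([H0D, dF])
    return np.diag(H0_diag) - C @ np.linalg.solve(V, C.T)
```

By construction the result satisfies the secant condition on the most recent pair, \(H_k d^{k-1} = \delta F^{k-1}\), which is a convenient sanity check for an implementation.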
We should point out that L-BFGS may not be entirely suitable here because the Jacobian matrix may not be symmetric, and no quasi-Newton method seems to provide a nonsymmetric yet positive semidefinite approximation. Hence, we preserve only the positive semidefiniteness and disregard the nonsymmetry. Fortunately, our numerical experiments show that this works well for the \(\ell _1\)-regularized optimization problem. Of course, one may also try nonsymmetric quasi-Newton methods.
4 Numerical Results
In this section, we conduct proof-of-concept numerical experiments on our proposed schemes for the fixed-point mappings induced by the FBS and DRS methods, applying them to \(\ell _1\)-norm minimization problems. All numerical experiments are performed in Matlab on a workstation with an Intel(R) Xeon(R) CPU E5-2680 v3 and 128 GB of memory.
4.1 Applications to the FBS Method
Consider the \(\ell _1\)-regularized optimization problem of the form
where h is continuously differentiable. Let \(f(x)=\mu \Vert x\Vert _1\). The system of nonlinear equations corresponding to the FBS method is \( F(x) =x- \mathbf {prox}_{tf}(x- t \nabla h(x))=0.\) The generalized Jacobian matrix of F(x) is
where \(M(x)\in \partial \mathbf {prox}_{tf}( x - t \nabla h(x))\) and \(\partial ^2 h(x)\) is the generalized Hessian matrix of h(x). Specifically, the proximal mapping corresponding to f(x) is the so-called shrinkage operator defined as
Hence, one can take a Jacobian matrix M(x) which is a diagonal matrix with diagonal entries being
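For reference, the shrinkage operator and one admissible choice of its generalized Jacobian take only a few lines of NumPy. In the sketch below, `lam` plays the role of the threshold \(t\mu \); at the nondifferentiable points \(|y_i| = t\mu \) any value in [0, 1] is admissible for the diagonal entry, and picking 0 there is our choice.

```python
import numpy as np

def shrink(y, lam):
    """Soft-thresholding: the proximal mapping of lam*||.||_1 at y."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def shrink_jacobian_diag(y, lam):
    """Diagonal of an element of the generalized Jacobian of shrink:
    1 where |y_i| > lam, 0 where |y_i| < lam; at |y_i| = lam any value
    in [0, 1] is admissible -- here we pick 0."""
    return (np.abs(y) > lam).astype(float)
```

The 0/1 diagonal is exactly what makes the index sets \(\mathcal {I}(x)\) and \({\mathcal {O}}(x)\) below useful: rows with entry 0 decouple from the rest of the Newton system.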
Similar to [22], we introduce the index sets
The Jacobian matrix can be represented by
Using the above special structure of Jacobian matrix J(x), we can reduce the complexity of the regularized Newton step (3.2). Let \(\mathcal {I} = \mathcal {I}(x^k)\) and \({\mathcal {O}}= {\mathcal {O}}(x^k)\). Then, we have
which yields
The second equation in the above linear system is then solved by the conjugate gradient (CG) method.
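To illustrate the reduction, the following sketch specializes the regularized Newton step to \(h(x)=\frac{1}{2}\Vert Ax-b\Vert _2^2\) (the setting of the next subsection): the coordinates in \({\mathcal {O}}\) decouple, so only a system of size \(|\mathcal {I}|\) is handed to CG. This is our own minimal sketch with a dense A; the function name is ours, and the paper's experiments access A only as an operator.

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

def newton_step_l1(A, b, x, t, mu_reg, lam):
    """One regularized semi-smooth Newton direction for
    F(x) = x - shrink(x - t*A^T(Ax - b), t*lam), exploiting the
    diagonal generalized Jacobian of the shrinkage operator.
    mu_reg is the regularization parameter mu_k."""
    g = A.T @ (A @ x - b)                      # gradient of h
    y = x - t * g
    Fx = x - np.sign(y) * np.maximum(np.abs(y) - t * lam, 0.0)
    I = np.abs(y) > t * lam                    # index set I(x)
    O = ~I                                     # index set O(x)
    d = np.empty_like(x)
    d[O] = -Fx[O] / (1.0 + mu_reg)             # rows in O decouple
    nI = int(I.sum())
    if nI > 0:
        AI = A[:, I]
        rhs = -Fx[I] - t * (AI.T @ (A[:, O] @ d[O]))
        # reduced system (t*(A^T A)_II + mu*I) d_I = rhs, solved by CG
        op = LinearOperator((nI, nI),
                            matvec=lambda v: t * (AI.T @ (AI @ v)) + mu_reg * v)
        d[I], _ = cg(op, rhs, atol=1e-12, maxiter=500)
    return d, Fx
```

The reduced operator \(t(A^\top A)_{\mathcal {I}\mathcal {I}} + \mu I\) is symmetric positive definite, so CG is applicable even though the full generalized Jacobian is nonsymmetric.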
4.1.1 Numerical Comparison
In this subsection, we compare our proposed methods with different solvers for solving problem (4.1) with \(h(x)=\frac{1}{2} \Vert Ax-b\Vert _2^2\). The solvers used for comparison include ASSN, SSNP, ASLB, FPC-AS [39], SpaRSA [40] and SNF [22]. ASSN is the proposed semi-smooth Newton method with projection steps (Algorithm 1) and SSNP is the method which only uses the projection steps. ASLB(i) is a variant of the line search based method combining the L-BFGS method and the hyperplane projection technique, where the number in brackets is the memory size. FPC-AS is a first-order method that uses a fixed-point iteration with Barzilai–Borwein steps and a continuation strategy. SpaRSA resembles FPC-AS; it is also a first-order method using Barzilai–Borwein steps and a continuation strategy. SNF is a semi-smooth Newton type method which uses a filter strategy and is one of the state-of-the-art second-order methods for the \(\ell _1\)-regularized optimization problem (4.1), and SNF(aCG) is the SNF solver with an adaptive parameter strategy in the CG method. The parameters of FPC-AS, SpaRSA and SNF are the same as in [22].
The test problems are taken from [22] and constructed as follows. First, we randomly generate a sparse solution \(\bar{x} \in \mathbb {R}^n\) with k nonzero entries, where \(n = 512^2 = 262144\) and \(k = [n/40] = 6553\). The k different indices are uniformly chosen from \(\{1,2,\ldots ,n\}\) and we set the magnitude of each nonzero element to \(\bar{x}_i = \eta _1(i)10^{d \eta _2(i)/20}\), where \(\eta _1(i)\) is chosen from \(\{-1,1\}\) with equal probability, \(\eta _2(i)\) is uniformly distributed in [0, 1] and d is a dynamic range which can influence the efficiency of the solvers. Then we take \(m = n/8 = 32768\) random cosine measurements, i.e., \(Ax = (\mathrm {dct}(x))_J\), where J contains m different indices randomly chosen from \(\{1,2,\ldots ,n\}\) and \(\mathrm {dct}\) is the discrete cosine transform. Finally, the input data is specified by \(b = A\bar{x} + \epsilon \), where \(\epsilon \) is Gaussian noise with standard deviation \(\bar{\sigma } = 0.1\).
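The construction described above can be reproduced with a short script. The following is our own sketch (the function name, the RNG seeding, and the use of `scipy.fft.dct` for the partial-DCT measurement operator are our choices, not the paper's code):

```python
import numpy as np
from scipy.fft import dct

def make_test_problem(n=512**2, d=20, sigma=0.1, seed=0):
    """Generate a sparse recovery test instance: x_bar with
    floor(n/40) nonzeros of dynamic range d (in dB), m = n/8 random
    rows of the DCT as measurements, and noisy data b."""
    rng = np.random.default_rng(seed)
    k, m = n // 40, n // 8
    xbar = np.zeros(n)
    idx = rng.choice(n, size=k, replace=False)   # k distinct indices
    eta1 = rng.choice([-1.0, 1.0], size=k)       # random signs
    eta2 = rng.uniform(size=k)                   # uniform in [0, 1]
    xbar[idx] = eta1 * 10.0 ** (d * eta2 / 20.0)
    J = rng.choice(n, size=m, replace=False)     # measurement rows
    Aop = lambda x: dct(x, norm='ortho')[J]      # A x = (dct(x))_J
    b = Aop(xbar) + sigma * rng.standard_normal(m)
    return Aop, b, xbar, J
```

Since A is only available as an operator here, solver comparisons naturally count A- and \(A^T\)-calls, as is done below.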
To compare fairly, we set a uniform stopping criterion. For a certain tolerance \(\epsilon \), we obtain a solution \(x_{newt}\) using ASSN such that \(\Vert F(x_{newt})\Vert \le \epsilon \). Then we terminate all methods by the relative criterion
where f(x) is the objective function and \(x^{*}\) is a highly accurate solution obtained by ASSN under the criterion \(\Vert F(x)\Vert \le 10^{-14}\).
We solve the test problems under different tolerances \(\epsilon \in \{10^{-0}, 10^{-1},10^{-2},10^{-4},10^{-6} \}\) and dynamic ranges \(d \in \{20,40,60,80 \}\). Since the evaluations of \(\mathrm {dct}\) dominate the overall computation, we mainly use the total number of A- and \(A^T\)-calls \(N_A\) to compare the efficiency of the different solvers. Tables 1, 2, 3, and 4 show the numbers of \(N_A\) and the CPU time averaged over 10 independent trials. These tables show that ASSN and ASLB are competitive with the other methods. At low accuracy, SpaRSA and FPC-AS exhibit fast convergence, but ASSN and ASLB are faster than or close to FPC-AS and SpaRSA in terms of both \(N_A\) and CPU time in most cases. Meanwhile, ASSN and ASLB are competitive with the second-order methods at moderate accuracy. The CPU time and \(N_A\) of ASSN and ASLB are smaller than those of the Newton-type solver SNF in almost all cases, especially for large dynamic ranges. ASLB with memory size \(m = 1\) is the fastest at low accuracy. It is worth emphasizing that L-BFGS with \(m = 1\) is equivalent to the Hestenes–Stiefel and Polak–Ribière conjugate gradient methods with exact line search [23]. Compared with ASSN, SSNP has a slower convergence rate, which implies that our adaptive strategy for switching between Newton and projection steps is helpful.
In particular, ASSN and ASLB perform better at high accuracy. Figures 1 and 2 illustrate the residual history with respect to the total number of A- and \(A^T\)-calls \(N_A\) and the total number of iterations. Since the two first-order methods perform similarly and ASLB(1) performs better than ASLB(2), we omit the figures for FPC-AS and ASLB(2). These figures also show that ASSN and ASLB perform better than SNF and SNF(aCG) independent of the dynamic range. In particular, quadratic convergence is observable for ASSN in these examples.
4.2 Applications to the DRS Method
Consider the Basis-Pursuit (BP) problem
where \(A\in \mathbb {R}^{m\times n}\) is of full row rank and \(b\in \mathbb {R}^m\). Let \(f(x)=\mathrm {1}_\Omega (Ax-b)\) and \(h(x)=\Vert x\Vert _1\), where the set \(\Omega =\{0\}\). The system of nonlinear equations corresponding to the DRS fixed-point mapping is
To simplify the subproblems in the DRS method, we assume that \(AA^\top =I\). Then the proximal mapping with respect to f(x) is
A generalized Jacobian matrix \(D \in \partial \mathbf {prox}_{tf}((2\mathbf {prox}_{th}(z)-z))\) is taken as follows
The proximal mapping with respect to h(x) is
One can take a generalized Jacobian matrix \(M(z)\in \partial \mathbf {prox}_{th}(z)\) as a diagonal matrix with diagonal entries
Hence, a generalized Jacobian matrix of F(z) is in the form of
Let \(W=(I-2M(z))\) and \(H=W+M(z)+\mu I\). Using the binomial inverse theorem, we obtain the inverse matrix
For convenience, we write the diagonal entries of matrix W and H as
Then \(WH^{-1} = \frac{1}{1+\mu }I - S\), where S is a diagonal matrix with diagonal entries
Hence, \(I- AW H^{-1}A^\top = (1-\frac{1}{1+\mu })I + ASA^\top \). Define the index sets
and let \(A_{\mathcal {I}(x)}\) denote the submatrix consisting of the columns of A indexed by \(\mathcal {I}(x)\); then we have
The above property implies the positive definiteness of \(I- AW H^{-1}A^\top \) and can be used to reduce the computational complexity when \(A_{\mathcal {I}(x)} A_{\mathcal {I}(x)}^\top \) is easily available.
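For completeness, the DRS fixed-point residual for this basis-pursuit setting can be sketched in a few lines: under \(AA^\top = I\), \(\mathbf {prox}_{tf}\) is the projection onto the affine set \(\{x: Ax=b\}\), and \(\mathbf {prox}_{th}\) is the shrinkage operator. This is our own sketch; the function name is ours, and the sign convention of F may differ from the paper's by a factor of \(-1\).

```python
import numpy as np

def drs_residual(A, b, z, t):
    """DRS fixed-point residual for basis pursuit, assuming AA^T = I:
    F(z) = prox_{th}(z) - prox_{tf}(2*prox_{th}(z) - z), with
    h = ||.||_1 (so prox_{th} is shrinkage with threshold t) and
    f the indicator of {x : Ax = b} (so prox_{tf} is a projection)."""
    xh = np.sign(z) * np.maximum(np.abs(z) - t, 0.0)   # prox_{th}(z)
    u = 2.0 * xh - z
    xf = u - A.T @ (A @ u - b)                         # projection onto Ax=b
    return xh - xf
```

Note that \(x_f = x_h - F(z)\) is feasible by construction (\(Ax_f = b\)), which gives a quick correctness check for the projection step.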
4.2.1 Numerical Comparison
In this subsection, we compare our methods with two first-order solvers: ADMM [42] and SPGL1 [38]. The ASLB solver is not included since its performance is not competitive with the other approaches. Our test problems are almost the same as in the last subsection; the only difference is that we set \(b = A\bar{x}\) without adding noise. We use the residual criterion \(\Vert F(z)\Vert \le \epsilon \) as the stopping criterion for ADMM and ASSN. Because computing the residual for SPGL1 incurs extra cost, we use its original criterion and list the relative error "rerr" to compare with ADMM and ASSN. The relative error with respect to the true solution \(x^*\) is defined as
We revise the ADMM implementation in yall1 (see Notes) by adjusting the rules for updating the penalty parameter and choosing the best parameters so that it can solve all examples in our numerical experiments. The parameters of SPGL1 are set to their default values. Since the matrix A is only available as an operator, property (4.8) cannot be applied in ASSN.
We solve the test problems under different tolerances \(\epsilon \in \{10^{-2},10^{-4},10^{-6} \}\) and dynamic ranges \(d \in \{20,40,60,80 \}\). As in the last subsection, we mainly use the total number of A- and \(A^T\)-calls \(N_A\) and the CPU time to compare the efficiency of the solvers. We also list the relative error so that we can compare ADMM and ASSN with SPGL1. These numerical results are reported in Tables 5, 6, 7 and 8. The performance of ASSN is close to that of ADMM for tolerance \(10^{-2}\) and is much better for tolerances \(10^{-4}\) and \(10^{-6}\), independent of the dynamic range. For all test problems, SPGL1 can only obtain a solution of low accuracy. It may be improved if its parameters are further tuned.
Figures 3 and 4 illustrate the residual history with respect to the total number of A- and \(A^T\)-calls \(N_A\) and the total number of iterations. SPGL1 is omitted since it does not converge to high accuracy. The figures show that ASSN has a convergence rate similar to that of ADMM in the initial stage but achieves a faster rate later, in particular at high accuracy.
5 Conclusion
The purpose of this paper is to study second-order type methods for solving composite convex programs based on fixed-point mappings induced by operator splitting approaches such as the FBS and DRS methods. The semi-smooth Newton method is theoretically guaranteed to converge to a global solution from an arbitrary initial point and to achieve a fast convergence rate by using an adaptive strategy for switching between projection steps and Newton steps. Our proposed algorithms are suitable for constrained convex programs whenever a fixed-point mapping is well-defined, and may thus help bridge the gap between first-order and second-order type methods. They are indeed promising in our preliminary numerical experiments on a number of applications. In particular, quadratic or superlinear convergence is attainable in some examples of Lasso regression and basis pursuit, although the BD-regular condition required for fast local convergence has not been established rigorously. The Jacobian of the \(\ell _1\)-regularized problem (4.1) is (4.3), and the term \((\partial ^2h(x))_{\mathcal {I}(x)\mathcal {I}(x)}\) behaves well in our numerical experiments. Similarly, the structure of \(ASA^\top \) in (4.8) for the Jacobian of the basis-pursuit problem (4.4) also helps. Moreover, the number of A- and \(A^T\)-calls of ASSN when solving the Newton systems shows that the CG method works well in general, which further indicates that the BD-regular condition may hold for these examples.
One of the key issues for solving the monotone nonsmooth equations is the convergence to global optimality. Note that, if h(x) in problem (1.1) is an indicator function of a closed convex set C, the nonsmooth equations with respect to FBS take the form \(F(z)=z-\Pi _{C}(x-\nabla f(x))=0\), which is equivalent to a variational inequality (VI) problem. Various globally convergent projection-type or extragradient-type methods have been proposed to solve the VI; see the survey paper [41] and references therein. They may provide plenty of alternatives to the hyperplane projection step used in this paper.
There are a number of future directions worth pursuing, including theoretical analysis and a comprehensive implementation of these second-order algorithms. To improve performance in practice, the second-order methods can be activated once the first-order type methods reach a good neighborhood of the global optimal solution. Since solving the corresponding system of linear equations is the dominant cost, it is important to exploit the structure of the linear system and to design suitable preconditioners.
Notes
Downloadable from http://yall1.blogs.rice.edu.
References
Ahookhosh, M., Amini, K., Bahrami, S.: Two derivative-free projection approaches for systems of large-scale nonlinear monotone equations. Numer. Algorithms 64, 21–42 (2013)
Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, New York (2011)
Boyd, S., Parikh, N., Chu, E., Peleato, B., Eckstein, J.: Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 3, 1–122 (2011)
Byrd, R.H., Chin, G.M., Nocedal, J., Oztoprak, F.: A family of second-order methods for convex \(\ell _1\)-regularized optimization. Math. Program. 159, 435–467 (2016)
Chen, C., Liu, Y.J., Sun, D., Toh, K.C.: A semismooth Newton-CG based dual PPA for matrix spectral norm approximation problems. Math. Program. 155, 435–470 (2016)
Combettes, P.L., Pesquet, J.-C.: Proximal splitting methods in signal processing. In: Bauschke, H.H., Burachik, R., Combettes, P.L., Elser, V., Luke, D.R., Wolkowicz, H. (eds.) Fixed-Point Algorithms for Inverse Problems in Science and Engineering, Volume 49 of Springer Optim. Appl., pp. 185–212. Springer, New York (2011)
Davis, D., Yin, W.: Convergence rate analysis of several splitting schemes. In: Glowinski, R., Osher, S.J., Yin, W. (eds.) Splitting Methods in Communication, Imaging, Science, and Engineering, Sci. Comput., pp. 115–163. Springer, Cham (2016)
Davis, D., Yin, W.: Faster convergence rates of relaxed Peaceman–Rachford and ADMM under regularity assumptions. Math. Oper. Res. 42, 783–805 (2017)
De Luca, T., Facchinei, F., Kanzow, C.: A semismooth equation approach to the solution of nonlinear complementarity problems. Math. Program. 75, 407–439 (1996)
Douglas, J., Rachford, H.H.: On the numerical solution of heat conduction problems in two and three space variables. Trans. Am. Math. Soc. 82, 421–439 (1956)
Eckstein, J., Bertsekas, D.P.: On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55, 293–318 (1992)
Facchinei, F., Pang, J.-S.: Finite-Dimensional Variational Inequalities and Complementarity Problems, vol. II. Springer, New York (2003)
Fan, J.Y., Yuan, Y.X.: On the quadratic convergence of the Levenberg–Marquardt method without nonsingularity assumption. Computing 74, 23–39 (2005)
Gabay, D., Mercier, B.: A dual algorithm for the solution of nonlinear variational problems via finite element approximation. Comput. Math. Appl. 2, 17–40 (1976)
Glowinski, R., Marrocco, A.: Sur l’approximation, par éléments finis d’ordre un, et la résolution, par pénalisation-dualité, d’une classe de problèmes de Dirichlet non linéaires. Rev. Française Automat. Informat. Recherche Opérationnelle Sér. Rouge Anal. Numér. 9, 41–76 (1975)
Griesse, R., Lorenz, D.A.: A semismooth Newton method for Tikhonov functionals with sparsity constraints. Inverse Probl. 24, 035007 (2008)
Jiang, H., Qi, L.: Local uniqueness and convergence of iterative methods for nonsmooth variational inequalities. J. Math. Anal. Appl. 196, 314–331 (1995)
Lee, J.D., Sun, Y., Saunders, M.A.: Proximal Newton-type methods for minimizing composite functions. SIAM J. Optim. 24, 1420–1443 (2014)
Li, Q., Li, D.H.: A class of derivative-free methods for large-scale nonlinear monotone equations. IMA J. Numer. Anal. 31, 1625–1635 (2011)
Lions, P.-L., Mercier, B.: Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 16, 964–979 (1979)
Mifflin, R.: Semismooth and semiconvex functions in constrained optimization. SIAM J. Control Optim. 15, 959–972 (1977)
Milzarek, A., Ulbrich, M.: A semismooth Newton method with multidimensional filter globalization for \(l_1\)-optimization. SIAM J. Optim. 24, 298–333 (2014)
Nocedal, J., Wright, S.J.: Numerical Optimization, Springer Series in Operations Research and Financial Engineering, 2nd edn. Springer, New York (2006)
Ortega, J.M., Rheinboldt, W.C.: Iterative Solution of Nonlinear Equations in Several Variables. Academic Press, New York (1970)
Pang, J.-S., Qi, L.Q.: Nonsmooth equations: motivation and algorithms. SIAM J. Optim. 3, 443–465 (1993)
Patrinos, P., Stella, L., Bemporad, A.: Forward–backward truncated Newton methods for convex composite optimization. arXiv:1402.6655 (2014)
Qi, L.Q.: Convergence analysis of some algorithms for solving nonsmooth equations. Math. Oper. Res. 18, 227–244 (1993)
Qi, L.Q., Sun, J.: A nonsmooth version of Newton’s method. Math. Program. 58, 353–367 (1993)
Rockafellar, R.T., Wets, R.J.-B.: Variational Analysis. Springer, Berlin (1998)
Scholtes, S.: Introduction to Piecewise Differentiable Equations, Springer Briefs in Optimization. Springer, New York (2012)
Shapiro, A.: Directionally nondifferentiable metric projection. J. Optim. Theory Appl. 81, 203–204 (1994)
Solodov, M.V., Svaiter, B.F.: A globally convergent inexact Newton method for systems of monotone equations. In: Fukushima, M., Qi, L. (eds.) Reformulation: Nonsmooth, Piecewise Smooth, Semismooth and Smoothing Methods (Lausanne, 1997), vol. 22, pp. 355–369. Kluwer Academic Publishers, Dordrecht (1999)
Solodov, M.V., Svaiter, B.F.: A hybrid projection-proximal point algorithm. J. Convex Anal. 6, 59–70 (1999)
Sun, D., Han, J.: Newton and quasi-Newton methods for a class of nonsmooth equations and related problems. SIAM J. Optim. 7, 463–480 (1997)
Sun, D., Sun, J.: Semismooth matrix-valued functions. Math. Oper. Res. 27, 150–169 (2002)
Tseng, P., Yun, S.: A coordinate gradient descent method for nonsmooth separable minimization. Math. Program. 117, 387–423 (2009)
Ulbrich, M.: Semismooth Newton Methods for Variational Inequalities and Constrained Optimization Problems in Function Spaces. Society for Industrial and Applied Mathematics (SIAM) and Mathematical Optimization Society, Philadelphia (2011)
van den Berg, E., Friedlander, M.P.: Probing the Pareto frontier for basis pursuit solutions. SIAM J. Sci. Comput. 31, 890–912 (2008)
Wen, Z., Yin, W., Goldfarb, D., Zhang, Y.: A fast algorithm for sparse reconstruction based on shrinkage, subspace optimization, and continuation. SIAM J. Sci. Comput. 32, 1832–1857 (2010)
Wright, S.J., Nowak, R.D., Figueiredo, M.A.T.: Sparse reconstruction by separable approximation. IEEE Trans. Signal Process. 57, 2479–2493 (2009)
Xiu, N., Zhang, J.: Some recent advances in projection-type methods for variational inequalities. J. Comput. Appl. Math. 152, 559–585 (2003)
Yang, J., Zhang, Y.: Alternating direction algorithms for \(\ell _{1}\)-problems in compressive sensing. SIAM J. Sci. Comput. 33, 250–278 (2011)
Zhao, X.Y., Sun, D., Toh, K.C.: A Newton-CG augmented Lagrangian method for semidefinite programming. SIAM J. Optim. 20, 1737–1765 (2010)
Zhao, Y.-B., Li, D.: Monotonicity of fixed point and normal mappings associated with variational inequality and its application. SIAM J. Optim. 11, 962–973 (2001)
Zhou, G., Toh, K.C.: Superlinear convergence of a Newton-type algorithm for monotone equations. J. Optim. Theory Appl. 125, 205–221 (2005)
Zhou, W.J., Li, D.H.: A globally convergent BFGS method for nonlinear monotone equations without any merit functions. Math. Comput. 77, 2231–2240 (2008)
Acknowledgements
The authors would like to thank Professor Defeng Sun for the valuable discussions on semi-smooth Newton methods, and Professor Michael Ulbrich and Dr. Andre Milzarek for sharing their code SNF. In particular, the authors appreciate Dr. Andre Milzarek for reading the manuscript carefully and the help on improving the convergence analysis. The authors are grateful to the associate editor and two anonymous referees for their valuable comments and suggestions.
Additional information
X. Xiao: Research supported by the Fundamental Research Funds for the Central Universities under the Grant DUT16LK30. Z. Wen: Research supported in part by NSFC Grant 91730302, 11421101, and by the National Basic Research Project under the Grant 2015CB856002. L. Zhang: Research partially supported by NSFC Grants 11571059 and 91330206.
Cite this article
Xiao, X., Li, Y., Wen, Z. et al. A Regularized Semi-Smooth Newton Method with Projection Steps for Composite Convex Programs. J Sci Comput 76, 364–389 (2018). https://doi.org/10.1007/s10915-017-0624-3