Abstract
In this paper we present a variant of the proximal forward-backward splitting iteration for solving nonsmooth optimization problems in Hilbert spaces, in which the objective function is the sum of two nondifferentiable convex functions. The proposed iteration, called the Proximal Subgradient Splitting Method, extends the classical subgradient iteration to important classes of problems by exploiting the additive structure of the objective function. Weak convergence of the generated sequence is established for several stepsize rules and under suitable assumptions. Moreover, we analyze the iteration complexity of the method.
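The scheme described in the abstract can be sketched numerically. The following is a minimal illustration, not the paper's implementation: it minimizes f(x) + g(x) with f(x) = ‖Ax − b‖₁ handled by a subgradient step and g(x) = λ‖x‖₁ handled exactly through its proximal operator (soft-thresholding), using a diminishing stepsize. The function names, the particular problem instance, and the stepsize choice 1/k are all illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (componentwise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_subgradient_splitting(A, b, lam, x0, n_iter=5000):
    # Sketch of a proximal subgradient splitting iteration:
    # f(x) = ||Ax - b||_1 is nonsmooth and treated via a subgradient step,
    # g(x) = lam * ||x||_1 is nonsmooth but prox-friendly and treated via
    # its proximal operator.
    x = x0.astype(float).copy()
    for k in range(1, n_iter + 1):
        alpha = 1.0 / k                 # diminishing stepsize (illustrative choice)
        u = A.T @ np.sign(A @ x - b)    # a subgradient of f at x
        x = soft_threshold(x - alpha * u, alpha * lam)
    return x
```

On a small instance (A the identity, b = (1, −2), λ = 0.5) the iterates settle near the minimizer x = b, since the slope 1 of each term |xᵢ − bᵢ| dominates the penalty slope λ away from bᵢ; this is only a sanity check of the sketch, not a claim about the method's behavior in general.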
Cite this article
Bello Cruz, J.Y. On Proximal Subgradient Splitting Method for Minimizing the sum of two Nonsmooth Convex Functions. Set-Valued Var. Anal 25, 245–263 (2017). https://doi.org/10.1007/s11228-016-0376-5
Keywords
- Convex problems
- Nonsmooth optimization problems
- Proximal forward-backward splitting iteration
- Subgradient method