1 Introduction

1.1 Motivation

Our work is inspired by the recent trend of seeking efficient methods for solving problems with hybrid regularizations or mixed penalty functions in fields such as machine learning, image restoration, and signal processing, among many others. We present two instructive examples below (for motivation, see, e.g., [2, 6, 7]).

Example 1

(Matrix completion) Our first motivating example is the matrix completion problem, where we want to reconstruct the original matrix \(y\in {\mathbf {R}}^{n\times n}\), known to be both sparse and low-rank, given noisy observations of part of its entries. Specifically, our observation is \(b=P_\varOmega y+\xi \), where \(\varOmega \) is a given set of cells in an \(n\times n\) matrix, \(P_\varOmega y\) is the restriction of \(y\in {\mathbf {R}}^{n\times n}\) onto \(\varOmega \), and \(\xi \) is random noise. A natural way to recover \(y\) from \(b\) is to solve the optimization problem

$$\begin{aligned} {\hbox {Opt}}=\min \limits _{y\in {\mathbf {R}}^{n\times n}}\left\{ \frac{1}{2}\Vert P_\varOmega y-b\Vert _2^2 +\lambda \Vert y\Vert _1+\,\mu \Vert y\Vert _{\mathrm{nuc}}\right\} \end{aligned}$$
(1)

where \(\mu ,\lambda >0\) are regularization parameters. Here \(\Vert y\Vert _2= \sqrt{{\hbox {Tr}}(y^Ty)}\) is the Frobenius norm, \(\Vert y\Vert _1=\sum _{i,j=1}^n|y_{ij}|\) is the \(\ell _1\)-norm, and \(\Vert y\Vert _{\mathrm{nuc}}=\sum _{i=1}^n\sigma _i(y)\) (\(\sigma _i(y)\) are the singular values of \(y\)) is the nuclear norm of a matrix \(y\in {\mathbf {R}}^{n\times n}\).

Example 2

(Image recovery) Our second motivating example is the image recovery problem, where we want to recover an image \(y\in {\mathbf {R}}^{n\times n}\) from its noisy observation \(b=Ay+\xi \), where \(y\mapsto Ay\) is a given affine mapping (e.g., the restriction operator \(P_\varOmega \) defined as above, or some blur operator), and \(\xi \) is random noise. Assume that the image can be decomposed as \(y=y_\mathrm{L}+y_\mathrm{S}+y_\mathrm{sm}\), where \(y_\mathrm{L}\) is of low rank, \(y_\mathrm{sm}\) is the matrix of contamination by a “smooth background signal”, and \(y_\mathrm{S}\) is a sparse matrix of “singular corruption.” Under this assumption, in order to recover \(y\) from \(b\), it is natural to solve the optimization problem

$$\begin{aligned} {\hbox {Opt}}&= \min \limits _{y_\mathrm{L},y_\mathrm{S},y_\mathrm{sm}\in {\mathbf {R}}^{n\times n}}\left\{ \Vert A(y_\mathrm{L}+y_\mathrm{S}+y_\mathrm{sm}) -b\Vert _2+\mu _1\Vert y_\mathrm{L}\Vert _{\mathrm{nuc}}\right. \nonumber \\&\quad \left. +\,\mu _2 \Vert y_\mathrm{S}\Vert _1+\mu _3 \Vert y_\mathrm{sm}\Vert _{\mathrm{TV}}\right\} \end{aligned}$$
(2)

where \(\mu _1,\mu _2,\mu _3>0\) are regularization parameters. Here \(\Vert y\Vert _\mathrm{TV}\) is the total variation of an image \(y\):

$$\begin{aligned} \Vert y\Vert _{{\mathrm{TV}}}&= \Vert \nabla _i y\Vert _1+\Vert \nabla _j y\Vert _1,\\ (\nabla _iy)_{ij}&= y_{i+1,j}-y_{i,j},\; [i;j]\in {\mathbf {Z}}^2:\;1\le i<n-1,1\le j<n,\\ (\nabla _jy)_{ij}&= y_{i,j+1}-y_{i,j},\;[i;j]\in {\mathbf {Z}}^2:\;1\le i<n,1\le j<n-1. \end{aligned}$$

These and other examples motivate addressing the following multi-term composite minimization problem

$$\begin{aligned} \min \limits _{y\in Y} \left\{ \sum _{k=1}^K\left[ \psi _k(A_ky+b_k)+\varPsi _k(A_ky+b_k)\right] \right\} , \end{aligned}$$
(3)

and, more generally, the semi-separable problem

$$\begin{aligned} \min \limits _{[y^1;\ldots ;y^K] \in Y_1\times \cdots \times Y_K} \left\{ \sum _{k=1}^K\left[ \psi _k(y^k)+\varPsi _k(y^k)\right] : \;\sum _{k=1}^K A_ky^k =b\right\} . \end{aligned}$$
(4)

Here for \(1\le k\le K\) the domains \(Y_k\) are closed and convex, \(\psi _k(\cdot )\) are convex Lipschitz-continuous functions, and \(\varPsi _k(\cdot )\) are convex functions which are “simple and fit \(Y_k\)”.

The problem of multi-term composite minimization (3) has been considered (in a somewhat different setting) in [22] for \(K=2\). When \(K=1\), problem (3) becomes the usual composite minimization problem:

$$\begin{aligned} \min _{u\in U} \left\{ \psi (u)+\varPsi (u)\right\} \end{aligned}$$
(5)

which is well studied in the case where \(\psi (\cdot )\) is a smooth convex function and \(\varPsi (\cdot )\) is a simple non-smooth function. For instance, it was shown that the composite versions of the Fast Gradient Method originating in Nesterov’s seminal work [21] and further developed by many authors (see, e.g., [3, 4, 8, 25, 27] and references therein), as applied to (5), work as if there were no nonsmooth term at all and exhibit the \(O(1/t^2)\) convergence rate, which is the optimal rate attainable by first order algorithms of large-scale smooth convex optimization. Note that these algorithms cannot be directly applied to problems (3) with \(K>1\).

The semi-separable problem (4) with \(K=2\) has also been extensively studied using the augmented Lagrangian approach (see, e.g., [5, 11, 12, 16, 23, 24, 26, 28] and references therein). In particular, much work has been carried out on the alternating directions method of multipliers (ADMM, see [5] for an overview), which optimizes the augmented Lagrangian in an alternating fashion and exhibits an overall \(O(1/t)\) convergence rate. Note that the available accuracy bounds for those algorithms involve optimal values of Lagrange multipliers of the equality constraints (cf. [23]). Several variants of this method have been developed recently to handle the case \(K>2\) (see, e.g., [10]); however, most of these algorithms require iteratively solving subproblems of type (5), especially in the presence of non-smooth terms in the objective.

1.2 Our contribution

In this paper, we do not assume smoothness of the functions \(\psi _k\); instead, we suppose that the \(\psi _k\) are saddle point representable:

$$\begin{aligned} \psi _k(y^k)=\sup _{z^k\in Z_k}\left[ \phi _k(y^k,z^k)-{\overline{\varPsi }}_k(z^k)\right] ,\;\;1\le k\le K, \end{aligned}$$
(6)

where \(\phi _k(\cdot ,\cdot )\) are smooth functions which are convex–concave (i.e., convex in the first and concave in the second argument), \(Z_k\) are convex and compact, and \({\overline{\varPsi }}_k(\cdot )\) are simple convex functions on \(Z_k\). Let us consider, for instance, the multi-term composite minimization problem (3). Under (6), the primal problem (3) allows for the saddle point reformulation:

$$\begin{aligned} \min \limits _{y\in Y}~\max \limits _{[z^1;\ldots ;z^K]\in Z_1\times \cdots \times Z_K}\left\{ \sum _{k=1}^K\left[ \phi _k(A_ky+b_k,z^k)-{\overline{\varPsi }}_k(z^k)+\varPsi _k(A_ky+b_k)\right] \right\} \end{aligned}$$
(7)

Note that when there are no \(\varPsi _k,{\overline{\varPsi }}_k\)’s, problem (7) becomes a convex–concave saddle point problem with smooth cost function, studied in [14]. In particular, it was shown in [14] that the Mirror Prox (MP) algorithm originating from [17], when applied to the saddle point problem (7), exhibits the “theoretically optimal” convergence rate \(O(1/t)\). Our goal in this paper is to develop novel \(O(1/t)\)-converging first order algorithms for problem (7) (and also for the related saddle point reformulation of problem (4)); this appears to be the best rate known, under the circumstances, from the literature (and established there in an essentially less general setting than the one considered below).
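For instance (a standard illustration rather than a construction specific to this paper), the fit term \(\psi (y)=\Vert Ay-b\Vert _2\) from Example 2 is saddle point representable:

$$\begin{aligned} \Vert Ay-b\Vert _2=\max _{z:\,\Vert z\Vert _2\le 1}\langle z, Ay-b\rangle , \end{aligned}$$

i.e., (6) holds with the bilinear (hence smooth and convex–concave) \(\phi (y,z)=\langle z,Ay-b\rangle \), the convex compact set \(Z=\{z:\Vert z\Vert _2\le 1\}\), and \({\overline{\varPsi }}\equiv 0\); similarly, \(\Vert y\Vert _1=\max _{\Vert z\Vert _\infty \le 1}\langle z,y\rangle \).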

Our key observation is that the composite problem (3), (6) can be reformulated as a smooth linearly constrained saddle point problem by simply moving the nonsmooth terms into the problem domain. Namely, problem (3), (6) can be written as

$$\begin{aligned}&{\mathop {\mathop {\min }\limits _{y\in Y,\; [y^k;\tau ^k]\in Y_k^+}}\limits _{ 1\le k\le K}}~ {\mathop {\mathop {\max }\limits _{[z^k;\sigma ^k]\in Z_k^+}}\limits _{ 1\le k\le K}} \bigg \{\sum \limits _{k=1}^K\left[ \phi _k(y^k,z^k)-\sigma ^k+\tau ^k\right] :\; y^k=A_ky+b_k,\; k=1, \ldots ,K\bigg \}\\&Y_k^+=\left\{ [y^k;\tau ^k]: y^k\in Y_k, \tau ^k\ge \varPsi _k(y^k)\right\} ,\quad Z_k^+=\left\{ [z^k;\sigma ^k]: z^k\in Z_k, \sigma ^k\ge {\overline{\varPsi }}_k(z^k)\right\} ,\;k=1, \ldots ,K. \end{aligned}$$

We can further approximate the resulting problem by penalizing the equality constraints, thus passing to

$$\begin{aligned}&{\mathop {\mathop {\min }\limits _{{y\in Y,\; [y^k;\tau ^k]\in Y_k^+}}}\limits _{1\le k\le K}}~ {\mathop {\mathop {\max }\limits _{[z^k;\sigma ^k]\in Z_k^+}}\limits _{ 1\le k\le K}} \left\{ \sum \limits _{k=1}^K\left[ \phi _k(y^k,z^k)-\sigma ^k+\tau ^k +\rho _k\Vert y^k-A_ky-b_k\Vert _2\right] \right\} \nonumber \\&\quad ={\mathop {\mathop {\min }\limits _{y\in Y,\; [y^k;\tau ^k]\in Y_k^+}}\limits _{ 1\le k\le K}}~ {\mathop {\mathop {\max }\limits _{w^k\in W_k,\;[z^k;\sigma ^k]\in Z_k^+}}\limits _{ 1\le k\le K}} \left\{ \sum \limits _{k=1}^K\left[ \phi _k(y^k,z^k)-\sigma ^k\right. \right. \nonumber \\&\qquad \left. \left. +\,\tau ^k +\rho _k\langle y^k-A_ky-b_k,w^k\rangle \right] \right\} , \end{aligned}$$
(8)

where \(\rho _k>0\) are penalty parameters and \(W_k=\{w^k:\Vert w^k\Vert _2\le 1\}, k=1,\ldots ,K\); the second equality in (8) uses the identity \(\Vert s\Vert _2=\max _{\Vert w\Vert _2\le 1}\langle s,w\rangle \).

We solve the convex–concave saddle point problem (8), which has a smooth cost function, by the \(O(1/t)\)-converging Mirror Prox algorithm. It is worth mentioning that if the functions \(\phi _k\), \(\varPsi _k\) are Lipschitz continuous on the domains \(A_kY+b_k\) and the \(\rho _k\) are selected properly, the saddle point problem (8) is exactly equivalent to the problem of interest.

The monotone operator \(F\) associated with the saddle point problem in (8) has a special structure: the variables can be split into two blocks, \(u\) (all \(y\)-, \(z\)- and \(w\)-variables) and \(v\) (all \(\tau \)- and \(\sigma \)-variables), in such a way that the induced partition of \(F\) is \(F=[F_u(u);F_v]\), with the \(u\)-component \(F_u\) depending solely on \(u\) and a constant \(v\)-component \(F_v\). We demonstrate below that in this case the basic MP algorithm admits a “composite” version which works essentially “as if” there were no \(v\)-component at all. This composite version of MP will be the workhorse of all subsequent developments.

The main body of this paper is organized as follows. In Sect. 2 we present required background on variational inequalities with monotone operators and convex–concave saddle points. In Sect. 3 we present and justify the composite MP algorithm. In Sects. 4 and 5, we apply our approach to problems (3), (6) and (4), (6). In Sect. 4.4, we illustrate our approach (including numerical results) as applied to the motivating examples. All proofs missing in the main body of the paper are relegated to the Appendix.

2 Preliminaries: variational inequalities and accuracy certificates

Execution protocols and accuracy certificates. Let \(X\) be a nonempty closed convex set in a Euclidean space \(E\) and \(F(x):X\rightarrow E\) be a vector field.

Suppose that we process \((X,F)\) by an algorithm which generates a sequence of search points \(x_t\in X\), \(t=1,2,\ldots \), and computes the vectors \(F(x_t)\), so that after \(t\) steps we have at our disposal the \(t\)-step execution protocol \(\mathcal{{I}}_t=\{x_\tau ,F(x_\tau )\}_{\tau =1}^t\). By definition, an accuracy certificate for this protocol is simply a collection \(\lambda ^t=\{\lambda ^t_\tau \}_{\tau =1}^t\) of nonnegative reals summing up to 1. We associate with the protocol \(\mathcal{{I}}_t\) and accuracy certificate \(\lambda ^t\) two quantities as follows:

  • Approximate solution \(x^t(\mathcal{{I}}_t,\lambda ^t):=\sum _{\tau =1}^t \lambda ^t_\tau x_\tau \), which is a point of \(X\);

  • Resolution \({\hbox {Res}}(X'\big |\mathcal{{I}}_t,\lambda ^t)\) on a subset \(X'\ne \emptyset \) of \(X\) given by

    $$\begin{aligned} {\hbox {Res}}(X'\big |\mathcal{{I}}_t,\lambda ^t) = \sup \limits _{x\in X'}\sum _{\tau =1}^t\lambda ^t_\tau \langle F(x_\tau ),x_\tau -x\rangle . \end{aligned}$$
    (9)
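In code, the two quantities just defined are straightforward to evaluate. Below is a minimal sketch (ours, not part of the paper), assuming the search points are numpy vectors and replacing the supremum in (9) by a maximum over a finite list of candidate points of \(X'\); for the sets used later (balls, boxes, simple polytopes) the supremum is an equally simple explicit maximization.

```python
import numpy as np

def approximate_solution(xs, lambdas):
    # x^t(I_t, lambda^t): certificate-weighted average of the search points x_1, ..., x_t
    return sum(lam * x for lam, x in zip(lambdas, xs))

def resolution(xs, Fxs, lambdas, candidates):
    # Res(X' | I_t, lambda^t) of (9), with the sup over X' replaced by a max over a
    # finite list `candidates` of points of X' (a simplification made for this sketch)
    return max(sum(lam * np.dot(Fx, x - y) for lam, x, Fx in zip(lambdas, xs, Fxs))
               for y in candidates)
```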

The role of these notions in the optimization context is explained next.

Variational inequalities. Assume that \(F\) is monotone, i.e.,

$$\begin{aligned} \langle F(x)-F(y),x-y\rangle \ge 0, \;\;\forall x,y\in X \end{aligned}$$
(10)

and let our goal be to approximate a weak solution to the variational inequality (v.i.) \(\hbox {vi}(X,F)\) associated with \((X,F)\); a weak solution is defined as a point \(x_*\in X\) such that

$$\begin{aligned} \langle F(y),y-x_*\rangle \ge 0\,\,\forall y\in X. \end{aligned}$$
(11)

A natural (in)accuracy measure of a candidate weak solution \(x\in X\) to \(\hbox {vi}(X,F)\) is the dual gap function

$$\begin{aligned} {\epsilon _{\mathrm{VI}}}(x\big |X,F) = \sup _{y\in X} \langle F(y),x-y\rangle \end{aligned}$$
(12)

This inaccuracy is a convex nonnegative function which vanishes exactly on the set of weak solutions to vi\((X,F)\).

Proposition 1

For every \(t\), every execution protocol \(\mathcal{{I}}_t=\{x_\tau \in X,F(x_\tau )\}_{\tau =1}^t\) and every accuracy certificate \(\lambda ^t\) one has \(x^t:=x^t(\mathcal{{I}}_t,\lambda ^t)\in X\). Besides this, assuming \(F\) monotone, for every closed convex set \(X'\subset X\) such that \(x^t\in X'\) one has

$$\begin{aligned} {\epsilon _{\mathrm{VI}}}\Big (x^t\big |X',F\Big )\le {\hbox {Res}}\Big (X'\big |\mathcal{{I}}_t,\lambda ^t\Big ). \end{aligned}$$
(13)

Proof

Indeed, \(x^t\) is a convex combination of the points \(x_\tau \in X\) with coefficients \(\lambda ^t_\tau \), whence \(x^t\in X\). With \(X'\) as in the premise of the proposition, we have

$$\begin{aligned} \forall y\in X': \langle F(y),x^t-y\rangle&= \sum _{\tau =1}^t\lambda ^t_\tau \langle F(y),x_\tau -y\rangle \le \sum _{\tau =1}^t \lambda ^t_\tau \langle F(x_\tau ),x_\tau -y\rangle \\&\le {\hbox {Res}}(X'\big |\mathcal{{I}}_t,\lambda ^t), \end{aligned}$$

where the first \(\le \) is due to monotonicity of \(F\). \(\square \)

Convex–concave saddle point problems. Now let \(X=X_1\times X_2\), where \(X_i\) is a closed convex subset of a Euclidean space \(E_i\), \(i=1,2\), and \(E=E_1\times E_2\), and let \(\varPhi (x^1,x^2):X_1\times X_2\rightarrow {\mathbf {R}}\) be a locally Lipschitz continuous function which is convex in \(x^1\in X_1\) and concave in \(x^2\in X_2\). \(X_1,X_2,\varPhi \) give rise to the saddle point problem

$$\begin{aligned} \hbox {SadVal}=\min _{x^1\in X_1}\max _{x^2\in X_2}\varPhi (x^1,x^2), \end{aligned}$$
(14)

two induced convex optimization problems

$$\begin{aligned} {\hbox {Opt}}(P)&= {\min }_{x^1\in X_1}\left[ \overline{\varPhi }(x^1)={\sup }_{x^2\in X_2}\varPhi (x^1,x^2)\right] (P)\nonumber \\ {\hbox {Opt}}(D)&= {\max }_{x^2\in X_2}\left[ \underline{\varPhi }(x^2)={\inf }_{x^1\in X_1}\varPhi (x^1,x^2)\right] (D) \end{aligned}$$
(15)

and a vector field \(F(x^1,x^2)=[F_1(x^1,x^2);F_2(x^1,x^2)]\) specified (in general, non-uniquely) by the relations

$$\begin{aligned} \forall (x^1,x^2)\in X_1\times X_2: F_1(x^1,x^2)\in \partial _{x^1} \varPhi (x^1,x^2),\,F_2(x^1,x^2)\in \partial _{x^2}[-\varPhi (x^1,x^2)]. \end{aligned}$$

It is well known that \(F\) is monotone on \(X\), and that weak solutions to the vi\((X,F)\) are exactly the saddle points of \(\varPhi \) on \(X_1\times X_2\). These saddle points exist if and only if \((P)\) and \((D)\) are solvable with equal optimal values, in which case the saddle points are exactly the pairs \((x^1_*,x^2_*)\) comprised by optimal solutions to \((P)\) and \((D)\). In general, \({\hbox {Opt}}(P)\ge {\hbox {Opt}}(D)\), with equality definitely taking place when at least one of the sets \(X_1,X_2\) is bounded; if both are bounded, saddle points do exist. To avoid unnecessary complications, from now on, when speaking about a convex–concave saddle point problem, we assume that the problem is proper, meaning that \({\hbox {Opt}}(P)\) and \({\hbox {Opt}}(D)\) are reals; this definitely is the case when \(X\) is bounded.
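For illustration (a standard special case, not specific to this paper): for the bilinear cost \(\varPhi (x^1,x^2)=\langle x^2,Ax^1-b\rangle \), the field \(F\) is single-valued and affine,

$$\begin{aligned} F(x^1,x^2)=\left[ F_1(x^1,x^2);F_2(x^1,x^2)\right] =\left[ A^Tx^2;\;b-Ax^1\right] , \end{aligned}$$

and \(\langle F(x)-F(x'),x-x'\rangle =0\) for all \(x,x'\), so that \(F\) is indeed monotone (and Lipschitz continuous).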

A natural (in)accuracy measure for a candidate \(x=[x^1;x^2]\in X_1\times X_2\) to the role of a saddle point of \(\varPhi \) is the quantity

$$\begin{aligned} {\epsilon _{{\tiny \mathrm Sad}}}(x\big |X_1,X_2,\varPhi )&= \overline{\varPhi }(x^1) -\underline{\varPhi }(x^2)\nonumber \\&= [\overline{\varPhi }(x^1)-{\hbox {Opt}}(P)]+[{\hbox {Opt}}(D)-\underline{\varPhi }(x^2)]\nonumber \\&+\,\underbrace{[{\hbox {Opt}}(P)-{\hbox {Opt}}(D)]}_{\ge 0} \end{aligned}$$
(16)

This inaccuracy is nonnegative and is the sum of the duality gap \({\hbox {Opt}}(P)-{\hbox {Opt}}(D)\) (always nonnegative and vanishing when one of the sets \(X_1,X_2\) is bounded) and the inaccuracies, in terms of respective objectives, of \(x^1\) as a candidate solution to \((P)\) and \(x^2\) as a candidate solution to \((D)\).

The role of accuracy certificates in convex–concave saddle point problems stems from the following observation:

Proposition 2

Let \(X_1,X_2\) be nonempty closed convex sets, \(\varPhi :X:=X_1\times X_2\rightarrow {\mathbf {R}}\) be a locally Lipschitz continuous convex–concave function, and \(F\) be the associated monotone vector field on \(X\).

Let \(\mathcal{{I}}_t=\{x_\tau =[x^1_\tau ;x^2_\tau ]\in X,{F}(x_\tau )\}_{\tau =1}^t\) be a \(t\)-step execution protocol associated with \((X,{F})\) and \(\lambda ^t=\{\lambda ^t_\tau \}_{\tau =1}^t\) be an associated accuracy certificate. Then \(x^t:=x^t(\mathcal{{I}}_t,\lambda ^t)=[x^{1,t};x^{2,t}] \in X\).

Assume, further, that \(X^\prime _1\subset X_1\) and \(X^\prime _2\subset X_2\) are closed convex sets such that

$$\begin{aligned} x^t\in X^\prime :=X^\prime _1\times X^\prime _2. \end{aligned}$$
(17)

Then

$$\begin{aligned} {\epsilon _{{\tiny \mathrm Sad}}}\left( x^t\big |X_1^\prime ,X_2^\prime ,\varPhi \right) =\sup _{x^2\in X_2'}\varPhi \left( x^{1,t},x^2\right) -\inf _{x^1\in X_1'}\varPhi \left( x^1,x^{2,t}\right) \le {\hbox {Res}}\left( X^\prime \big |\mathcal{{I}}_t,\lambda ^t\right) . \end{aligned}$$
(18)

In addition, setting \(\widetilde{\varPhi }(x^1)=\sup _{x^2\in X_2^\prime }\varPhi \left( x^1,x^2\right) \), for every \(\widehat{x}^1\in X^\prime _1\) we have

$$\begin{aligned} \widetilde{\varPhi }\left( x^{1,t}\right) -\widetilde{\varPhi }\left( \widehat{x}^1\right) \le \widetilde{\varPhi }\left( x^{1,t}\right) -\varPhi (\widehat{x}^1,x^{2,t})\le {\hbox {Res}}(\{\widehat{x}^1\}\times X^\prime _2\big |\mathcal{{I}}_t,\lambda ^t). \end{aligned}$$
(19)

In particular, when the problem \({\hbox {Opt}}=\min _{x^1\in X^\prime _1}\widetilde{\varPhi }(x^1)\) is solvable with an optimal solution \(x^1_*\), we have

$$\begin{aligned} \widetilde{\varPhi }(x^{1,t})-{\hbox {Opt}}\le {\hbox {Res}}\left( \{x^1_*\}\times X^\prime _2\big |\mathcal{{I}}_t,\lambda ^t\right) . \end{aligned}$$
(20)

Proof

The inclusion \(x^t\in X\) is evident. For every set \(Y\subset X\) we have

$$\begin{aligned}&\forall [p;q]\in Y:\\&{\hbox {Res}}(Y\big |\mathcal{{I}}_t,\lambda ^t) \ge \sum _{\tau =1}^t\lambda ^t_\tau \left[ \langle F_1(x_\tau ),x^1_\tau -p\rangle + \langle F_2(x_\tau ),x^2_\tau -q\rangle \right] \\&\quad \ge \sum _{\tau =1}^t\lambda ^t_\tau \left[ \left[ \varPhi \left( x^1_\tau ,x^2_\tau \right) -\varPhi \left( p,x^2_\tau \right) \right] + \left[ \varPhi \left( x^1_\tau ,q \right) -\varPhi \left( x^1_\tau ,x^2_\tau \right) \right] \right] \\&\qquad \text {[by the origin of }F\text { and since }\varPhi \text { is convex--concave]}\\&\quad =\sum _{\tau =1}^t\lambda ^t_\tau \left[ \varPhi \left( x^1_\tau ,q\right) -\varPhi \left( p,x^2_\tau \right) \right] \ge \varPhi \left( x^{1,t},q\right) -\varPhi \left( p,x^{2,t}\right) \\&\qquad \text {[by the origin of }x^t\text { and since }\varPhi \text { is convex--concave]} \end{aligned}$$

Thus, for every \(Y\subset X\) we have

$$\begin{aligned} \sup _{[p;q]\in Y} \left[ \varPhi \left( x^{1,t},q\right) -\varPhi \left( p,x^{2,t}\right) \right] \le {\hbox {Res}}(Y\big |\mathcal{{I}}_t,\lambda ^t). \end{aligned}$$
(21)

Now assume that (17) takes place. Setting \(Y=X':=X^\prime _1\times X^\prime _2\) and recalling what \({\epsilon _{{\tiny \mathrm Sad}}}\) is, (21) yields (18). With \(Y=\{\widehat{x}^1\}\times X^\prime _2\), (21) yields the second inequality in (19); the first inequality in (19) is evident due to \(x^{2,t}\in X^\prime _2\).\(\square \)

3 Composite Mirror Prox algorithm

3.1 The situation

Let \(U\) be a nonempty closed convex domain in a Euclidean space \(E_u\), \(E_v\) be a Euclidean space, and \(X\) be a nonempty closed convex domain in \(E=E_u\times E_v\). We denote vectors from \(E\) by \(x=[u;v]\) with blocks \(u,v\) belonging to \(E_u\) and \(E_v\), respectively.

We assume that

A1:

\(E_u\) is equipped with a norm \(\Vert \cdot \Vert \), the conjugate norm being \(\Vert \cdot \Vert _*\), and \(U\) is equipped with a distance-generating function (d.g.f.) \(\omega (\cdot )\) (that is, with a continuously differentiable convex function \(\omega (\cdot ):U\rightarrow {\mathbf {R}}\)) which is compatible with \(\Vert \cdot \Vert \), meaning that \(\omega \) is strongly convex, modulus 1, w.r.t. \(\Vert \cdot \Vert \). Note that d.g.f. \(\omega \) defines the Bregman distance

$$\begin{aligned} V_u(w):=\omega (w)-\omega (u)-\langle \omega '(u),w-u\rangle \ge {1\over 2}\Vert w-u\Vert ^2,\,u,w\in U, \end{aligned}$$
(22)

where the concluding inequality follows from the strong convexity, modulus 1, of the d.g.f. w.r.t. \(\Vert \cdot \Vert \). In the sequel, we refer to the pair \(\Vert \cdot \Vert ,\,\omega (\cdot )\) as the proximal setup for \(U\) (a minimal illustration is given after assumption A4 below).

A2:

the image \(PX\) of \(X\) under the projection \(x=[u;v]\mapsto Px:=u\) is contained in \(U\).

A3:

we are given a vector field \(F(u,v):X\rightarrow E\) on \(X\) with the following special structure:

$$\begin{aligned} F(u,v)=[F_u(u);F_v], \end{aligned}$$

with \(F_u(u)\in E_u\) and \(F_v\in E_v\); that is, the \(u\)-component of \(F\) depends solely on \(u\), and the \(v\)-component of \(F\) is constant. We assume also that

$$\begin{aligned} \forall u,u'\in U: \Vert F_u(u)-F_u(u')\Vert _*\le L\Vert u-u'\Vert +M \end{aligned}$$
(23)

with some \(L<\infty \), \(M<\infty \).

A4:

the linear form \(\langle F_v,v\rangle \) of \([u;v]\in E\) is bounded from below on \(X\) and is coercive on \(X\) w.r.t. \(v\): whenever \([u_t;v_t]\in X\), \(t=1,2, \ldots \) is a sequence such that \(\{u_t\}_{t=1}^\infty \) is bounded and \(\Vert v_t\Vert _2\rightarrow \infty \) as \(t\rightarrow \infty \), we have \(\langle F_v,v_t\rangle \rightarrow \infty \), \(t\rightarrow \infty \).
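As a small illustration of A1 (ours, not taken from the paper): with the standard Euclidean setup \(\Vert \cdot \Vert =\Vert \cdot \Vert _2\) and \(\omega (u)={1\over 2}\Vert u\Vert _2^2\), the Bregman distance (22) is \(V_u(w)={1\over 2}\Vert w-u\Vert _2^2\), so the lower bound in (22) holds with equality.

```python
import numpy as np

def omega(u):        # Euclidean d.g.f., strongly convex with modulus 1 w.r.t. ||.||_2
    return 0.5 * np.dot(u, u)

def omega_grad(u):   # omega'(u)
    return u

def bregman(u, w):   # V_u(w) = omega(w) - omega(u) - <omega'(u), w - u> = 0.5*||w - u||_2^2
    return omega(w) - omega(u) - np.dot(omega_grad(u), w - u)
```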

Our goal in this section is to show that in the situation in question, proximal type processing of \(F\) (say, when \(F\) is monotone on \(X\) and we want to solve the variational inequality given by \(F\) and \(X\)) can be implemented “as if” there were no \(v\)-components in the domain and in \(F\).

A generic application we are aiming at is as follows. We want to solve a “composite” saddle point problem

$$\begin{aligned} \hbox {SadVal}=\min _{u_1\in U_1}\max _{u_2\in U_2} \left[ \phi (u_1,u_2)+\varPsi _1(u_1)-\varPsi _2(u_2)\right] , \end{aligned}$$
(24)

where

  • \(U_1\subset E_1\) and \(U_2\subset E_2\) are nonempty closed convex sets in Euclidean spaces \(E_1,E_2\)

  • \(\phi \) is a smooth (with Lipschitz continuous gradient) convex–concave function on \(U_1\times U_2\)

  • \(\varPsi _1:U_1\rightarrow {\mathbf {R}}\) and \(\varPsi _2:U_2\rightarrow {\mathbf {R}}\) are convex functions, perhaps nonsmooth, but “fitting” the domains \(U_1\), \(U_2\) in the following sense: for \(i=1,2\), we can equip \(E_i\) with a norm \(\Vert \cdot \Vert _{(i)}\), and \(U_i\) with a d.g.f. \(\omega _i(\cdot )\) compatible with this norm, in such a way that optimization problems of the form

$$\begin{aligned} \min _{u_i\in U_i}\left[ \alpha \omega _i(u_i)+\beta \varPsi _i(u_i) +\langle \xi ,u_i\rangle \right] \qquad {[\alpha >0,\beta >0]} \end{aligned}$$
(25)

are easy to solve (a closed-form instance is sketched below).
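For example (our illustration, not one of the paper's prescribed setups): with \(U_i=E_i={\mathbf {R}}^n\), \(\omega _i(u)={1\over 2}\Vert u\Vert _2^2\) and \(\varPsi _i(u)=\lambda \Vert u\Vert _1\), problem (25) is solved in closed form by entrywise soft-thresholding.

```python
import numpy as np

def solve_25_l1(xi, alpha, beta, lam):
    # minimizer over R^n of  alpha*0.5*||u||_2^2 + beta*lam*||u||_1 + <xi, u>,
    # i.e. an instance of problem (25) with omega_i(u) = 0.5*||u||_2^2, Psi_i(u) = lam*||u||_1
    return -np.sign(xi) * np.maximum(np.abs(xi) - beta * lam, 0.0) / alpha
```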

Our ultimate goal is to solve (24) “as if” there were no (perhaps) nonsmooth terms \(\varPsi _i\). With our approach, we intend to “get rid” of the nonsmooth terms by “moving” them into the description of the problem’s domains. To this end, we act as follows:

  • For \(i=1,2\), we set \(X_i=\{x_i=[u_i;v_i]\in E_i\times {\mathbf {R}}: u_i\in U_i,v_i\ge \varPsi _i(u_i)\}\) and set

    $$\begin{aligned} U&:= U_1\times U_2\subset E_u:=E_1\times E_2, E_v={\mathbf {R}}^2,\\ X&= \Big \{x=[u=[u_1;u_2];v=[v_1;v_2]]:u_i\in U_i,v_i\ge \varPsi _i(u_i),i=1,2\Big \}\\&\subset E_u\times E_v, \end{aligned}$$

    thus ensuring that \(PX\subset U\), where \(P[u;v]=u\);

  • We rewrite the problem of interest equivalently as

    $$\begin{aligned} \hbox {SadVal}=\min _{x^1=[u_1;v_1]\in X_1}\max _{x^2=[u_2;v_2]\in X_2}\left[ \varPhi (u_1,v_1;u_2,v_2)=\phi (u_1,u_2)+v_1-v_2\right] \end{aligned}$$
    (26)

    Note that \(\varPhi \) is convex–concave and smooth. The associated monotone operator is

    $$\begin{aligned} F(u&= [u_1;u_2],v=[v_1;v_2])\\&= \left[ F_u(u)= [\nabla _{u_1}\phi (u_1,u_2);-\nabla _{u_2}\phi (u_1,u_2)];F_v=[1;1]\right] \end{aligned}$$

    and is of the structure required in A3. Note that \(F\) is Lipschitz continuous, so that (23) is satisfied with properly selected \(L\) and with \(M=0\).

We intend to process the reformulated saddle point problem (26) with a properly modified state-of-the-art MP saddle point algorithm [17]. In its basic version, as applied to a variational inequality with Lipschitz continuous monotone operator (in particular, to a convex–concave saddle point problem with smooth cost function), this algorithm exhibits the \(O(1/t)\) rate of convergence, which is the best rate achievable with First Order saddle point algorithms as applied to large-scale saddle point problems (even those with bilinear cost function). The basic MP would require equipping the domain \(X=X_1\times X_2\) of (26) with a d.g.f. \(\omega (x_1,x_2)\) resulting in easy-to-solve auxiliary problems of the form

$$\begin{aligned} \min _{x=[u_1;u_2;v_1;v_2]\in X}\left[ \omega (x) +\langle \xi ,x\rangle \right] , \end{aligned}$$
(27)

which would require accounting in \(\omega \), in a nonlinear fashion, for the \(v\)-variables (since \(\omega \) should be strongly convex in both the \(u\)- and \(v\)-variables). While it is easy to construct from our postulated “building blocks” \(\omega _1\), \(\omega _2\) an \(\omega \) leading to easy-to-solve problems (25), this construction results in auxiliary problems (27) somewhat more complicated than problems (25). To overcome this difficulty, below we develop a “composite” MP algorithm taking advantage of the special structure of \(F\), as expressed in A3, and preserving the favorable efficiency estimates of the prototype. The modified MP operates with auxiliary problems of the form

$$\begin{aligned} \min _{x=[u_1;u_2;v_1;v_2]\in X_1\times X_2}\sum _{i=1}^2\left[ \alpha _i \omega _i(u_i)+\beta _i v_i+\langle \xi _i,u_i\rangle \right] , \quad [\alpha _i>0,\beta _i>0] \end{aligned}$$

that is, with pairs of uncoupled problems

$$\begin{aligned} \min _{[u_i;v_i]\in X_i} \left[ \alpha _i \omega _i(u_i)+\beta _i v_i+\langle \xi _i,u_i\rangle \right] ,\,i=1,2; \end{aligned}$$

recalling that \(X_i=\{[u_i;v_i]:u_i\in U_i,v_i\ge \varPsi _i(u_i)\}\), these problems are nothing but the easy-to-solve problems (25).

3.2 Composite Mirror Prox algorithm

Given the situation described in Sect. 3.1, we define the associated prox-mapping: for \(\xi =[\eta ;\zeta ]\in E\) and \(x=[u;v]\in X\),

$$\begin{aligned}&P_x(\xi )\in \mathop {\hbox {Argmin}}_{[s;w]\in X} \left\{ \langle \eta -\omega '(u),s\rangle +\langle \zeta ,w\rangle +\omega (s)\right\} \nonumber \\&\qquad \quad \equiv \mathop {\hbox {Argmin}}_{[s;w]\in X} \left\{ \langle \eta ,s\rangle +\langle \zeta ,w\rangle +V_u(s)\right\} \end{aligned}$$
(28)

Observe that \(P_x([\eta ;\gamma F_v])\) is well defined whenever \(\gamma >0\)—the required \(\mathop {\hbox {Argmin}}\) is nonempty due to the strong convexity of \(\omega \) on \(U\) and assumption A4 (for verification, see item \(0^{\circ }\) in Appendix 1). Now consider the following process:

$$\begin{aligned} x_1&:= [u_1;v_1]\in X;\nonumber \\ y_{\tau }&:= [u'_{\tau };v'_{\tau }]=P_{x_\tau }(\gamma _\tau F(x_\tau ))=P_{x_\tau }(\gamma _\tau [ F_u(u_\tau );F_v])\nonumber \\ x_{\tau +1}&:= [u_{\tau +1};v_{\tau +1}]=P_{x_\tau }(\gamma _\tau F(y_\tau ))=P_{x_\tau }(\gamma _\tau [ F_u(u'_\tau );F_v]), \end{aligned}$$
(29)

where \(\gamma _\tau >0\); the latter relation, due to the above, implies that the recurrence (29) is well defined.
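In code, the recurrence (29) looks as follows. This is a minimal sketch (ours, not the paper's implementation), assuming a constant stepsize \(\gamma \le 1/L\) (so that (30) holds with \(M=0\)), numpy arrays for \(u\) and \(v\), and a user-supplied routine prox(u, v, eta, zeta) returning a minimizer of \(\langle \eta ,s\rangle +\langle \zeta ,w\rangle +V_u(s)\) over \([s;w]\in X\), i.e. the prox-mapping (28).

```python
import numpy as np

def composite_mirror_prox(u0, v0, F_u, F_v, prox, gamma, T):
    # CoMP recurrence (29) with constant stepsize gamma (gamma <= 1/L suffices when M = 0)
    u, v = u0, v0
    u_sum, v_sum = np.zeros_like(u0), np.zeros_like(v0)
    for _ in range(T):
        # y_tau = P_{x_tau}(gamma [F_u(u_tau); F_v])       (extrapolation step)
        uy, vy = prox(u, v, gamma * F_u(u), gamma * F_v)
        # x_{tau+1} = P_{x_tau}(gamma [F_u(u'_tau); F_v])   (update step)
        u, v = prox(u, v, gamma * F_u(uy), gamma * F_v)
        u_sum, v_sum = u_sum + uy, v_sum + vy
    # approximate solution x^t of Corollary 1: average of the y's
    # (equal certificate weights lambda_tau = 1/T for constant stepsizes)
    return u_sum / T, v_sum / T
```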

Theorem 1

In the setting of Sect. 3.1, assuming that A1–A4 hold, consider the Composite Mirror Prox recurrence (29) (CoMP) with stepsizes \(\gamma _\tau >0\), \(\tau =1,2, \ldots \) satisfying the relation:

$$\begin{aligned} \delta _\tau :=\gamma _\tau \langle F_u(u'_\tau )-F_u(u_\tau ),u'_\tau -u_{\tau +1}\rangle -V_{u'_\tau }(u_{\tau +1}) -V_{u_\tau }(u'_{\tau })\le \gamma _\tau ^2M^2. \end{aligned}$$
(30)

Then the corresponding execution protocol \(\mathcal{{I}}_t=\{y_\tau ,F(y_\tau )\}_{\tau =1}^t\) admits accuracy certificate \(\lambda ^t=\{\lambda ^t_\tau =\gamma _\tau /\sum _{i=1}^t\gamma _i\}\) such that for every \(X'\subset X\) it holds

$$\begin{aligned} {\hbox {Res}}(X'\big |\mathcal{{I}}_t,\lambda ^t)\le {\varTheta [X']+M^2\sum _{\tau =1}^t\gamma _\tau ^2\over \sum _{\tau =1}^t\gamma _\tau },\;\;\;\varTheta [X']=\sup _{[u;v]\in X'}V_{u_1}(u). \end{aligned}$$
(31)

Relation (30) is definitely satisfied when \(0<\gamma _\tau \le ({\sqrt{2}L})^{-1}\), or, in the case of \(M=0\), when \(\gamma _\tau \le L^{-1}\).

Invoking Propositions 1, 2, we arrive at the following

Corollary 1

Under the premise of Theorem 1, for every \(t=1,2, \ldots \), setting

$$\begin{aligned} x^t=[u^t;v^t]={1\over \sum _{\tau =1}^t\gamma _\tau }\sum _{\tau =1}^t\gamma _\tau y_\tau . \end{aligned}$$

we ensure that \(x^t\in X\) and that

(i) In the case when \(F\) is monotone on \(X\), we have

$$\begin{aligned} {\epsilon _{\mathrm{VI}}}(x^t\big |X,F)\le \left[ {\sum }_{\tau =1}^t\gamma _\tau \right] ^{-1} \left[ \varTheta [X]+M^2{\sum }_{\tau =1}^t\gamma _\tau ^2\right] . \end{aligned}$$
(32)

(ii) Let \(X=X_1\times X_2\), and let \(F\) be the monotone vector field associated with the saddle point problem (14) with convex–concave locally Lipschitz continuous cost function \(\varPhi \). Then

$$\begin{aligned} {\epsilon _{{\tiny \mathrm Sad}}}(x^t\big |X_1,X_2,\varPhi )\le \left[ {\sum }_{\tau =1}^t\gamma _\tau \right] ^{-1} \left[ \varTheta [X]+M^2{\sum }_{\tau =1}^t\gamma _\tau ^2\right] . \end{aligned}$$
(33)

In addition, assuming that problem \((P)\) in (15) is solvable with optimal solution \(x^1_*\) and denoting by \(x^{1,t}\) the projection of \(x^t\in X=X_1\times X_2\) onto \(X_1\), we have

$$\begin{aligned} \overline{\varPhi }(x^{1,t})-{\hbox {Opt}}(P)\le \left[ {\sum }_{\tau =1}^t\gamma _\tau \right] ^{-1}\left[ \varTheta [\{x^1_*\}\times X_2]+M^2{\sum }_{\tau =1}^t\gamma _\tau ^2\right] . \end{aligned}$$
(34)

Remark

When \(F\) is Lipschitz continuous (that is, (23) holds true with some \(L>0\) and \(M=0\)), the requirements on the stepsizes imposed in the premise of Theorem 1 reduce to \(\delta _\tau \le 0\) for all \(\tau \) and are definitely satisfied with the constant stepsizes \(\gamma _\tau =1/L\). Thus, in the case under consideration we can assume w.l.o.g. that \(\gamma _\tau \ge 1/L\), thus ensuring that the upper bound on \({\hbox {Res}}(X'\big |\mathcal{{I}}_t,\lambda ^t)\) in (31) is \(\le \varTheta [X']Lt^{-1}\). As a result, (34) becomes

$$\begin{aligned} \overline{\varPhi }(x^{1,t})-{\hbox {Opt}}(P)\le \varTheta [\{x^1_*\}\times X_2]Lt^{-1}. \end{aligned}$$
(35)

3.3 Modifications

In this section, we demonstrate that our algorithm in fact admits some freedom in building approximate solutions, which can be used to improve the quality of the solutions to some extent. The modifications to be presented originate from [19]. We assume that we are in the situation described in Sect. 3.1, and assumptions A1–A4 are in force. In addition, we assume that

A5:

The vector field \(F\) described in A3 is monotone, and the variational inequality given by \((X,F)\) has a weak solution:

$$\begin{aligned} \exists x_*=[u_*;v_*]\in X: \langle F(y),y-x_*\rangle \ge 0\,\,\forall y\in X \end{aligned}$$
(36)

Lemma 1

In the situation of Sect. 3.1 and under assumptions A1–A5, for \(R\ge 0\) let us set

$$\begin{aligned} \widehat{\varTheta }(R)=\max _{u,u'\in U}\left\{ V_u(u'):\Vert u-u_1\Vert \le R, \Vert u'-u_1\Vert \le R\right\} \end{aligned}$$
(37)

(this quantity is finite since \(\omega \) is continuously differentiable on \(U\)), and let

$$\begin{aligned} \{x_\tau =[u_\tau ;v_\tau ]:\tau \le N+1,y_\tau :\tau \le N\} \end{aligned}$$

be the trajectory of the \(N\)-step MP algorithm (29) with stepsizes \(\gamma _\tau >0\) which ensure (30) for \(\tau \le N\). Then for all \(u\in U\) and \(t\le N+1\),

$$\begin{aligned} 0\le V_{u_t}(u)\le \widehat{\varTheta }(\max [R_N,\Vert u-u_1\Vert ]),\;\;\;R_N:=2\left( 2V_{u_1} (u_*)+M^2\sum _{\tau =1}^{N-1} \gamma _\tau ^2\right) ^{1/2}, \end{aligned}$$
(38)

with \(u_*\) defined in (36).

Proposition 3

In the situation of Sect. 3.1 and under assumptions A1–A5, let \(N\) be a positive integer, and let \(\mathcal{{I}}_N=\{y_\tau ,F(y_\tau )\}_{\tau =1}^N\) be the execution protocol generated by the \(N\)-step CoMP (29) with stepsizes \(\gamma _\tau \) ensuring (30). Let also \(\lambda ^N=\{\lambda _1, \ldots ,\lambda _N\}\) be a collection of positive reals summing up to 1 and such that

$$\begin{aligned} \lambda _1/\gamma _1\le \lambda _2/\gamma _2\le \cdots \le \lambda _N/\gamma _N. \end{aligned}$$
(39)

Then for every \(R\ge 0\), with \(X_R=\{x=[u;v]\in X: \Vert u-u_1\Vert \le R\}\) one has

$$\begin{aligned} {\hbox {Res}}(X_R|\mathcal{{I}}_N,\lambda ^N)\le {\lambda _N\over \gamma _N}\widehat{\varTheta }(\max [R_N,R]) +M^2\sum _{\tau =1}^N\lambda _\tau \gamma _\tau , \end{aligned}$$
(40)

with \(\widehat{\varTheta }(\cdot )\) and \(R_N\) defined by (37) and (38).

Invoking Propositions 1, 2, we arrive at the following modification of Corollary 1.

Corollary 2

Under the premise and in the notation of Proposition 3, setting

$$\begin{aligned} x^N=[u^N;v^N]=\sum _{\tau =1}^N\lambda _\tau y_\tau . \end{aligned}$$

we ensure that \(x^N\in X\). Besides this,

(i) Let \(X'\) be a closed convex subset of \(X\) such that \(x^N\in X'\) and the projection of \(X'\) onto the \(u\)-space is contained in the \(\Vert \cdot \Vert \)-ball of radius \(R\) centered at \(u_1\). Then

$$\begin{aligned} {\epsilon _{\mathrm{VI}}}(x^N\big |X',F)\le {\lambda _N\over \gamma _N}\widehat{\varTheta }(\max [R_N,R]) +M^2\sum _{\tau =1}^N\lambda _\tau \gamma _\tau . \end{aligned}$$
(41)

(ii) Let \(X=X_1\times X_2\) and \(F\) be the monotone vector field associated with the saddle point problem (14) with convex–concave locally Lipschitz continuous cost function \(\varPhi \). Let, further, \(X^\prime _i\) be closed convex subsets of \(X_i\), \(i=1,2\), such that \(x^N\in X^\prime _1\times X^\prime _2\) and the projection of \(X^\prime _1\times X^\prime _2\) onto the \(u\)-space is contained in the \(\Vert \cdot \Vert \)-ball of radius \(R\) centered at \(u_1\). Then

$$\begin{aligned} {\epsilon _{{\tiny \mathrm Sad}}}(x^N\big |X^\prime _1,X^\prime _2,\varPhi )\le {\lambda _N\over \gamma _N}\widehat{\varTheta }(\max [R_N,R]) +M^2{\sum }_{\tau =1}^N\lambda _\tau \gamma _\tau . \end{aligned}$$
(42)

4 Multi-term composite minimization

In this section, we focus on the problem (3), (6) of multi-term composite minimization.

4.1 Problem setting

We intend to consider problem (3), (6) in the situation as follows. For a nonnegative integer \(K\) and \(0\le k\le K\) we are given

  1. Euclidean spaces \(E_k\) and \({\overline{E}}_k\) along with their nonempty closed convex subsets \(Y_k\) and \(Z_k\), respectively;

  2. Proximal setups for \((E_k,Y_k)\) and \(({\overline{E}}_k, Z_k)\), that is, norms \(p_k(\cdot )\) on \(E_k\), norms \(q_k(\cdot )\) on \({\overline{E}}_k\), and d.g.f.’s \(\omega _k(\cdot ):Y_k\rightarrow {\mathbf {R}}\), \({\overline{\omega }}_k(\cdot ):Z_k\rightarrow {\mathbf {R}}\) compatible with \(p_k(\cdot )\) and \(q_k(\cdot )\), respectively;

  3. Affine mappings \(y^0\mapsto A_k y^0+b_k:E_0\rightarrow E_k\), where \(y^0\mapsto A_0y^0+b_0\) is the identity mapping on \(E_0\);

  4. Lipschitz continuous convex functions \(\psi _k(y^k):Y_k\rightarrow {\mathbf {R}}\) along with their saddle point representations

    $$\begin{aligned} \psi _k(y^k)=\sup _{z^k\in Z_k}{[}\phi _k(y^k,z^k)-{\overline{\varPsi }}_k(z^k){]},\;\;0\le k\le K, \end{aligned}$$
    (43)

    where \(\phi _k(y^k,z^k):Y_k\times Z_k\rightarrow {\mathbf {R}}\) are smooth (with Lipschitz continuous gradients) functions convex in \(y^k\in Y_k\) and concave in \(z^k\in Z_k\), and \({\overline{\varPsi }}_k(z^k):Z_k\rightarrow {\mathbf {R}}\) are Lipschitz continuous convex functions such that the problems of the form

    $$\begin{aligned} \min \limits _{z^k\in Z_k}\left[ {\overline{\omega }}_k(z^k)+\langle \xi ^k,z^k\rangle +\alpha {\overline{\varPsi }}_k(z^k)\right] \quad [\alpha >0] \end{aligned}$$
    (44)

    are easy to solve;

  5. Lipschitz continuous convex functions \(\varPsi _k(y^k):Y_k\rightarrow {\mathbf {R}}\) such that the problems of the form

    $$\begin{aligned} \min \limits _{y^k\in Y_k}\left[ \omega _k(y^k)+\langle \xi ^k,y^k\rangle +\alpha \varPsi _k(y^k)\right] \quad [\alpha >0] \end{aligned}$$
    (45)

    are easy to solve;

  6. For \(1\le k\le K\), the norms \(\pi _k^*(\cdot )\) on \(E_k\) are given, with conjugate norms \(\pi _k(\cdot )\), along with d.g.f.’s \(\widehat{\omega }_k(\cdot ):\;W_k:=\{w^k\in E_k:\pi _k(w^k)\le 1\}\rightarrow {\mathbf {R}}\) which are strongly convex, modulus 1, w.r.t. \(\pi _k(\cdot )\), such that the problems

    $$\begin{aligned} \min _{w^k\in W_k}\left[ \widehat{\omega }_k(w^k)+\langle \xi ^k,w^k\rangle \right] \end{aligned}$$
    (46)

    are easy to solve.

The outlined data define the sets

$$\begin{aligned} Y^+_k&= \left\{ [y^k;\tau ^k]: \;y^k\in Y_k,\tau ^k\ge \varPsi _k(y^k)\right\} \subset E_k^+:=E_k\times {\mathbf {R}},\,\,0\le k\le K,\\ Z^+_k&= \left\{ [z^k;\sigma ^k]: \;z^k\in Z_k,\sigma ^k\ge {\overline{\varPsi }}_k(z^k)\right\} \subset {\overline{E}}_k^+:={\overline{E}}_k\times {\mathbf {R}},\,\,0\le k\le K. \end{aligned}$$

The problem of interest (3), (6) along with its saddle point reformulation in the just defined situation read

$$\begin{aligned} {\hbox {Opt}}&=\min \limits _{y^0\in Y_0}\left\{ f(y^0) : = \sum \limits _{k=0}^K\left[ \psi _k(A_ky^0+b_k)+\varPsi _k(A_ky^0+b_k)\right] \right\} \end{aligned}$$
(47a)
$$\begin{aligned}&=\min \limits _{y^0\in Y_0}\left\{ f(y^0) =\max \limits _{\{z^k\in Z_k\}_{k=0}^K}\sum \limits _{k=0}^K\left[ \phi _k(A_ky^0+b_k,z^k)\right. \right. \nonumber \\&\quad \left. \left. +\,\varPsi _k(A_ky^0+b_k) -{\overline{\varPsi }}_k(z^k)\right] \right\} \end{aligned}$$
(47b)

which we rewrite equivalently as

$$\begin{aligned} {\hbox {Opt}}&={\mathop {\mathop {\min }\limits _{\{[y^k;\tau ^k]\}_{k=0}^K}}\limits _{ \in Y_0^+\times \cdots \times Y_K^+}}{\mathop {\mathop {\max }\limits _{\{[z^k;\sigma ^k]\}_{k=0}^K}}\limits _{ \in Z_0^+\times \cdots \times Z_K^+}} \left\{ \sum \limits _{k=0}^K\left[ \phi _k(y^k,z^k)+\tau ^k-\sigma ^k\right] \right. : \nonumber \\&\qquad \qquad \qquad y^k\left. =A_ky^0+b_k,\;1\le k\le K\right\} . \end{aligned}$$
(47c)

From now on we make the following assumptions

B1: We have \(A_kY_0+b_k\subset Y_k\), \(1\le k\le K\);

B2: For \(0\le k\le K\), the sets \(Z_k\) are bounded. Further, the functions \(\varPsi _k\) are below bounded on \(Y_k\), and the functions \(f_k=\psi _k+\varPsi _k\) are coercive on \(Y_k\): whenever \(y^k_t\in Y_k\), \(t=1,2, \ldots ,\) are such that \(p_k(y^k_t)\rightarrow \infty \) as \(t\rightarrow \infty \), we have \({f_k}(y^k_t)\rightarrow \infty \).

Note that B1 and B2 imply that the saddle point problem (47c) is solvable; let \(\left\{ [y^k_*;\tau ^k_*]\right\} _{0\le k\le K};\;\left\{ [z^k_*;\sigma ^k_*]\right\} _{0\le k\le K} \) be the corresponding saddle point.

4.2 Course of actions

Given \(\rho _k>0\), \(1\le k\le K\), we approximate (47c) by the problem

$$\begin{aligned} \widehat{{\hbox {Opt}}}&={\mathop {\mathop {\min }\limits _{\left\{ [y^k;\tau ^k]\right\} _{k=0}^K}}\limits _{ \in Y_0^+\times \cdots \times Y_K^+}} {\mathop {\mathop {\max }\limits _{\left\{ [z^k;\sigma ^k]\right\} _{k=0}^K}}\limits _{ \in Z_0^+\times \cdots \times Z_K^+}} \left\{ \sum \limits _{k=0}^K\left[ \phi _k\left( y^k,z^k\right) +\tau ^k-\sigma ^k\right] +\sum \limits _{k=1}^K \rho _k\pi _k^*\left( y^k-A_ky^0-b_k\right) \right\} \end{aligned}$$
(48a)
$$\begin{aligned}&={\mathop {\mathop {\min }\limits _{x^1\in X_1}}\limits _{ :=Y_0^+\times \cdots \times Y_K^+}} {\mathop {\mathop {\max }\limits _{x^2\in X_2}}\limits _{ := Z_0^+\times \cdots \times Z_K^+\times W_1\times \cdots \times W_K}} \varPhi \left( \underbrace{\left\{ [y^k;\tau ^k]\right\} _{k=0}^K}_{x^1},\;\underbrace{\left[ \left\{ [z^k;\sigma ^k]\right\} _{k=0}^K; \{w^k\}_{k=1}^K\right] }_{x^2}\right) \end{aligned}$$
(48b)

where

$$\begin{aligned} \varPhi (x^1,x^2)=\sum \limits _{k=0}^K\left[ \phi _k(y^k,z^k)+\tau ^k-\sigma ^k\right] +\sum \limits _{k=1}^K \rho _k\langle w^k,y^k-A_ky^0-b_k\rangle . \end{aligned}$$

Observe that the monotone operator \(F(x^1,x^2)=\left[ F_1(x^1,x^2);F_2(x^1,x^2)\right] \) associated with the saddle point problem in (48b) is given by

$$\begin{aligned} F_1(x^1,x^2)&= \bigg [\nabla _{y^0}\phi _0\left( y^0,z^0\right) -\sum \limits _{k=1}^K\rho _kA_k^Tw^k;1;\; \left\{ \nabla _{y^k}\phi _k\left( y^k,z^k\right) +\rho _kw^k;1\right\} _{k=1}^K\bigg ],\nonumber \\ F_2(x^1,x^2)&= \bigg [\left\{ -\nabla _{z^k}\phi _k\left( y^k,z^k\right) ;1\right\} _{k=0}^K;\; \left\{ -\rho _k[y^k-A_ky^0-b_k]\right\} _{k=1}^K\bigg ]. \end{aligned}$$
(49)

Now let us set

  • \(U=\left\{ u=\left[ y^0;\ldots ;y^K;z^0;\ldots ;z^K;w^1;\ldots ;w^K\right] :\; y^k\in Y_k,\, z^k\in Z_k,\, 0\le k\le K,\;\pi _k(w^k)\le 1,\, 1\le k\le K\right\} \),

  • \(X=\left\{ x=\left[ u=\left[ y^0;\ldots ;y^K;z^0;\ldots ;z^K;w^1;\ldots ;w^K \right] ;\; v=[\tau ^0;\ldots ;\tau ^K;\sigma ^0;\ldots ;\sigma ^K]\right] :\; u\in U,\, \tau ^k\ge \varPsi _k(y^k),\, \sigma ^k\ge {\overline{\varPsi }}_k(z^k),\, 0\le k\le K\right\} \), so that \(PX\subset U\), cf. assumption A2 in Sect. 3.1.

The variational inequality associated with the saddle point problem in (48b) can be treated as the variational inequality on the domain \(X\) with the monotone operator

$$\begin{aligned} F(x=[u;v])=[F_u(u);F_v], \end{aligned}$$

where

$$\begin{aligned} F_u(\underbrace{[y^0;\ldots ;y^K;\,z^0;\ldots ;z^K;\,w^1;\ldots ;w^{K}]}_{u})&= \left[ \begin{array}{l}\nabla _y\phi _0(y^0,z^0)-\sum \limits _{k=1}^{K}\rho _kA_k^Tw^k\\ \left\{ \nabla _y\phi _k(y^k,z^k)+\rho _kw^k\right\} _{k=1}^K\\ \left\{ -\nabla _z\phi _k(y^k,z^k)\right\} _{k=0}^K\\ \left\{ -\rho _k[y^k-A_ky^0-b_k]\right\} _{k=1}^K\end{array}\right] \nonumber \\ F_v(\underbrace{[\tau ^0;\ldots ;\tau ^K;\sigma ^0;\ldots ;\sigma ^K]}_{v})&= [1;\ldots ;1]. \end{aligned}$$
(50)

This operator meets the structural assumptions A3 and A4 from Sect. 3.1 (A4 is guaranteed by B2). We can equip \(U\) and its embedding space \(E_u\) with the proximal setup \(\Vert \cdot \Vert ,\;\omega (\cdot )\) given by

$$\begin{aligned} \Vert u \Vert&= \sqrt{\sum _{k=0}^K\left[ \alpha _kp_k^2(y^k) +\beta _kq_k^2(z^k)\right] +\sum _{k=1}^{K}\gamma _k\pi _k^2(w^k)},\nonumber \\ \omega (u )&= \sum _{k=0}^K\left[ \alpha _k\omega _k(y^k) +\beta _k{\overline{\omega }}_k(z^k)\right] +\sum _{k=1}^{K}\gamma _k\widehat{\omega }_k(w^k), \end{aligned}$$
(51)

where \(\alpha _k,\beta _k\), \(0\le k\le K\), and \(\gamma _k\), \(1\le k\le K\), are positive aggregation parameters. Observe that carrying out a step of the CoMP algorithm presented in Sect. 3.2 requires computing \(F\) at \(O(1)\) points of \(X\) and solving \(O(1)\) auxiliary problems of the form

$$\begin{aligned}&\min \limits _{[y^0;\ldots ;y^K;\,z^0;\ldots ;z^K;\,w^1;\ldots ;w^K;\,\tau ^0;\ldots ;\tau ^K;\,\sigma ^0;\ldots ;\sigma ^K]} \left\{ \sum \limits _{k=0}^K \left[ a_k\omega _k(y^k) +\langle \xi _k,y^k\rangle +b_k\tau ^k\right] \right. \\&\qquad \left. +\sum \limits _{k=0}^K \left[ c_k{\overline{\omega }}_k(z^k) +\langle \eta _k,z^k\rangle +d_k\sigma ^k\right] +\sum \limits _{k=1}^{K}\left[ e_k\widehat{\omega }_k(w^k)+\langle \zeta _k,w^k\rangle \right] \right\} :\\&\qquad y^k\in Y_k,\;\tau ^k\ge \varPsi _k(y^k),\;z^k\in Z_k,\;\sigma ^k\ge {\overline{\varPsi }}_k(z^k),\;0\le k\le K,\\&\qquad \pi _k(w^k)\le 1,\;1\le k\le K, \end{aligned}$$

with positive \(a_k,...,e_k\), and we have assumed that these problems are easy to solve.

4.3 “Exact penalty”

Let us make one more assumption:

C: For \(1\le k\le K\),

  • \(\psi _{k}\) are Lipschitz continuous on \(Y_k\) with constants \(G_k\) w.r.t. \(\pi _k^*(\cdot )\),

  • \(\varPsi _{k}\) are Lipschitz continuous on \(Y_k\) with constants \(H_k\) w.r.t. \(\pi _k^*(\cdot )\).

Given a feasible solution \(\overline{x}=[\overline{x}^1; \overline{x}^2]\), \(\overline{x}^1:=\{[\overline{y}^k;\overline{\tau }^k]\in Y^+_k\}_{k=0}^K\) to the saddle point problem (48b), let us set

$$\begin{aligned} \widehat{y}^0=\overline{y}^0;\;\widehat{y}^k=A_k\overline{y}^0+b_k,\;1\le k\le K;\;\widehat{\tau }^k=\varPsi _k(\widehat{y}^k),\;0\le k\le K, \end{aligned}$$

thus getting another feasible (by assumption B1) solution \(\widehat{x}=\big [\widehat{x}^1 =\{[\widehat{y}^k;\widehat{\tau }^k]\}_{k=0}^K;\,\overline{x}^2\big ]\) to (48b). We call \(\widehat{x}^1\) the correction of \(\overline{x}^1\). For \(1\le k\le K\) we clearly have

$$\begin{aligned} \psi _k\left( \widehat{y}^k\right)&\le \psi _k\left( \overline{y}^k\right) +G_k\pi _k^*\left( \widehat{y}^k-\overline{y}^k\right) =\psi _k\left( \overline{y}^k\right) +G_k\pi _k^*\left( \overline{y}^k-A_k\overline{y}^0-b_k\right) ,\\ \widehat{\tau }^k&= \varPsi _k(\widehat{y}^k)\le \varPsi _k\left( \overline{y}^k\right) +H_k\pi _k^*\left( \widehat{y}^k-\overline{y}^k\right) {\le } \overline{\tau }^k {+}H_k\pi _k^*\left( \overline{y}^k{-}A_k\overline{y}^0{-}b_k\right) , \end{aligned}$$

and \(\widehat{\tau }^0=\varPsi _0(\overline{y}^0)\le \overline{\tau }^0\). Hence for \( \overline{\varPhi }(x^1)=\max \limits _{x^2\in X_2} \varPhi (x^1,x^2)\) we have

$$\begin{aligned} \overline{\varPhi }(\widehat{x}^1){\le }\overline{\varPhi }(\overline{x}^1) +\sum _{k{=}1}^{K}[H_k+G_k]\pi _k^*(\overline{y}^{k}{-}A_k\overline{y}^0-b_k) -\sum _{k{=}1}^{K}\rho _k\pi _k^*(\overline{y}^{k}{-}A_k\overline{y}^0-b_k). \end{aligned}$$

We see that under the condition

$$\begin{aligned} \rho _k\ge G_k+H_k,\,\, 1\le k\le K, \end{aligned}$$
(52)

the correction does not increase the value of the primal objective of (48b), whence the saddle point value \(\widehat{{\hbox {Opt}}}\) of (48b) is \(\ge \) the optimal value \({\hbox {Opt}}\) of the problem of interest (47a). Since the opposite inequality is evident, we arrive at the following

Proposition 4

In the situation of Sect. 4.1, let assumptions B1, B2, C and (52) hold true. Then

  (i) the optimal value \(\widehat{{\hbox {Opt}}}\) in (48a) coincides with the optimal value \({\hbox {Opt}}\) in the problem of interest (47a);

  (ii) consequently, if \(\overline{x}=[\overline{x}^1;\overline{x}^2]\) is a feasible solution of the saddle point problem in (48b), then the correction \(\widehat{x}^1=\{[\widehat{y}^k;\widehat{\tau }^k]\}_{k=0}^K\) of \(\overline{x}^1\) is a feasible solution to the problem of interest (47c), and

    $$\begin{aligned} f(\widehat{y}^0)-{\hbox {Opt}}\le {\epsilon _{{\tiny \mathrm Sad}}}(\overline{x}\big |X_1,X_2,\varPhi ), \end{aligned}$$
    (53)

    where \(\widehat{y}^0(=y^0(\widehat{x}^1))\) is the “\(y^0\)-component” of \(\widehat{x}^1\);

As a corollary, under the premise of Proposition 4, when applying the CoMP algorithm induced by the above setup to the saddle point problem (48b) and passing “at no cost” from the approximate solutions \(x^t=[x^{1,t};x^{2,t}]\) generated by CoMP to the corrections \(\widehat{x}^{1,t}\) of the \(x^{1,t}\)’s, we get feasible solutions to the problem of interest (47a) satisfying the error bound

$$\begin{aligned} f({y}^0(\widehat{x}^{1,t}))-{\hbox {Opt}}\le {\varTheta [\{x_*^1\}\times X_2]L\over t},\,t=1,2, \ldots \end{aligned}$$
(54)

where \(L\) is the Lipschitz constant of \(F_u(\cdot )\) induced by the norm \(\Vert \cdot \Vert \) given by (51), and \(\varTheta [\cdot ]\) is induced by the d.g.f. given by the same (51) and by the \(u=[y^0;\ldots ;y^K;z^0;\ldots ;z^K;w^1;\ldots ;w^K]\)-component of the starting point. Note that \(W_k\) and \(Z_k\) are compact, whence \(\varTheta [\{x_*^1\}\times X_2]\) is finite.

Remark

In principle, we can use the result of Proposition 4 “as is”, that is, work from the very beginning with values of \(\rho _k\) satisfying (52); this option is feasible provided that we know in advance the corresponding Lipschitz constants and they are not too large (which indeed is the case in some applications). This being said, when our objective is to ensure the validity of the bound (53), selecting the \(\rho _k\)’s according to (52) could be very conservative. In our experience, it is usually better to adjust the penalization coefficients \(\rho _k\) on-line. Specifically, let \(\overline{\varPhi }(\overline{x}^1)=\sup _{x^2\in X_2}\varPhi (\overline{x}^1,x^2)\) (cf. (15)). We always have \(\widehat{{\hbox {Opt}}}\le {\hbox {Opt}}\). It follows that, independently of how the \(\rho _k\) are selected, we have

$$\begin{aligned} f(\widehat{y}^0)-{\hbox {Opt}}\le \underbrace{[f(\widehat{y}^0)-\overline{\varPhi }(\overline{x}^1)]}_{\epsilon _1}+ \underbrace{\left[ \overline{\varPhi }(\overline{x}^1)-\widehat{{\hbox {Opt}}}\right] }_{\epsilon _2} \end{aligned}$$
(55)

for every feasible solution \(\overline{x}^1=\{[\overline{y}^k;\overline{\tau }^k]\}_{k=0}^K\) to (48b), and the same inequality holds for its correction \(\widehat{x}^1=\left\{ [\widehat{y}^k;\widehat{\tau }^k]\right\} _{k=0}^K\). When \(\overline{x}^1\) is a component of a good (with small \({\epsilon _{{\tiny \mathrm Sad}}}\)) approximate solution to the saddle point problem (48b), \(\epsilon _2\) is small. If \(\epsilon _1\) is also small, we are done; otherwise we can increase, in a fixed ratio, the current values either of all \(\rho _k\), or only of those \(\rho _k\) for which passing from \([\overline{y}^k;\overline{\tau }^k]\) to \([\widehat{y}^k;\widehat{\tau }^k]\) results in “significant” quantities

$$\begin{aligned}{}[\psi _k(\widehat{y}^k)+\widehat{\tau }^k]-[\psi _k(\overline{y}^k) +\overline{\tau }^k+\rho _k\pi _k^*(\overline{y}^k-A_k\overline{y}^0-b_k)] \end{aligned}$$

and solve the updated saddle point problem (48b).

4.4 Numerical illustrations

4.4.1 Matrix completion

Problem of interest. In the experiments to be reported, we applied the approach just outlined to Example 1, that is, to the problem

$$\begin{aligned} {\hbox {Opt}}=\min \limits _{y^0\in {\mathbf {R}}^{n\times n}} \left[ \upsilon (y^0)=\underbrace{{1\over 2}\Vert P_\varOmega y^0 -b\Vert _2^2}_{\psi _0(y^0)} +\underbrace{\lambda \Vert y^0\Vert _1}_{\varPsi _0(y^0)}+\,\underbrace{\mu \Vert y^0\Vert _{\mathrm{nuc}}}_{\varPsi _1(y^0)}\right] . \end{aligned}$$
(56)

where \(\varOmega \) is a given set of cells in an \(n\times n\) matrix, and \(P_\varOmega y\) is the restriction of \(y\in {\mathbf {R}}^{n\times n}\) onto \(\varOmega \); this restriction is treated as a vector from \({\mathbf {R}}^M\), \(M={\hbox {Card}}(\varOmega )\). Thus, (56) is a kind of matrix completion problem where we want to recover a sparse and low rank \(n\times n\) matrix given noisy observations \(b\) of its entries in cells from \(\varOmega \). Note that (56) is a special case of (47b) with \(K=1\), \(Y_0=Y_1=E_0=E_1={\mathbf {R}}^{n\times n}\), the identity mapping \(y^0\mapsto A_1y^0\), and \(\phi _0(y^0,z^0)\equiv \psi _0(y^0)\), \(\phi _1\equiv 0\) (so that \(Z_k\) can be defined as singletons, and \({\overline{\varPsi }}_k(\cdot )\) set to 0, \(k=0,1\)).

Implementing the CoMP algorithm. When implementing the CoMP algorithm, we used the Frobenius norm \(\Vert \cdot \Vert _F\) on \({\mathbf {R}}^{n\times n}\) in the role of \(p_0(\cdot )\), \(p_1(\cdot )\) and \(\pi _1(\cdot )\), and the function \({1\over 2}\Vert \cdot \Vert _F^2\) in the role of the d.g.f.’s \(\omega _0(\cdot )\), \(\omega _1(\cdot )\), \(\widehat{\omega }_1(\cdot )\).
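With this setup, the auxiliary problems (45) arising at each CoMP step reduce to entrywise and singular value soft-thresholding. The sketch below is ours (the coefficient names a, b are hypothetical); it solves \(\min _y\big [a\cdot {1\over 2}\Vert y\Vert _F^2+\langle \xi ,y\rangle +b\,\varPsi (y)\big ]\) for \(\varPsi =\lambda \Vert \cdot \Vert _1\) and \(\varPsi =\mu \Vert \cdot \Vert _{\mathrm{nuc}}\); the \(\tau \)-component of the corresponding minimizer over \(Y^+\) is then just \(\varPsi \) evaluated at the result.

```python
import numpy as np

def prox_l1(xi, a, b, lam):
    # argmin_y  a*0.5*||y||_F^2 + <xi, y> + b*lam*||y||_1   (entrywise soft-thresholding)
    return -np.sign(xi) * np.maximum(np.abs(xi) - b * lam, 0.0) / a

def prox_nuclear(xi, a, b, mu):
    # argmin_y  a*0.5*||y||_F^2 + <xi, y> + b*mu*||y||_nuc
    # = singular value thresholding of -xi/a at level b*mu/a
    U, s, Vt = np.linalg.svd(-xi / a, full_matrices=False)
    return (U * np.maximum(s - b * mu / a, 0.0)) @ Vt
```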

The aggregation weights in (51) were chosen as \(\alpha _0=\alpha _1=1/D\) and \(\gamma _1=1\), where \(D\) is a guess of the quantity \(D_*:=\Vert y^0_*\Vert _F\), \(y^0_*\) being the optimal solution to (56). With \(D=D_*\), this aggregation would roughly optimize the right hand side in (54), provided the starting point is the origin.

The coefficient \(\rho _1\) in (48b) was adjusted dynamically, as explained at the end of Sect. 4.3. Specifically, we start with a small value of \(\rho _1\) (0.001) and restart the solution process, increasing the previous value of \(\rho _1\) by a factor of 3, each time the \(x^1\)-component \(\overline{x}\) of the current approximate solution and its correction \(\widehat{x}\) violate the inequality \(\upsilon (y^0(\widehat{x}))\le (1+\kappa )\overline{\varPhi }(\overline{x})\) for a small tolerance \(\kappa \) (we used \(\kappa \)=1.e\(-\)4), cf. (55).
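Schematically, the restart policy reads as follows (our high-level sketch; the helpers run_comp, correction, upsilon, and Phi_bar are hypothetical placeholders for the CoMP run, the correction of Sect. 4.3, the objective \(\upsilon \) of (56), and \(\overline{\varPhi }\), respectively; in the actual runs the test is performed along the way rather than only after a completed run).

```python
def solve_with_penalty_restarts(rho=1e-3, kappa=1e-4, factor=3.0):
    # keep rho_1 small and triple it whenever the correction noticeably
    # degrades the objective relative to Phi_bar, cf. (55)
    while True:
        x_bar = run_comp(rho)            # approximate saddle point of (48b) for current rho_1
        x_hat = correction(x_bar)        # correction of its x^1-component, cf. Sect. 4.3
        if upsilon(x_hat) <= (1.0 + kappa) * Phi_bar(x_bar):
            return x_hat, rho            # penalty is large enough: accept
        rho *= factor                    # otherwise increase rho_1 and restart
```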

The stepsizes \(\gamma _\tau \) in the CoMP algorithm were also adjusted dynamically, as follows. At a step \(\tau \), given a current guess \(\gamma \) for the stepsize, we set \(\gamma _\tau =\gamma \), perform the step, and check whether \(\delta _\tau \le 0\). If this is the case, we pass to step \(\tau +1\), the new guess for the stepsize being \(1.2\) times the old one. If \(\delta _\tau \) is positive, we decrease \(\gamma _\tau \) in a fixed proportion (by a factor of 0.8 in our implementation), repeat the step, and proceed in this fashion until the resulting value of \(\delta _\tau \) becomes nonpositive. When this happens, we pass to step \(\tau +1\) and use the value of \(\gamma _\tau \) we ended up with as the new guess for the stepsize.
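In code, one step with this stepsize policy may look as follows (our sketch; the d.g.f. is Euclidean, so \(V_u(w)={1\over 2}\Vert w-u\Vert _F^2\), and prox is the same hypothetical prox-mapping routine as in the sketch following (29)).

```python
import numpy as np

def adaptive_comp_step(u, v, F_u, F_v, prox, gamma, grow=1.2, shrink=0.8):
    # try the current stepsize guess; shrink it until delta_tau <= 0, then return
    # the new iterate x_{tau+1}, the point y_tau, and an enlarged guess for the next step
    Fu_u = F_u(u)
    while True:
        uy, vy = prox(u, v, gamma * Fu_u, gamma * F_v)           # y_tau
        Fu_uy = F_u(uy)
        u_new, v_new = prox(u, v, gamma * Fu_uy, gamma * F_v)    # candidate x_{tau+1}
        delta = (gamma * np.vdot(Fu_uy - Fu_u, uy - u_new)
                 - 0.5 * np.linalg.norm(u_new - uy) ** 2
                 - 0.5 * np.linalg.norm(uy - u) ** 2)            # delta_tau of (30)
        if delta <= 0:
            return (u_new, v_new), (uy, vy), gamma * grow
        gamma *= shrink
```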

In all our experiments, the starting point was given by the matrix \(\widehat{y}:=P_\varOmega ^*b\) (“observations of entries in cells from \(\varOmega \) and zeros in all other cells”) according to \(y^0=y^1=\widehat{y}\), \(\tau ^0=\lambda \Vert \widehat{y}\Vert _1\), \(\tau ^1=\mu \Vert \widehat{y}\Vert _{\mathrm{nuc}}\), \(w^1=0\).

Lower bounding the optimal value. When running the CoMP algorithm, we have at our disposal, at every step \(t\), an approximate solution \(y^{0,t}\) to the problem of interest (56); \(y^{0,t}\) is nothing but the \(y^0\)-component of the approximate solution \(x^t\) generated by CoMP as applied to the saddle point approximation of (56) corresponding to the current value of \(\rho _1\), see (49). We also have at our disposal the value \(\upsilon (y^{0,t})\) of the objective of (56) at \(y^{0,t}\); this quantity is a byproduct of checking whether we should update the current value of \(\rho _1\). As a result, we have at our disposal the best value found so far, \(\upsilon ^t=\min _{1\le \tau \le t}\upsilon (y^{0,\tau })\), along with the corresponding value \(y^{0,t}_*\) of \(y^0\): \(\upsilon (y^{0,t}_*)=\upsilon ^t\). In order to understand how good the best approximate solution \(y^{0,t}_*\) generated so far is, we need to upper bound the quantity \(\upsilon ^t-{\hbox {Opt}}\), or, which is the same, to lower bound \({\hbox {Opt}}\). This is a nontrivial task, since the domain of the problem of interest is unbounded, while the usual techniques for online lower bounding of the optimal value in a convex minimization problem require the domain to be bounded. We now describe a technique for lower bounding \({\hbox {Opt}}\) utilizing the structure of (56).

Let \(y^0_*\) be an optimal solution to (56) (it clearly exists since \(\psi _0\ge 0\) and \(\lambda ,\mu >0\)). Assume that at a step \(t\) we have at our disposal an upper bound \(R=R_t\) on \(\Vert y^0_*\Vert _1\), and let

$$\begin{aligned} R^+=\max \left[ R,\Vert y^{0,t}\Vert _1\right] . \end{aligned}$$

Let us look at the saddle point approximation of the problem of interest

$$\begin{aligned} \widehat{{\hbox {Opt}}}&= \min \limits _{x^1=[y^0;\tau ^0;y^1;\tau ^1]\in \widehat{X}_1}\max \limits _{x^2\in X_2}\left[ \varPhi (x^1,x^2):=\psi _0(y^0){+}\tau ^0{+}\tau ^1{+}\rho _1\langle y^1-y^0,x^2\rangle \right] ,\nonumber \\ X_1&= \left\{ [y^0;\tau ^0;y^1;\tau ^1]:\tau ^0 \ge {\lambda }\Vert y^0\Vert _1,\tau ^1\ge \mu \Vert y^1\Vert _{\mathrm{nuc}}\right\} ,\nonumber \\&\,X_2=\left\{ x^2:\Vert x^2\Vert _F\le 1\right\} . \end{aligned}$$
(57)

associated with the current value of \(\rho _1\), and let

$$\begin{aligned} \widehat{X}_1=\left\{ [y^0;\tau ^0;y^1;\tau ^1]\in X_1:\tau ^0\le {\lambda } R^+,\tau ^1\le \mu R^+\right\} . \end{aligned}$$

Observe that the point \(x^{1,*}=[y^0_*;{\lambda }\Vert y^0_*\Vert _1;y^0_*;\mu \Vert y^0_*\Vert _{\mathrm{nuc}}]\) belongs to \(\widehat{X}_1\) (recall that \(\Vert \cdot \Vert _{\mathrm{nuc}}\le \Vert \cdot \Vert _1\)) and that

$$\begin{aligned} {\hbox {Opt}}=\upsilon (y^0_*) \ge \overline{\varPhi }(x^{1,*}),\,\,\overline{\varPhi }(x^1)=\max _{x^2\in X_2}\varPhi (x^1,x^2). \end{aligned}$$

It follows that

$$\begin{aligned} \widehat{{\hbox {Opt}}}:=\min _{x^1\in \widehat{X}_1}\overline{\varPhi }(x^1)\le {\hbox {Opt}}. \end{aligned}$$

Further, by Proposition 2 as applied to \(X^\prime _1=\widehat{X}_1\) and \(X^\prime _2=X_2\) we have (Footnote 5)

$$\begin{aligned} \overline{\varPhi }(x^{1,t})-\widehat{{\hbox {Opt}}}\le {\hbox {Res}}(\widehat{X}_1\times X_2\big |\mathcal{{I}}_t,\lambda ^t), \end{aligned}$$

where \(\mathcal{{I}}_t\) is the execution protocol generated by CoMP as applied to the saddle point problem (57) (i.e., from the last restart preceding step \(t\) up to this step), and \(\lambda ^t\) is the associated accuracy certificate. We conclude that

$$\begin{aligned} \ell _t:=\overline{\varPhi }(x^{1,t})-{\hbox {Res}}(\widehat{X}_1\times X_2\big |\mathcal{{I}}_t,\lambda ^t)\le \widehat{{\hbox {Opt}}}\le {\hbox {Opt}}, \end{aligned}$$

and \(\ell _t\) is easy to compute (the resolution is just the maximum over \(\widehat{X}_1\times X_2\) of an affine function readily given by \(\mathcal{{I}}_t,\lambda ^t\)). Setting \(\upsilon _t=\max _{\tau \le t} \ell _\tau \), we get lower bounds on \({\hbox {Opt}}\) which are nondecreasing in \(t\). Note that this component of our lower bounding is independent of the particular structure of \(\psi _0\).

It remains to explain how to get an upper bound \(R\) on \(\Vert y^0_*\Vert _1\), and this is where the special structure of \(\psi _0(y)={1\over 2}\Vert P_\varOmega y-b\Vert _2^2\) is used. Recalling that \(b\in {\mathbf {R}}^M\), let us set

$$\begin{aligned} \vartheta (r) =\min _{v\in {\mathbf {R}}^M}\left\{ {1\over 2}\Vert v-b\Vert _2^2:\Vert v\Vert _1\le r\right\} ,\,\,r\ge 0, \end{aligned}$$

It is immediately seen that \(\vartheta (\cdot )\) remains intact when the entries of \(b\) are replaced by their magnitudes, and that for \(b\ge 0\) we have

$$\begin{aligned} \vartheta (r)=\min _{v\in {\mathbf {R}}^M}\left\{ {1\over 2}\Vert v-b\Vert _2^2: v\ge 0,\sum _iv_i\le r\right\} , \end{aligned}$$

so that \(\vartheta (\cdot )\) is an easy-to-compute, nonnegative and nonincreasing convex function of \(r\ge 0\). Now, by definition of \(P_\varOmega \), the quantity \(\vartheta ^+(\Vert y^0\Vert _1)\), where

$$\begin{aligned} \vartheta ^+(r)= {\lambda } r + \vartheta (r) \end{aligned}$$

is a lower bound on \(\upsilon (y^0)\). As a result, given an upper bound \(\upsilon ^t\) on \({\hbox {Opt}}=\upsilon (y^0_*)\), the easy-to-compute quantity

$$\begin{aligned} R_t:=\max \left\{ r: \vartheta ^+(r)\le \upsilon ^t\right\} \end{aligned}$$

is an upper bound on \(\Vert y^0_*\Vert _1\). Since \(\upsilon ^t\) is nonincreasing in \(t\), \(R_t\) is nonincreasing in \(t\) as well.
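For completeness, here is a sketch (in Python with NumPy; the function names are ours) of how \(\vartheta (r)\) and the bound \(R_t\) can be computed: \(\vartheta (r)\) is a Euclidean projection of \(|b|\) onto a scaled simplex, and \(R_t\) is located by bisection, using the convexity of \(\vartheta ^+\).

```python
import numpy as np

def theta(b_abs, r):
    """theta(r) = min_{v >= 0, sum(v) <= r} 0.5*||v - b_abs||_2^2, b_abs = |b|."""
    if r <= 0:
        return 0.5 * float(np.sum(b_abs ** 2))
    if b_abs.sum() <= r:
        return 0.0
    # otherwise: Euclidean projection of b_abs onto {v >= 0, sum(v) = r}
    u = np.sort(b_abs)[::-1]
    css = np.cumsum(u)
    k = np.nonzero(u * np.arange(1, u.size + 1) > css - r)[0][-1]
    tau = (css[k] - r) / (k + 1.0)
    v = np.maximum(b_abs - tau, 0.0)
    return 0.5 * float(np.sum((v - b_abs) ** 2))

def upper_bound_R(b, lam, ups_t, r_feasible, tol=1e-8):
    """Largest r with lam*r + theta(r) <= ups_t, found by bisection.

    r_feasible is any r known to satisfy the inequality (e.g. the l1-norm of
    the best approximate solution found so far); since theta >= 0, no r larger
    than ups_t/lam can satisfy it."""
    b_abs = np.abs(np.asarray(b, dtype=float)).ravel()
    lo, hi = r_feasible, ups_t / lam
    while hi - lo > tol * max(1.0, hi):
        mid = 0.5 * (lo + hi)
        if lam * mid + theta(b_abs, mid) <= ups_t:
            lo = mid
        else:
            hi = mid
    return hi   # upper end of the bracket: a valid upper bound on R_t
```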

Generating the data In the experiments to be reported, the data of (56) were generated as follows. Given \(n\), we build the “true” \(n\times n\) matrix \(y_{\#}=\sum _{i=1}^ke_if_i^T\), with \(k=\lfloor n/4\rfloor \) and vectors \(e_i,f_i\in {\mathbf {R}}^n\) sampled independently of each other as follows: we draw a vector from the standard Gaussian distribution \(\mathcal{N}(0,I_n)\) and then zero out part of the entries, with the probability of replacing a particular entry by zero chosen in such a way that the sparsity of \(y_{\#}\) is about the desired level (in our experiments, we wanted \(y_{\#}\) to have about 10 % nonzero entries). The set \(\varOmega \) of “observed cells” was built at random, with probability 0.25 for a particular cell to be in \(\varOmega \). Finally, \(b\) was generated as \(P_\varOmega (y_{\#}+\sigma \xi )\), where the entries of \(\xi \in {\mathbf {R}}^{n\times n}\) were drawn independently of each other from the standard Gaussian distribution, and

$$\begin{aligned} \sigma =0.1{\sum _{i,j}|[y_{\#}]_{ij}|\over n^2}. \end{aligned}$$

We used \({\lambda }=\mu =10\sigma \) (Footnote 6). Finally, our guess for the Frobenius norm of the optimal solution to (56) is defined as follows. Note that the quantity \(\Vert b\Vert _2^2-M\sigma ^2\) is an estimate of \(\Vert P_\varOmega y_{\#}\Vert _2^2\). We define the estimate \(D\) of \(D_*:=\Vert y_*\Vert _F\) “as if” the optimal solution were \(y_{\#}\) and all entries of \(y_{\#}\) were of the same order of magnitude:

$$\begin{aligned} D=\sqrt{{n^2\over M}\max [\Vert b\Vert _2^2-M\sigma ^2,1]},\,\,M={\hbox {Card}}(\varOmega ). \end{aligned}$$
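A sketch of this data-generating procedure (a Python/NumPy helper of our own; the sparsification probability below is only an approximate way of hitting the target density of \(y_{\#}\)):

```python
import numpy as np

def generate_data(n, target_density=0.10, obs_prob=0.25, seed=0):
    """Synthetic data for (56): y_sharp = sum of k sparse rank-one terms,
    a random observation set Omega, noisy observations b, the noise level
    sigma, the regularization lambda = mu = 10*sigma, and the guess D."""
    rng = np.random.default_rng(seed)
    k = n // 4
    p = np.sqrt(target_density / k)          # rough density for e_i, f_i
    E = rng.standard_normal((n, k)) * (rng.random((n, k)) < p)
    F = rng.standard_normal((n, k)) * (rng.random((n, k)) < p)
    y_sharp = E @ F.T                        # sum_i e_i f_i^T
    Omega = rng.random((n, n)) < obs_prob    # observation mask
    sigma = 0.1 * np.abs(y_sharp).sum() / n**2
    b = (y_sharp + sigma * rng.standard_normal((n, n)))[Omega]
    lam = mu = 10 * sigma
    M = Omega.sum()
    D = np.sqrt(n**2 / M * max(np.sum(b**2) - M * sigma**2, 1.0))
    return y_sharp, Omega, b, sigma, lam, mu, D
```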

Numerical results The results of the first series of experiments are presented in Table 1. The comments are as follows.

Table 1 Composite Mirror Prox algorithm on problem (56) with \(n\times n\) matrices
Table 2 Composite Mirror Prox algorithm on problem (56) with \(n\times n\) matrices and known optimal value \({\hbox {Opt}}\)
Table 3 Number of steps and CPU time for Composite Mirror Prox algorithm and ADMM algorithm to achieve relative error \(\epsilon = 10^{-4}\) on problem (56)

In the “small” experiment (\(n=128\), the largest \(n\) for which we were able to solve (56) in a reasonable time by CVX [13] using the state-of-the-art mosek [1] Interior-Point solver and thus knew the “exact” optimal value), CoMP exhibited fast convergence: relative accuracies of 1.1e\(-\)3 and 6.2e\(-\)6 were achieved in 64 and 4,096 steps (1.2 and 74.9 s, respectively, as compared to 4,756.7 s taken by CVX).

In larger experiments (\(n=512\) and \(n=1,024\), meaning design dimensions 262,144 and 1,048,576, respectively), the running times look moderate, and the convergence pattern of CoMP still looks promising (Footnote 7). Note that our lower bounding, while working, is very conservative: the resulting gap estimate \(\upsilon ^t-\upsilon _t\) overestimates the true optimality gap by 2–3 orders of magnitude for moderate and large values of \(t\) in the \(128\times 128\) experiment. A more accurate performance evaluation would require a less conservative lower bounding of the optimal value (as of now, we are not aware of any alternative).

In the second series of experiments, the data of (56) were generated in such a way that the true optimal solution and optimal value of the problem were known from the very beginning. To this end we take as \(\varOmega \) the collection of all cells of an \(n\times n\) matrix, which, via the optimality conditions, allows us to select \(b\) making our “true” matrix \(y_{\#}\) the optimal solution to (56). The results are presented in Table 2.

In the third series of experiments, we compared our algorithm with the basic version of ADMM as presented in [5]; this version is capable of handling straightforwardly the matrix completion problem with noisy observations of part of the entries (Footnote 8). The data in these experiments were generated in the same way as in the aforementioned experiments with known optimal solutions. The results are presented in Table 3. We see that ADMM is significantly faster than our algorithm, suggesting that ADMM, when applicable in its basic form, typically outperforms CoMP. However, this is not the case when ADMM is not directly applicable; we consider an example of this sort in the next section.

It should be mentioned that in these experiments the value of \(\rho _1\) for which \(\epsilon _1\) in (55) becomes negligibly small as compared to \(\epsilon _2\) was found within the first 10–30 steps of the algorithm, with no restarts needed afterwards.

Remark

For the sake of simplicity, so far we have been considering problem (56), where minimization is carried out over \(y^0\) running through the entire space \({\mathbf {R}}^{n\times n}\) of \(n\times n\) matrices. What happens if we restrict \(y^0\) to reside in a given closed convex domain \(Y_0\)?

It is immediately seen that the construction we have presented can be straightforwardly modified for the cases when \(Y_0\) is a Frobenius- or \(\Vert \cdot \Vert _1\)-norm ball centered at the origin, or the intersection of such a set with the space of symmetric \(n\times n\) matrices. We could also handle the case when \(Y_0\) is the nuclear norm ball centered at the origin (or the intersection of this ball with the space of symmetric matrices, or with the cone of positive semidefinite symmetric matrices), but to this end one needs to “swap the penalties,” that is, to write the representation (47c) of problem (56) as

$$\begin{aligned}&\min \limits _{\{[y^k;\tau ^k]\}_{k=0}^1 \in Y_0^+\times Y_1^+} \bigg \{\Upsilon (y^0,y^1,\tau ^0,\tau ^1):=\underbrace{{1\over 2}\Vert P_\varOmega y^0-b\Vert _2^2}_{\psi _0(y^0)} +\tau ^0+\tau ^1:\;y^0=y^1\bigg \},\\&Y_0^+=\{[y^0;\tau ^0]:\; y^0\in Y_0,\;\tau ^0\ge \mu \Vert y^0\Vert _{\mathrm{nuc}}\},\\&Y_1^+=\{[y^1;\tau ^1]:\; y^1\in Y_1,\;\tau ^1\ge {\lambda }\Vert y^1\Vert _1\}, \end{aligned}$$

where \(Y_1\supset Y_0\) “fits” \(\Vert \cdot \Vert _1\) (meaning that we can point out a d.g.f. \(\omega _1(\cdot )\) for \(Y_1\) which, taken along with \(\varPsi _1(y^1)={\lambda }\Vert y^1\Vert _1\), results in easy-to-solve auxiliary problems (45)). We can take, e.g., \(\omega _1(y^1)={1\over 2}\Vert y^1\Vert _F^2\) and define \(Y_1\) as the entire space, or as a Frobenius-/\(\Vert \cdot \Vert _1\)-norm ball centered at the origin and large enough to contain \(Y_0\).

4.4.2 Image decomposition

Problem of interest In the experiments to be reported, we applied the just outlined approach to Example 2, that is, to the problem

$$\begin{aligned} {\hbox {Opt}}&= \min \limits _{y^1,y^{2},y^{3}\in {\mathbf {R}}^{n\times n}}\left\{ \Vert A (y^{1}+y^{2}+y^{3}) -b\Vert _2+\mu _1\Vert y^{1}\Vert _{\mathrm{nuc}}\right. \nonumber \\&\left. +\,\mu _2\Vert y^{2}\Vert _1+\mu _3 \Vert y^{3}\Vert _{\mathrm{TV}}\right\} , \end{aligned}$$
(58)

where \(A(y):\,{\mathbf {R}}^{n\times n}\rightarrow {\mathbf {R}}^{M}\) is a given linear mapping.

Problem reformulation We first rewrite (58) as a saddle point optimization problem

$$\begin{aligned} {\hbox {Opt}}&= \min \limits _{y^{1},y^{2},y^{3}\in {\mathbf {R}}^{n\times n}}\left\{ \Vert A(y^{1}+y^{2}+y^{3}) -b\Vert _2+\mu _1\Vert y^{1}\Vert _{\mathrm{nuc}}\right. \nonumber \\&\left. \quad +\,\mu _2 \Vert y^{2}\Vert _1+\mu _3\Vert T y^{3}\Vert _1\right\} \nonumber \\&= \min \limits _{y^{1},y^{2},y^{3}\in {\mathbf {R}}^{n\times n}}\left\{ \max _{\Vert z\Vert _2\le 1}\langle z,A(y^{1}+y^{2}+y^{3}) -b\rangle +\mu _1\Vert y^{1}\Vert _{\mathrm{nuc}}\right. \nonumber \\&\left. +\,\mu _2 \Vert y^{2}\Vert _1+\mu _3\Vert T y^{3}\Vert _1\right\} , \end{aligned}$$
(59)

where \(T:\;{\mathbf {R}}^{n\times n}\rightarrow {\mathbf {R}}^{2n(n-1)}\) is the mapping \(y\mapsto T y=\) \(\left[ \begin{array}{c}\left\{ (\nabla _i y)_{n(j-1)+i}\right\} _{i=1, \ldots ,n-1,\,j=1, \ldots ,n}\\ \left\{ (\nabla _j y)_{n(i-1)+j}\right\} _{i=1, \ldots ,n,\,j=1, \ldots ,n-1}\end{array}\right] \).

Next we rewrite (59) as a linearly constrained saddle-point problem with “simple” penalties:

$$\begin{aligned} {\hbox {Opt}}&= {\mathop {\mathop {\min }\limits _{y^3\in Y_3}}\limits _{ [y^k;\tau _k]\in Y^+_k,\, 0\le k\le 2}} \max _{z\in Z}\left\{ \langle z,A(y^1+y^2+y^3) -b\rangle +\tau _1 +\tau _2+\tau _0:\;y^0=Ty^{3}\right\} , \end{aligned}$$

where

$$\begin{aligned} Y_0^+&= \left\{ [y^0;\tau _0]:\, y^0\in Y_0={\mathbf {R}}^{2n(n-1)}: \Vert y^{0}\Vert _{1}\le \tau _0/\mu _3\right\} ,\;\;\\ Y_1^+&= \left\{ [y^1;\tau _1]:\, y^1\in Y_1={\mathbf {R}}^{n\times n}: \Vert y^{1}\Vert _{{\mathrm{nuc}}}\le \tau _1/\mu _1\right\} ,\;\;\\ Y_2^+&= \left\{ [y^2;\tau _2]:\, y^2\in Y_2={\mathbf {R}}^{n\times n}: \Vert y^{2}\Vert _{1}\le \tau _2/\mu _2\right\} \\ Y_3&= {\mathbf {R}}^{n\times n}, \;\;Z=\left\{ z\in {\mathbf {R}}^{M}: \Vert z\Vert _2\le 1\right\} , \end{aligned}$$

and further approximate the resulting problem with its penalized version:

$$\begin{aligned} \widehat{{\hbox {Opt}}}&= {\mathop {\mathop {\min }\limits _{y^3\in Y_3}}\limits _{ [y^k;\tau _k]\in Y^+_k,\, 0\le k\le 2}} {\mathop {\mathop {\max }\limits _{z\in Z}}\limits _{ w\in W}} \left\{ \begin{array}{l}\langle z,A(y^1+y^2+y^3) -b\rangle \\ +\tau _1 +\tau _2+\tau _0+\rho \langle w,y^0-Ty^{3}\rangle \end{array}\right\} \!, \end{aligned}$$
(60)

with

$$\begin{aligned} W=\left\{ w\in {\mathbf {R}}^{2n(n-1)},\,\Vert w\Vert _2\le 1\right\} \!. \end{aligned}$$

Note that the function \(\psi (y^1,y^2,y^3):=\Vert A(y^1+y^2+y^3)-b\Vert _2=\max _{\Vert z\Vert _2\le 1}\) \( \langle z,\,A(y^1+y^2+y^3)-b\rangle \) is Lipschitz continuous in \(y^3\) with respect to the Euclidean norm on \({\mathbf {R}}^{n\times n}\), with Lipschitz constant \(G=\Vert A\Vert _{2,2}\), the spectral norm (the principal singular value) of \(A\). Further, \(\varPsi (y^0)=\mu _3\Vert y^0\Vert _1\) is Lipschitz continuous in \(y^0\) with respect to the Euclidean norm on \({\mathbf {R}}^{2n(n-1)}\), with Lipschitz constant \(H\le \mu _3\sqrt{2n(n-1)}\). By Proposition 4 we conclude that to ensure the “exact penalty” property it suffices to choose \(\rho \ge \Vert A\Vert _{2,2}+\mu _3\sqrt{2n(n-1)}\). Let us denote

$$\begin{aligned} U=\left\{ \begin{array}{c}u=[y^0;\ldots ; y^3; z; w]: \;y^k\in Y_k,\,0\le k\le 3,\\ z\in {\mathbf {R}}^M,\,\Vert z\Vert _2\le 1,\,w\in {\mathbf {R}}^{2n(n-1)},\,\Vert w\Vert _2\le 1\end{array} \right\} . \end{aligned}$$

We equip the embedding space \(E_u\) of \(U\) with the norm

$$\begin{aligned} \Vert u\Vert =\left( \alpha _0\Vert y^0\Vert _2^2+\sum _{k=1}^3 \alpha _k\Vert y^k\Vert ^2_2+\beta \Vert z\Vert _2^2+\gamma \Vert w\Vert ^2_2 \right) ^{1/2}\!, \end{aligned}$$

and \(U\) with the proximal setup \((\Vert \cdot \Vert ,\,\omega (\cdot ))\) with

$$\begin{aligned} \omega (u)= {\alpha _0\over 2}\Vert y^0\Vert _2^2+\sum _{k=1}^3 {\alpha _k\over 2}\Vert y^k\Vert ^2_2+{\beta \over 2}\Vert z\Vert _2^2+{\gamma \over 2} \Vert w\Vert _2^2. \end{aligned}$$
Table 4 Composite Mirror Prox algorithm on problem (58) with \(n\times n\) matrices

Implementing the CoMP algorithm When implementing the CoMP algorithm, we use the above proximal setup with adaptive aggregation parameters \(\alpha _0=\cdots =\alpha _3= 1/D^2\), where \(D\) is our guess for an upper bound on \(\Vert y_*\Vert _2\); specifically, whenever the norm of the current solution exceeds 20 % of the guess value, we increase \(D\) by a factor of 2 and update the scales accordingly. The penalty \(\rho \) and the stepsizes \(\gamma _t\) are adjusted dynamically in the same way as explained for the previous experiment.

Numerical results In the first series of experiments, we build the \(n\times n\) observation matrix \(b\) by first generating a random matrix of rank \(r=\lfloor \sqrt{n}\rfloor \) and another random matrix of sparsity \(p=0.01\), so that the observation matrix is the sum of these two matrices and of random noise of level \(\sigma =0.01\); we take \(y\mapsto Ay\) to be the identity mapping. We use \(\mu _1=10\sigma , \mu _2=\sigma ,\mu _3=\sigma \). The very preliminary results of this series of experiments are presented in Table 4. Note that, unlike for the matrix completion problem discussed in Sect. 4.4.1, here we are not able to generate problem instances with known optimal solutions. A better performance evaluation would require good lower bounding of the true optimal value, which is, however, problematic due to the unboundedness of the problem domain.
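For reference, the synthetic data of this series can be sketched as follows (a Python/NumPy helper of our own; the scalings of the low-rank and sparse parts are our choice and are not specified above):

```python
import numpy as np

def generate_decomposition_data(n, seed=0):
    """Low-rank + sparse observation matrix with additive noise; A = identity."""
    rng = np.random.default_rng(seed)
    r = int(np.floor(np.sqrt(n)))
    low_rank = rng.standard_normal((n, r)) @ rng.standard_normal((r, n)) / np.sqrt(r)
    sparse = rng.standard_normal((n, n)) * (rng.random((n, n)) < 0.01)
    sigma = 0.01
    b = low_rank + sparse + sigma * rng.standard_normal((n, n))
    mu1, mu2, mu3 = 10 * sigma, sigma, sigma
    return b, (mu1, mu2, mu3)
```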

In the second series of experiments, we apply the CoMP algorithm to decompose real images and extract the underlying low-rank/sparse singular distortion/smooth background components. The purpose of these experiments is to illustrate how the algorithm performs with small regularization parameters, a choice which is meaningful from the point of view of applications to image recovery. Image decomposition results for two images are provided in Figs. 1 and 2. In Fig. 1, we present the decomposition of the observed image of size \(256\times 256\). We apply the model (59) with regularization parameters \(\mu _1=0.03,\mu _2=0.001,\mu _3=0.005\). We run 2,000 iterations of CoMP (393.5 s total in MATLAB on an Intel i5-2400S@2.5GHz CPU). The first component \(y_1\) has numerical rank \(\approx 1\); the relative reconstruction error is \(\Vert y_1+y_2+y_3-b\Vert _2/\Vert b\Vert _2\approx 2.8\times 10^{-4}\). Figure 2 shows the decomposition of the observed image of size \(480\times 640\) after 1,000 iterations of CoMP (\(873.6\) s total). The regularization parameters of the problem (58) were set to \(\mu _1=0.06,\mu _2=0.002,\mu _3=0.005\). The relative reconstruction error is \(\Vert y_1+y_2+y_3-b\Vert _2/\Vert b\Vert _2\approx 8.4\times 10^{-3}\).

Fig. 1

Observed and reconstructed images (size \(256\times 256\)): (a) observation \(b\), (b) recovery \(y_1+y_2+y_3\), (c) low-rank component, (d) sparse component, (e) smooth component

Fig. 2

Observed and decomposed images (size \(480\times 640\)): (a) observation \(b\), (b) low-rank component, (c) sparse component, (d) smooth component

In the third series of experiments, we compare the CoMP algorithm with some other first-order methods. To the best of our knowledge, only a quite limited set of known methods is readily applicable to problems of the form (58), where the “observation-fitting” component in the objective is nonsmooth and the penalty terms involve different components of the observed image. As a result, we compared CoMP to just two alternatives. The first, referred to below as smoothing-APG, applies Nesterov’s smoothing technique to both the first \(\Vert \cdot \Vert _2\) term and the total variation term in the objective of (58) and then uses the Accelerated Proximal Gradient method (see [20, 21] for details) to solve the resulting problem, which takes the form

$$\begin{aligned} \min \limits _{y^1,y^2,y^3\in {\mathbf {R}}^{m\times n}}\left\{ f_{\rho _1}(y^1,y^2,y^3)+\mu _1\Vert y^1\Vert _{{\mathrm{nuc}}} +\mu _2\Vert y^2\Vert _1+f_{\rho _2}(y^3)\right\} \end{aligned}$$
(61)

with

$$\begin{aligned} f_{\rho _1}(y^1,y^2,y^3)&= \max _{z:\Vert z\Vert _2\le 1}\left\{ \langle P_\varOmega (y^1+y^2+y^3) -b, z\rangle -\frac{\rho _1}{2}\Vert z\Vert _2^2\right\} \\ f_{\rho _2}(y^3)&= \max _{w:\Vert w\Vert _\infty \le 1}\left\{ \mu _3\langle Ty^3,w\rangle -\frac{\rho _2}{2}\Vert w\Vert _2^2\right\} \end{aligned}$$

where \(\rho _1>0,\rho _2>0\). In the experiment, we specified the smoothing parameters as \(\rho _1=\epsilon , \rho _2=\frac{\epsilon }{2(n-1)n},\epsilon =10^{-3}\).

The second alternative, referred to as smoothing-ADMM, applies the smoothing technique to the first term in the objective of (58) and uses the ADMM algorithm to solve the resulting problem

Fig. 3

Comparing CoMP, smoothing-APG, and smoothing-ADMM on problem (58) with \(128\times 128\) matrix. \(x\)-axis: CPU time; \(y\)-axis: relative inaccuracy in terms of the objective. Platform: MATLAB on Intel i5-2400S @2.5GHz CPU with 4GB RAM, 64-bit Windows 7

$$\begin{aligned} \begin{array}{rl} \min \limits _{y^1,y^2,y^3\in {\mathbf {R}}^{m\times n},\,z}&{}\left\{ f_{\rho _1}(y^1,y^2,y^3)+\mu _1\Vert y^1\Vert _{{\mathrm{nuc}}} +\mu _2\Vert y^2\Vert _1+\mu _3\Vert z\Vert _1\right\} \\ \text {s.t. }&{} Ty^3 - z = 0 \end{array} \end{aligned}$$
(62)

the associated augmented Lagrangian being

$$\begin{aligned} L_{\nu }(x=[y^1,y^2,y^3],z;w)&= f_{\rho _1}(y^1,y^2,y^3)+\mu _1\Vert y^1\Vert _{{\mathrm{nuc}}} +\mu _2\Vert y^2\Vert _1+\mu _3\Vert z\Vert _1\\&\quad +\langle w, Ty^3-z\rangle +\frac{\nu }{2}\Vert Ty^3-z\Vert _2^2 \end{aligned}$$

where \(\nu >0\) is a parameter. The basic version of ADMM would require alternating \(x=(y^1,y^2,y^3)\)-updates and \(z\)-updates. Since minimizing \(L_{\nu }\) in \(x\) in closed analytic form is impossible, we are forced to perform the \(x\)-update iteratively and hence inexactly. In our experiment, we used the Accelerated Proximal Gradient method for this purpose, with three implementations differing in the allowed number of inner iterations (5, 20, and 50, respectively).

In the experiment, we generated synthetic data in the same fashion as in the first series of experiments and compared the performance of the three algorithms (CoMP and the two alternatives just described) by computing the accuracy in terms of the objective achieved within a prescribed time budget. The results are presented in Fig. 3. One can see that the performance of ADMM heavily depends on the allowed number of inner iterations and is not better than the performance of the Accelerated Proximal Gradient algorithm as applied to the smooth approximation of the problem of interest. Our algorithm, although not consistently outperforming the smoothing-APG approach, can still be very competitive, especially when only low accuracy is required.

5 Semi-separable convex problems

5.1 Preliminaries

Our problem of interest in this section is problem (4), (6), namely,

$$\begin{aligned} {\hbox {Opt}}&= \min \limits _{[y^1;\ldots ;y^K] \in Y_1\times \cdots \times Y_K} \left\{ f\left( [y^1;\ldots ;y^K]\right) := \sum _{k=1}^K[\psi _k(y^k)\right. \nonumber \\&\quad \left. +\,\varPsi _k(y^k)]: \;\sum _{k=1}^K A_ky^k =b\right\} \nonumber \\&=\min \limits _{\left[ y^1;\ldots ;y^K\right] \in Y_1\times \cdots \times Y_K}\left\{ \sum \limits _{k=1}^K\left[ \psi _k(y^k)+\varPsi _k(y^k)\right] :\;g\left( [y^1;\ldots ;y^K]\right) \le 0 \right\} ,\nonumber \\&\qquad g\left( [y^1;\ldots ;y^K]\right) =\pi ^*\left( \sum \limits _{k=1}^K A_ky^k-b\right) =\max \limits _{\pi (w)\le 1} \left\langle \sum \limits _{k=1}^K A_ky^k-b,w \right\rangle ,\nonumber \\ \end{aligned}$$
(63)

where \(\pi (\cdot )\) is some norm and \(\pi ^*(\cdot )\) is the conjugate norm. A straightforward approach to (63) would be to rewrite it as a saddle point problem

$$\begin{aligned} \min _{\left[ y^1;\ldots ;y^K\right] \in Y_1\times \cdots \times Y_K}\max _{w}\left\{ \sum _{k=1}^K\left[ \psi _k(y^k)+\varPsi _k(y^k)\right] +\left\langle \sum _{k=1}^KA_ky^k-b,w\right\rangle \right\} \end{aligned}$$
(64)

and to solve the latter by the Mirror-Prox algorithm from Sect. 3.2 adjusted to work with an unbounded domain \(U\); alternatively, we could replace \(\max _w\) with \(\max _{w:\;\pi (w)\le R}\) for “large enough” \(R\) and use the above algorithm “as is.” The potential problem with this approach is that if the \(w\)-component \(w^*\) of the saddle point of (64) is of large \(\pi \)-norm (or the “large enough” \(R\) is indeed large), the (theoretical) efficiency estimate would be poor, since it is proportional to the magnitude of \(w^*\) (resp., to \(R\)). To circumvent this difficulty, we apply to (63) the sophisticated policy originating from [15]. This policy requires the set \(Y=Y_1\times \cdots \times Y_K\) to be bounded, which we assume below.

Course of action Note that our problem of interest is of the generic form

$$\begin{aligned} {\hbox {Opt}}=\min _{y\in Y}\left\{ f(y): \;g(y)\le 0\right\} \end{aligned}$$
(65)

where \(Y\) is a convex compact set in a Euclidean space \(E\), \(f\) and \(g:\;Y\rightarrow {\mathbf {R}}\) are convex and Lipschitz continuous functions. For the time being, we focus on (65) and assume that the problem is feasible and thus solvable.

We intend to solve (65) by the generic algorithm presented in [15]; for our present purposes, the following description of the algorithm will do:

  1.

    The algorithm works in stages. Stage \(s=1,2,...\) is associated with working parameter \(\alpha _s\in (0,1)\). We set \(\alpha _1=\frac{1}{2}\).

  2.

    At stage \(s\), we apply a first order method \(\mathcal{{B}}\) to the problem

    $$\begin{aligned} {(P_s)} \quad {\hbox {Opt}}_s=\min _{y\in Y} \left\{ f_s(y)=\alpha _s f(y)+(1-\alpha _s) g(y)\right\} \end{aligned}$$
    (66)

The only property of the algorithm \(\mathcal{{B}}\) which matters here is its ability, when run on \((P_s)\), to produce in the course of \(t=1,2, \ldots \) steps iterates \(y_{s,t}\), upper bounds \(\overline{f}_s^t\) on \({\hbox {Opt}}_s\), and lower bounds \(\underline{f}_{s,t}\) on \({\hbox {Opt}}_s\) in such a way that

  (a)

    for every \(t=1,2, \ldots \), the \(t\)-th iterate \(y_{s,t}\) of \(\mathcal{{B}}\) as applied to \((P_s)\) belongs to \(Y\);

  (b)

    the upper bounds \(\overline{f}_s^t\) are nonincreasing in \(t\) (this is “for free”) and “are achievable,” that is, they are of the form

    $$\begin{aligned} \overline{f}_s^t=f_s(y^{s,t}), \end{aligned}$$

    where \(y^{s,t}\in Y\) is a vector which we have at our disposal at step \(t\) of stage \(s\);

  (c)

    the lower bounds \(\underline{f}_{s,t}\) should be nondecreasing in t (this again is “for free”);

  (d)

    for some nonincreasing sequence \(\epsilon _t\rightarrow +0\), \(t\rightarrow \infty \), we should have

    $$\begin{aligned} \overline{f}_s^t-\underline{f}_{s,t}\le \epsilon _t \end{aligned}$$

    for all \(t\) and \(s\).

Note that since (65) is solvable, we clearly have \({\hbox {Opt}}_s\le \alpha _s{\hbox {Opt}}\), implying that the quantity \(\underline{f}_{s,t}/\alpha _s\) is a lower bound on \({\hbox {Opt}}\). Thus, at step \(t\) of stage \(s\) we have at our disposal a number of valid lower bounds on \({\hbox {Opt}}\); we denote the best (the largest) of these bounds \(\underline{{\hbox {Opt}}}_{s,t}\), so that

$$\begin{aligned} {\hbox {Opt}}\ge \underline{{\hbox {Opt}}}_{s,t}\ge \underline{f}_{s,t}/\alpha _s \end{aligned}$$
(67)

for all \(s,t\), and \(\underline{{\hbox {Opt}}}_{s,t}\) is nondecreasing in time (Footnote 9).

  3.

    When the First Order oracle is invoked at step \(t\) of stage \(s\), we get at our disposal a triple \((y_{s,t}\in Y,f(y_{s,t}),g(y_{s,t}))\). We assume that all these triples are somehow memorized. Thus, after calling the First Order oracle at step \(t\) of stage \(s\), we have at our disposal a finite set \(Q_{s,t}\) on the 2D plane such that for every point \((p,q)\in Q_{s,t}\) we have at our disposal a vector \(y_{pq}\in Y\) such that \(f(y_{pq})\le p\) and \(g(y_{pq})\le q\); the set \(Q_{s,t}\) (in today’s terminology, a filter) comprises all pairs \((f(y_{s',t'}),g(y_{s',t'}))\) generated so far. We set

    $$\begin{aligned} h_{s,t}(\alpha )&= \min _{(p,q)\in Q_{s,t}} \left[ \alpha (p-\underline{{\hbox {Opt}}}_{s,t}) + (1-\alpha ) q\right] : [0,1]\rightarrow {\mathbf {R}},\nonumber \\ {\hbox {Gap}}(s,t)&= \max \limits _{0\le \alpha \le 1} h_{s,t}(\alpha ). \end{aligned}$$
    (68)
  4.

    Let \(\varDelta _{s,t}=\{\alpha \in [0,1]: h_{s,t}(\alpha )\ge 0\}\), so that \(\varDelta _{s,t}\) is a segment in \([0,1]\). Unless we have arrived at \({\hbox {Gap}}(s,t)=0\) (i.e., got an optimal solution to (65), see (69)), \(\varDelta _{s,t}\) is not a singleton (since otherwise \({\hbox {Gap}}(s,t)\) would be 0). Observe also that the segments \(\varDelta _{s,t}\) are nested: \(\varDelta _{s',t'}\subset \varDelta _{s,t}\) whenever \(s'>s\), as well as whenever \(s'=s\) and \(t'\ge t\).

    We continue the iterations of stage \(s\) while \(\alpha _s\) is “well-centered” in \(\varDelta _{s,t}\), e.g., belongs to the mid-third of the segment. When this condition is violated, we start stage \(s+1\), specifying \(\alpha _{s+1}\) as the midpoint of \(\varDelta _{s,t}\); a minimal sketch of this bookkeeping is given below.
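The sketch below is our own code (the grid search over \(\alpha \) is for illustration only, since \(h_{s,t}\) is concave and piecewise linear and can also be maximized exactly):

```python
import numpy as np

def gap_and_segment(filter_pairs, opt_lower, grid=10001):
    """Given the filter {(p, q)} and the current lower bound on Opt, return
    Gap(s, t) and the segment Delta = {alpha in [0,1]: h(alpha) >= 0}."""
    alphas = np.linspace(0.0, 1.0, grid)
    P = np.array([p for p, _ in filter_pairs]) - opt_lower
    Q = np.array([q for _, q in filter_pairs])
    h = np.min(np.outer(alphas, P) + np.outer(1.0 - alphas, Q), axis=1)
    gap = float(h.max())
    feas = alphas[h >= 0]
    delta = (float(feas[0]), float(feas[-1])) if feas.size else None
    return gap, delta

def well_centered(alpha, delta):
    """alpha is 'well-centered' if it belongs to the mid-third of Delta."""
    lo, hi = delta
    return lo + (hi - lo) / 3.0 <= alpha <= hi - (hi - lo) / 3.0
```

At every step one updates the filter, recomputes \({\hbox {Gap}}\) and \(\varDelta \), and either keeps \(\alpha _s\) (if it is still well-centered) or opens stage \(s+1\) with \(\alpha _{s+1}\) set to the midpoint of the current segment.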

The properties of the aforementioned routine are summarized in the following statement (cf. [15]).

Proposition 5

(i) \({\hbox {Gap}}(s,t)\) is nonincreasing in time. Furthermore, at step \(t\) of stage \(s\), we have at our disposal a solution \(\widehat{y}^{s,t}\in Y\) to (65) such that

$$\begin{aligned} f(\widehat{y}^{s,t})\le {\hbox {Opt}}+ {\hbox {Gap}}(s,t),\;\;\text{ and }\;\; g(\widehat{y}^{s,t}) \le {\hbox {Gap}}(s,t), \end{aligned}$$
(69)

so that \(\widehat{y}^{s,t}\) belongs to the domain \(Y\) of problem (65) and is both \({\hbox {Gap}}(s,t)\)-feasible and \({\hbox {Gap}}(s,t)\)-optimal.

(ii) For every \(\epsilon >0\), the number \(s(\epsilon )\) of stages until a pair \((s,t)\) with \({\hbox {Gap}}(s,t)\le \epsilon \) is found obeys the bound

$$\begin{aligned} s(\epsilon )\le {\ln \left( {3L\epsilon ^{-1}}\right) \over \ln \left( 4/3\right) }, \end{aligned}$$
(70)

where \(L<\infty \) is an a priori upper bound on \(\max _{y\in Y}\max [|f(y)|,|g(y)|]\). Besides this, the number of steps at each stage does not exceed

$$\begin{aligned} T(\epsilon )=\min \left\{ t\ge 1: \epsilon _t\le {\epsilon \over 3}\right\} +1. \end{aligned}$$
(71)

5.2 Composite Mirror Prox algorithm for semi-separable optimization

We are about to apply the approach above to the semi-separable problem (63), (6). The problem setup we consider now is as follows (cf. Sect. 4.1). For every \(k\), \(1\le k\le K\), we are given

  1.

    Euclidean spaces \(E_k\) and \({\overline{E}}_k\) along with their nonempty closed and bounded convex subsets \(Y_k\) and \(Z_k\), respectively;

  2.

    proximal setups for \((E_k,Y_k)\) and \(({\overline{E}}_k, Z_k)\), that is, norms \(p_k(\cdot )\) on \(E_k\), norms \(q_k\) on \({\overline{E}}_k\), and d.g.f.’s \(\omega _k(\cdot ):Y_k\rightarrow {\mathbf {R}}\), \({\overline{\omega }}_k(\cdot ):Z_k\rightarrow {\mathbf {R}}\), which are compatible with \(p_k(\cdot )\) and \(q_k(\cdot )\), respectively;

  3.

    linear mapping \(y^k\mapsto A_k y^k:E_k\rightarrow E\), where \(E\) is a Euclidean space;

  4.

    Lipschitz continuous convex functions \(\psi _k(y^k):Y_k\rightarrow {\mathbf {R}}\) along with their saddle point representations

    $$\begin{aligned} \psi _k(y^k)=\sup _{z^k\in Z_k}{[}\phi _k(y^k,z^k)-{\overline{\varPsi }}_k(z^k){]},\;\;1\le k\le K, \end{aligned}$$
    (72)

    where \(\phi _k(y^k,z^k):Y_k\times Z_k\rightarrow {\mathbf {R}}\) are smooth (with Lipschitz continuous gradients) functions convex in \(y^k\in Y_k\) and concave in \(z^k\in Z_k\), and \({\overline{\varPsi }}_k(z^k):Z_k\rightarrow {\mathbf {R}}\) are Lipschitz continuous convex functions such that the problems of the form

    $$\begin{aligned} \min \limits _{z^k\in Z_k} \left[ {\overline{\omega }}_k(z^k)+\langle \xi ^k,z^k\rangle +\alpha {\overline{\varPsi }}_k(z^k)\right] \quad [\alpha >0] \end{aligned}$$
    (73)

    are easy to solve;

  5.

    Lipschitz continuous convex functions \(\varPsi _k(y^k):Y_k\rightarrow {\mathbf {R}}\) such that the problems of the form

    $$\begin{aligned} \min \limits _{y^k\in Y_k}\left[ \omega _k(y^k)+\langle \xi ^k,y^k\rangle +\alpha \varPsi _k(y^k)\right] \quad [\alpha >0] \end{aligned}$$

    are easy to solve;

  6.

    a norm \(\pi ^*(\cdot )\) on \(E\), with conjugate norm \(\pi (\cdot )\), along with a d.g.f. \(\widehat{\omega }(\cdot ):W:=\{w\in E:\pi (w)\le 1\}\rightarrow {\mathbf {R}}\) which is compatible with \(\pi (\cdot )\) and such that problems of the form

    $$\begin{aligned} \min _{w\in W}\left[ \widehat{\omega }(w)+\langle \xi ,w\rangle \right] \end{aligned}$$

    are easy to solve.

The outlined data define the sets

$$\begin{aligned} Y^+_k&= \left\{ [y^k;\tau ^k]: \;y^k\in Y_k,\tau ^k\ge \varPsi _k(y^k)\right\} \subset E_k^+:=E_k\times {\mathbf {R}},\,\,1\le k\le K,\\ Z^+_k&= \left\{ [z^k;\sigma ^k]: \;z^k\in Z_k,\sigma ^k\ge {\overline{\varPsi }}_k(z^k)\right\} \subset {\overline{E}}_k^+:={\overline{E}}_k\times {\mathbf {R}},\,\,1\le k\le K. \end{aligned}$$

The problem of interest here is problem (63), (72):

$$\begin{aligned} {\hbox {Opt}}&= \min \limits _{[y^1;\ldots ;y^K]}\max \limits _{[z^1;\ldots ;z^K]} \left\{ \sum _{k=1}^K[\phi _k(y^k,z^k)+\varPsi _k(y^k)\right. \nonumber \\&\quad -\,{\overline{\varPsi }}_k(z^k)]: \;\;\pi ^*\left( \sum \limits _{k=1}^K A_ky^k-b\right) \le 0,\nonumber \\&\left. [y^1;\ldots ;y^K] \in Y_1\times \cdots \times Y_K,\;\;[z^1;\ldots ;z^K]\in Z_1\times \cdots \times Z_K\right\} \nonumber \\&= \min \limits _{\left\{ [y^k;\tau ^k]\right\} _{k=1}^K}\max \limits _{\left\{ [z^k;\sigma ^k]\right\} _{k=1}^K} \left\{ \sum _{k=1}^K[\phi _k(y^k,z^k)+\tau ^k\right. \nonumber \\&\quad -\,\sigma ^k]:\;\; \max \limits _{w\in W} \left\langle \sum \limits _{k=1}^K A_ky^k-b,w \right\rangle \le 0,\nonumber \\&\left. \left\{ [y^k;\tau ^k]\in Y^+_k\right\} _{k=1}^K,\;\;\left\{ [z^k;\sigma ^k]\in Z^+_k\right\} _{k=1}^K,\;w\in W \right\} . \end{aligned}$$
(74)

Solving (74) using the approach in the previous section amounts to resolving a sequence of problems \((P_s)\) as in (66) where, with a slight abuse of notation,

$$\begin{aligned} Y&= \left\{ y=\left\{ [y^k;\tau ^k]\right\} _{k=1}^K:\;[y^k;\tau ^k]\in Y^+_k, \;\tau ^k\le C_k,\,1\le k\le K\right\} ;\\ f(y)&= \max \limits _{z=\left\{ [z^k;\sigma ^k]\right\} _{k=1}^K} \left\{ \sum _{k=1}^K\left[ \phi _k(y^k,z^k)+\tau ^k-\sigma ^k\right] : \;z\in Z:=Z^+_1\times \cdots \times Z^+_K\right\} ;\\ g(y)&= \max \limits _{w}\left\{ \left\langle \sum _{k=1}^K A_ky^k-b,w\right\rangle :\;w\in W\right\} . \end{aligned}$$

Here \(C_k\ge \max _{y^k\in Y_k}\varPsi _k(y^k)\) are finite constants introduced to make \(Y\) compact, as required in the premise of Proposition 5; it is immediately seen that the magnitudes of these constants (as well as their very presence) do not affect the algorithm \(\mathcal{{B}}\) we are about to describe.

The algorithm \(\mathcal{{B}}\) we intend to use will solve \((P_s)\) by reducing the problem to the saddle point problem

$$\begin{aligned} \overline{{\hbox {Opt}}}&= \min \limits _{y}\;\max \limits _{[z;w]} \Big \{ \varPhi (y,[z;w]):=\alpha \sum _{k=1}^K\left[ \phi _k(y^k,z^k)+\tau ^k-\sigma ^k\right] \nonumber \\&\qquad \qquad \qquad +\,(1-\alpha )\left\langle \sum _{k=1}^K A_ky^k-b,w\right\rangle :\nonumber \\&\qquad \qquad \qquad y=\left\{ [y^k;\tau ^k]\right\} _{k=1}^K\in Y,\;[z=\left\{ [z^k;\sigma ^k]\right\} _{k=1}^K\in Z;\; w\in W]\Big \}, \end{aligned}$$

where \(\alpha =\alpha _s\).

Setting

$$\begin{aligned} U&= \Bigg \{u=\left[ y^1; \ldots ;y^K;z^1; \ldots ;z^K;w\right] :\;y^k\in Y_k,\;z^k\in Z_k,\;1\le k\!\le \! K, w\in W\Bigg \},\nonumber \\ X&= \Bigg \{\left[ u;v=[\tau ^1; \ldots ;\tau ^K;\sigma ^1; \ldots ;\sigma ^K]\right] :\; u\in U, \;\varPsi _k(y^k)\le \tau ^k\le C_k, \nonumber \\&\quad \; {\overline{\varPsi }}_k(z^k)\le \sigma ^k,\;1\le k\le K\Bigg \}, \end{aligned}$$

\(X\) can be thought of as the domain of the variational inequality associated with (75), the monotone operator in question being

$$\begin{aligned} F(u,v)&= [F_u(u);F_v],\nonumber \\ F_u(u)&= \left[ \begin{array}{l} \left\{ \alpha \nabla _y\phi _k(y^k,z^k)+(1-\alpha )A_k^Tw\right\} _{k=1}^K\\ \left\{ -\alpha \nabla _z\phi _k(y^k,z^k)\right\} _{k=1}^K\\ (1-\alpha )[b-\sum _{k=1}^KA_ky^k] \end{array}\right] ,\nonumber \\ F_v&= \alpha [1; \ldots ;1]. \end{aligned}$$
(75)

For exactly the same reasons as in Sect. 4, with a properly assembled norm on the embedding space of \(U\) and d.g.f., (75) can be solved by the MP algorithm from Sect. 3.2. Let us denote

$$\begin{aligned} \zeta ^{s,t}= \left[ \widehat{y}^{s,t}=\left\{ [\widehat{y}^k;\widehat{\tau }^k]\right\} _{k=1}^K\in Y;\left[ z^{s,t}\in Z; w^{s,t}\in W\right] \right] \end{aligned}$$

the approximate solution obtained in the course of \(t=1,2, \ldots \) steps of CoMP when solving \((P_s)\), and let

$$\begin{aligned} \widehat{f}_s^t:=\max _{z\in Z,w\in W}\varPhi (\widehat{y}^{s,t},[z;w])=\alpha \sum _{k=1}^K[\psi _k(\widehat{y}^k) +\widehat{\tau }^k]+(1-\alpha )\pi ^*\left( \sum _{k=1}^KA_k\widehat{y}^k-b\right) \end{aligned}$$

be the corresponding value of the objective of \((P_s)\). It holds

$$\begin{aligned} \widehat{f}_s^t-\overline{{\hbox {Opt}}}\le {\epsilon _{{\tiny \mathrm Sad}}}(\zeta ^{s,t}\big | Y,Z\times W,\varPhi )\le \epsilon _t:=O(1){\mathcal{{L}}}/t, \end{aligned}$$
(76)

where \({\mathcal{{L}}}<\infty \) is explicitly given by the proximal setup we use and by the related Lipschitz constant of \(F_u(\cdot )\) (note that this constant can be chosen to be independent of \(\alpha \in [0,1]\)). We assume that computing the corresponding objective value is a part of step \(t\) (these computations increase the complexity of a step by at most a factor of \(O(1)\)), and thus that \(\overline{f}_s^t\le \widehat{f}_s^t\). By (76), the quantity \(\widehat{f}_s^t-\epsilon _t\) is a valid lower bound on the optimal value of \((P_s)\), and thus we can ensure that \(\underline{f}_{s,t}\ge \widehat{f}_s^t-\epsilon _t\). The bottom line is that with the outlined implementation, we have

$$\begin{aligned} \overline{f}_s^t-\underline{f}_{s,t}\le \epsilon _t \end{aligned}$$

for all \(s,t\), with \(\epsilon _t\) given by (76). Consequently, by Proposition 5, the total number of CoMP steps needed to find an \(\epsilon \)-feasible and \(\epsilon \)-optimal solution to the problem of interest (63) belonging to the problem’s domain can be upper-bounded by

$$\begin{aligned} O(1)\ln \left( {3L\over \epsilon }\right) \left( {{\mathcal{{L}}}\over \epsilon }\right) , \end{aligned}$$

where \(L\) and \({\mathcal{{L}}}\) are readily given by the smoothness parameters of \(\phi _k\) and by the proximal setup we use.

5.3 Numerical illustration: \(\ell _1\)-minimization

Problem of interest We consider the simple \(\ell _1\) minimization problem

$$\begin{aligned} \min \limits _{x\in X} \left\{ \Vert x\Vert _1:\; Ax = b\right\} \end{aligned}$$
(77)

where \(x\in {\mathbf {R}}^n\), \(A\in {\mathbf {R}}^{m\times n}\) and \(m<n\). Note that this problem can also be written in the semi-separable form

$$\begin{aligned} \min \limits _{x\in X} \left\{ \sum _{k=1}^K\Vert x_k\Vert _1: \;\sum _{k=1}^KA_kx_k = b\right\} \end{aligned}$$

if the data is partitioned into \(K\) blocks: \(x=[x_1;x_2;\ldots ;x_K]\) and \(A=[A_1,A_2,\) \(\ldots ,A_K]\).

Our main purpose here is to test the approach described in Sect. 5.1 and compare it to the simplest approach, in which we directly apply CoMP to the (saddle point reformulation of the) problem \(\min _{x\in X}[\Vert x\Vert _1+R\Vert Ax-b\Vert _2]\) with a large enough value of \(R\). For the sake of simplicity, we work with the case when \(K=1\) and \(X=\{x\in {\mathbf {R}}^n:\Vert x\Vert _2\le 1\}\).

Generating the data In the experiments to be reported, the data of (77) were generated as follows. Given \(m,n\), we first build a sparse solution \(x^*\) by drawing a random vector from the standard Gaussian distribution \(\mathcal{N}(0,I_n)\), zeroing out part of the entries, and scaling the resulting vector to enforce \(x^*\in X\). We also build a dual solution \(\lambda ^*\) by scaling a random vector from the distribution \(\mathcal{N}(0,I_m)\) to satisfy \(\Vert \lambda ^*\Vert _2=R_*\) for a prescribed \(R_*\). Next we generate \(A\) and \(b\) such that \(x^*\) and \(\lambda ^*\) are indeed the optimal primal and dual solutions to the \(\ell _1\) minimization problem (77), i.e., \(A^T\lambda ^*\in \partial \big |_{x=x^*}\Vert x\Vert _1\) and \(Ax^*=b\). To achieve this, we set

$$\begin{aligned} A = \frac{1}{\sqrt{n}}\widehat{F}_n+ pq^T,\; b = Ax^* \end{aligned}$$

where \( p=\frac{\lambda ^*}{\Vert \lambda ^*\Vert _2^2}\), \(q\in \partial \big |_{x=x^*} \Vert x\Vert _1-\frac{1}{\sqrt{n}}\widehat{F}_n^T \lambda ^*\), and \(\widehat{F}_n\) is an \(m\times n\) submatrix randomly selected from the DFT matrix \(F_n\). We expect that the larger the \(\Vert \cdot \Vert _2\)-norm \(R_*\) of the dual solution, the harder problem (77) is.
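A sketch of this construction (a Python/NumPy helper of our own; using the real part of randomly selected DFT rows is one possible reading of the construction above, and the scaling of \(x^*\) inside \(X\) is our choice):

```python
import numpy as np

def generate_l1_data(m, n, R_star, sparsity=0.05, seed=0):
    """Build A = (1/sqrt(n)) F_hat + p q^T and b = A x* so that the sparse x*
    and the dual lambda* (of norm R_star) satisfy the optimality conditions
    A^T lambda* in d||.||_1(x*) and A x* = b."""
    rng = np.random.default_rng(seed)
    x_star = rng.standard_normal(n) * (rng.random(n) < sparsity)
    x_star *= 0.9 / max(1.0, np.linalg.norm(x_star))      # keep x* inside X
    lam = rng.standard_normal(m)
    lam *= R_star / np.linalg.norm(lam)
    # m randomly selected rows of the DFT matrix (real part), scaled by 1/sqrt(n)
    F_hat = np.fft.fft(np.eye(n)).real[rng.choice(n, size=m, replace=False), :] / np.sqrt(n)
    g = np.sign(x_star)                                    # a subgradient of ||.||_1 at x*
    g[x_star == 0] = rng.uniform(-1.0, 1.0, int(np.sum(x_star == 0)))
    p = lam / np.linalg.norm(lam) ** 2
    q = g - F_hat.T @ lam
    A = F_hat + np.outer(p, q)                             # A^T lam = g by construction
    b = A @ x_star
    return A, b, x_star, lam
```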

Implementing the algorithm When implementing the algorithm from Sect. 5.2, we apply CoMP at each stage \(s=1,2, \ldots \) to the saddle point problem

$$\begin{aligned} (P_s): \quad \min _{\begin{array}{c} x,\tau :\; \Vert x\Vert _2\le 1, \tau \ge \Vert x\Vert _1 \end{array}}\max _{w:\Vert w\Vert _2\le 1} \left\{ \alpha _s \tau +(1-\alpha _s)\langle Ax-b,w\rangle \right\} . \end{aligned}$$

The proximal setup for CoMP is given by equipping the embedding space of \(U=\{u=[x;w]:x\in X,\Vert w\Vert _2\le 1\}\) with the norm \(\Vert u\Vert _2=\sqrt{\frac{1}{2}\Vert x\Vert _2^2+\frac{1}{2}\Vert w\Vert _2^2}\) and equipping \(U\) with the d.g.f. \(\omega (u)=\frac{1}{2}\Vert x\Vert _2^2+\frac{1}{2}\Vert w\Vert _2^2\). In the sequel we refer to the resulting algorithm as sequential CoMP. For comparison, we solve the same problem by applying CoMP to the saddle point problem

$$\begin{aligned} (P_R): \quad \min _{\begin{array}{c} x,\tau :\;\Vert x\Vert _2\le 1, \tau \ge \Vert x\Vert _1 \end{array}}\max _{w:\Vert w\Vert _2\le 1} \left\{ \tau +R\langle Ax-b,w\rangle \right\} \end{aligned}$$

with \(R=R_*\); the resulting algorithm is referred to as simple CoMP. Both sequential CoMP and simple CoMP algorithms are terminated when the relative nonoptimality and constraint violation are both less than \(\epsilon =10^{-5}\), namely,

$$\begin{aligned} \epsilon (x) :=\max \left\{ \frac{\Vert x\Vert _1-\Vert x_*\Vert _1}{\Vert x_*\Vert _1}, \Vert Ax-b\Vert _2\right\} \le 10^{-5}. \end{aligned}$$
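In code, this stopping test is simply (a sketch; \(x_*\) is the known optimal solution used to generate the data):

```python
import numpy as np

def terminated(x, x_star, A, b, tol=1e-5):
    """Relative nonoptimality in the l1-objective and constraint violation."""
    rel_gap = (np.linalg.norm(x, 1) - np.linalg.norm(x_star, 1)) / np.linalg.norm(x_star, 1)
    return max(rel_gap, np.linalg.norm(A @ x - b)) <= tol
```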
Table 5 \(\ell _1\)-minimization

Numerical results are presented in Table 5. One can immediately see that to achieve the desired accuracy, the simple CoMP with \(R\) set to \(R_*\), i.e., to the exact magnitude of the true Lagrange multiplier, requires almost twice as many steps as the sequential CoMP. In more realistic examples, the simple CoMP will additionally suffer from the fact that the magnitude of the optimal Lagrange multiplier is not known in advance, so the penalty \(R\) in \((P_R)\) would have to be somehow tuned “online.”