1 Introduction

In this paper, we explore a straightforward approach to the problem of computing the resolvent of the sum of two (not necessarily monotone) operators from the resolvents of the individual operators. When applied to normal cones of convex sets, this computation solves the best approximation problem of finding the projection onto the intersection of these sets.

In general, computations involving two or more operators simultaneously are usually difficult. One popular approach is to treat each operator individually and then use these calculations to construct the desired answer. Prominent examples of such a splitting strategy include the Douglas–Rachford algorithm [9, 11] and the Peaceman–Rachford algorithm [12], which apply to the problem of finding a zero of the sum of maximally monotone operators. In [3], the authors proposed an extension of Dykstra’s algorithm [10] for constructing the resolvent of the sum of two maximally monotone operators. Via a product space reformulation, this problem was then handled in [5] for finitely many operators. Recently, the so-called averaged alternating modified reflections algorithm was used in [2] to study this problem, and was soon after re-derived in [1] from the viewpoint of the proximal and resolvent average. Because computing the resolvent of a finite sum of operators can be transformed into that of the sum of two operators by a standard product space setting, as done in [2, 5], we will focus on the case of two operators for simplicity.

The goal of this paper is to provide a flexible approach for computing the resolvent of the sum of two weakly monotone operators from individual resolvents. Our work extends and complements recent results in this direction. We also present applications to computing the proximity operator of the sum of two weakly convex functions and to finding the best approximation to the intersection of two convex sets.

The paper is organized as follows. In Sect. 2, we provide the necessary preliminary material. Sect. 3 contains our main results. Finally, applications are presented in Sect. 4.

2 Preparation

We assume throughout that X is a real Hilbert space with inner product \(\left\langle {\cdot },{\cdot } \right\rangle \) and induced norm \(\Vert \cdot \Vert \). The set of nonnegative integers is denoted by \(\mathbb {N}\), the set of real numbers by \(\mathbb {R}\), the set of nonnegative real numbers by \({\mathbb {R}}_+:= \{{x \in \mathbb {R}}~\big |~{x \ge 0}\}\), and the set of positive real numbers by \({\mathbb {R}}_{++}:= \{{x \in \mathbb {R}}~\big |~{x >0}\}\). The notation \(A:X\rightrightarrows X\) indicates that A is a set-valued operator on X.

Given an operator A on X, its domain is denoted by \({\text {dom}}A :=\{{x\in X}~\big |~{Ax\ne \varnothing }\}\), its range by \({\text {ran}}A :=A(X)\), its graph by \({\text {gra}}A :=\{{(x,u)\in X\times X}~\big |~{u\in Ax}\}\), its set of zeros by \({\text {zer}}A :=\{{x\in X}~\big |~{0\in Ax}\}\), and its fixed point set by \({\text {Fix}}A :=\{{x\in X}~\big |~{x\in Ax}\}\). The inverse of A, denoted by \(A^{-1}\), is the operator with graph \({\text {gra}}A^{-1} :=\{{(u,x)\in X\times X}~\big |~{u\in Ax}\}\). Recall from [8, Definition 3.1] that an operator \(A:X\rightrightarrows X\) is said to be \(\alpha \)-monotone if \(\alpha \in \mathbb {R}\) and

$$\begin{aligned} \forall (x,u), (y,v)\in {\text {gra}}A,\quad \left\langle {x-y},{u-v} \right\rangle \ge \alpha \Vert x-y\Vert ^2. \end{aligned}$$
(1)

In this case, we say that A is monotone if \(\alpha =0\), strongly monotone if \(\alpha >0\), and weakly monotone if \(\alpha <0\). The operator A is said to be maximally \(\alpha \)-monotone if it is \(\alpha \)-monotone and there is no \(\alpha \)-monotone operator \(B:X\rightrightarrows X\) such that \({\text {gra}}B\) properly contains \({\text {gra}}A\). It is worth mentioning that if A is maximally \(\alpha \)-monotone with \(\alpha \in {\mathbb {R}}_+\), then it is maximally monotone (see [8, Section 3]). Furthermore, it is also clear that A is (resp. maximally) \(\alpha \)-monotone if and only if \(A-\alpha {\text {Id}}\) is (resp. maximally) monotone, where \({\text {Id}}\) is the identity operator.

The resolvent of \(A:X\rightrightarrows X\) is defined by \(J_A:= ({\text {Id}}+ A)^{-1}\). We conclude this section with an elementary formula for computing the resolvent of a special composition via the resolvents of its components.

Proposition 2.1

(Resolvent of composition) Let \(A:X\rightrightarrows X\), \(q, r\in X\), \(\theta \in {\mathbb {R}}_{++}\), and \(\sigma \in \mathbb {R}\). Define \(\bar{A} :=A\circ (\theta {\text {Id}}-q)+\sigma {\text {Id}}-r\) and let \(\gamma \in {\mathbb {R}}_{++}\). Then the following hold:

  (i) A is (resp. maximally) \(\alpha \)-monotone if and only if \(\bar{A}\) is (resp. maximally) \((\theta \alpha +\sigma )\)-monotone.

  (ii) If \(1+\gamma \sigma \ne 0\), then

    $$\begin{aligned} J_{\gamma {\bar{A}}} =\frac{1}{\theta }\left( J_{\frac{\gamma \theta }{1+\gamma \sigma }A}\circ \left( \frac{\theta }{1+\gamma \sigma }{\text {Id}}+\frac{\gamma \theta }{1+\gamma \sigma }r-q\right) +q\right) ; \end{aligned}$$
    (2)

    and if, in addition, A is maximally \(\alpha \)-monotone and \(1+\gamma (\theta \alpha +\sigma ) >0\), then \(J_{\gamma \bar{A}}\) and \(J_{\frac{\gamma \theta }{1+\gamma \sigma }A}\) are single-valued and have full domain.

Proof

(i): This is straightforward from the definition.

(ii): We note that \((\theta {\text {Id}}-q)^{-1} =\frac{1}{\theta }({\text {Id}}+q)\), that \((T-z)^{-1} =T^{-1}\circ ({\text {Id}}+z)\), and that \((\alpha T)^{-1} =T^{-1}\circ (\frac{1}{\alpha }{\text {Id}})\) for any operator T, any \(z\in X\), and any \(\alpha \in \mathbb {R}\smallsetminus \{0\}\). Using these facts yields

$$\begin{aligned} J_{\gamma \bar{A}} =&\Big ((1+\gamma \sigma ){\text {Id}}+\gamma A\circ (\theta {\text {Id}}-q) -\gamma r\Big )^{-1} \end{aligned}$$
(3a)
$$\begin{aligned} =&\left( \Big (\frac{1+\gamma \sigma }{\theta }({\text {Id}}+q) +\gamma A\Big )\circ (\theta {\text {Id}}-q)\right) ^{-1}\circ ({\text {Id}}+\gamma r) \end{aligned}$$
(3b)
$$\begin{aligned} =&(\theta {\text {Id}}-q)^{-1}\circ \left( \frac{1+\gamma \sigma }{\theta }{\text {Id}}+\gamma A +\frac{1+\gamma \sigma }{\theta }q\right) ^{-1}\circ ({\text {Id}}+\gamma r) \end{aligned}$$
(3c)
$$\begin{aligned} =&(\theta {\text {Id}}-q)^{-1}\circ \left( \frac{1+\gamma \sigma }{\theta }\Big ({\text {Id}}+\frac{\gamma \theta }{1+\gamma \sigma }A\Big )\right) ^{-1}\nonumber \\&\circ \left( {\text {Id}}-\frac{1+\gamma \sigma }{\theta }q\right) \circ ({\text {Id}}+\gamma r) \end{aligned}$$
(3d)
$$\begin{aligned} =&\frac{1}{\theta }({\text {Id}}+q)\circ \left( {\text {Id}}+\frac{\gamma \theta }{1+\gamma \sigma } A\right) ^{-1}\circ \left( \frac{\theta }{1+\gamma \sigma }{\text {Id}}\right) \nonumber \\&\circ \left( {\text {Id}}+\gamma r-\frac{1+\gamma \sigma }{\theta }q\right) \end{aligned}$$
(3e)
$$\begin{aligned} =&\frac{1}{\theta }({\text {Id}}+q)\circ J_{\frac{\gamma \theta }{1+\gamma \sigma }A}\circ \left( \frac{\theta }{1+\gamma \sigma }{\text {Id}}+\frac{\gamma \theta }{1+\gamma \sigma }r-q\right) \end{aligned}$$
(3f)
$$\begin{aligned} =&\frac{1}{\theta }\left( J_{\frac{\gamma \theta }{1+\gamma \sigma }A}\circ \left( \frac{\theta }{1+\gamma \sigma }{\text {Id}}+\frac{\gamma \theta }{1+\gamma \sigma }r-q\right) +q\right) . \end{aligned}$$
(3g)

Since A is maximally \(\alpha \)-monotone, \(\bar{A}\) is maximally \((\theta \alpha +\sigma )\)-monotone. Now, since \(1+\gamma (\theta \alpha +\sigma ) >0\), [8, Proposition 3.4] implies the conclusion. \(\square \)
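To illustrate Proposition 2.1(ii) numerically, the following Python snippet (an illustrative check of ours, not part of the formal development) verifies formula (2) for the linear operator \(A =a{\text {Id}}\) on \(\mathbb {R}\), whose resolvent is \(J_{\lambda A}(z) =z/(1+\lambda a)\); all parameter values below are arbitrary sample choices.

```python
import numpy as np

# Numerical sanity check of formula (2) for the linear operator A = a*Id
# on R, whose resolvent is J_{lam A}(z) = z/(1 + lam*a).  All parameter
# values below are arbitrary sample choices.
a, theta, sigma, gamma = 1.5, 2.0, 0.7, 0.9
q, r = 0.3, -1.2

def J_A(lam, z):                   # resolvent of A = a*Id
    return z / (1.0 + lam * a)

def J_direct(y):
    # Abar(x) = a*(theta*x - q) + sigma*x - r; solve y = x + gamma*Abar(x).
    return (y + gamma * a * q + gamma * r) / (1.0 + gamma * (a * theta + sigma))

def J_formula(y):                  # right-hand side of (2)
    lam = gamma * theta / (1.0 + gamma * sigma)
    z = theta / (1.0 + gamma * sigma) * y + lam * r - q
    return (J_A(lam, z) + q) / theta

for y in np.linspace(-3.0, 3.0, 7):
    assert np.isclose(J_direct(y), J_formula(y))
print("formula (2) verified at sample points")
```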

3 Main results

In this section, let \(A, B:X\rightrightarrows X\), \(\omega \in {\mathbb {R}}_{++}\), and \(r\in X\). We present a flexible approach to the computation of the resolvent at r of the scaled sum \(\omega (A+B)\), that is, to

$$\begin{aligned} \text {compute}~ J_{\omega (A+B)}(r). \end{aligned}$$
(4)

Our analysis relies on the observation that this problem can be reformulated as the problem of finding a zero of the sum of two suitable operators. Indeed, when \(r\in {\text {dom}}J_{\omega (A+B)} ={\text {ran}}\left( {\text {Id}}+\omega (A+B)\right) \), we have by definition that

$$\begin{aligned} x\in J_{\omega (A+B)}(r) \iff r\in x +\omega (A+B)x \iff 0\in (A+B)x +\frac{1}{\omega }x -\frac{1}{\omega }r.\qquad \end{aligned}$$
(5)

By writing \(\frac{1}{\omega } =\sigma +\tau \) and \(\frac{1}{\omega }r =r_A+r_B\), the last inclusion is equivalent to

$$\begin{aligned} 0\in (A+\sigma {\text {Id}}-r_A)x +(B+\tau {\text {Id}}-r_B)x, \end{aligned}$$
(6)

which leads to finding a zero of the sum of two new operators \(A+\sigma {\text {Id}}-r_A\) and \(B+\tau {\text {Id}}-r_B\).

Based on the above observation, we proceed with a more general formulation. Assume throughout that

$$\begin{aligned} \theta \in {\mathbb {R}}_{++}\quad \text {and}\quad q\in X, \end{aligned}$$
(7)

that \((\sigma ,\tau )\in \mathbb {R}^2\) and \((r_A,r_B)\in X^2\) satisfy

$$\begin{aligned} \sigma +\tau =\frac{\theta }{\omega } \quad \text {and}\quad r_A+r_B =\frac{1}{\omega }(q+r), \end{aligned}$$
(8)

and that

$$\begin{aligned} A_\sigma :=A\circ (\theta {\text {Id}}-q)+\sigma {\text {Id}}-r_A \quad \text {and}\quad B_\tau :=B\circ (\theta {\text {Id}}-q)+\tau {\text {Id}}-r_B. \end{aligned}$$
(9)

We now derive a formula for the resolvent of the scaled sum via the zeros of the sum of these newly defined operators.

Proposition 3.1

(Resolvent via zeros of sum of operators) Suppose that \(r\in {\text {ran}}\left( {\text {Id}}+\omega (A+B)\right) \). Then

$$\begin{aligned} J_{\omega (A+B)}(r) =\theta {\text {zer}}(A_\sigma +B_\tau )-q \ne \varnothing . \end{aligned}$$
(10)

Consequently, if \(A_\sigma +B_\tau \) is strongly monotone, then \(J_{\omega (A+B)}(r)\) and \({\text {zer}}(A_\sigma +B_\tau )\) are singletons.

Proof

By assumption, \(J_{\omega (A+B)}(r)\ne \varnothing \). For every \(z\in X\), we derive from (8) and (9) that

$$\begin{aligned} \theta z-q\in J_{\omega (A+B)}(r) \iff&r\in (\theta z-q)+\omega (A+B)(\theta z-q) \end{aligned}$$
(11a)
$$\begin{aligned} \iff&0\in (A+B)(\theta z-q) +\frac{\theta }{\omega }z -\frac{1}{\omega }(q+r) \end{aligned}$$
(11b)
$$\begin{aligned} \iff&0\in (A+B)(\theta z-q) +(\sigma +\tau )z -(r_A+r_B) \end{aligned}$$
(11c)
$$\begin{aligned} \iff&0\in \big (A(\theta z-q)+\sigma z-r_A\big )\nonumber \\&\quad + \big (B(\theta z-q)+\tau z-r_B\big ) \end{aligned}$$
(11d)
$$\begin{aligned} \iff&z\in {\text {zer}}(A_\sigma +B_\tau ). \end{aligned}$$
(11e)

The remaining conclusion follows from [4, Proposition 23.35]. \(\square \)

The new operators \(A_\sigma \) and \(B_\tau \) along with Proposition 3.1 allow for flexibility in choosing \((\sigma ,\tau )\) and \((r_A,r_B)\): one may assign any values to these parameters as long as (8) is satisfied. We are now ready for our main result.

Theorem 3.2

(Resolvent of sum of \(\alpha \)- and \(\beta \)-monotone operators) Suppose that A and B are respectively maximally \(\alpha \)- and \(\beta \)-monotone with \(\alpha +\beta >-1/\omega \), that \(r\in {\text {ran}}\left( {\text {Id}}+\omega (A+B)\right) \), and that \((\sigma ,\tau )\) satisfies

$$\begin{aligned} \theta \alpha +\sigma >0 \quad \text {and}\quad \theta \beta +\tau \ge 0. \end{aligned}$$
(12)

Let \(\gamma \in {\mathbb {R}}_{++}\) be such that \(1+\gamma \sigma \ne 0\) and \(1+\gamma \tau \ne 0\). Given any \(\kappa \in \left. \right] 0,1 \left. \right] \) and \(x_0\in X\), define the sequence \((x_n)_{n\in {\mathbb {N}}}\) by

$$\begin{aligned} \forall {n\in {\mathbb {N}}}, \quad x_{n+1} :=(1-\kappa )x_n +\kappa (2J_{\gamma B_\tau }-{\text {Id}})\circ (2J_{\gamma A_\sigma }-{\text {Id}})x_n, \end{aligned}$$
(13)

with explicit formulas

$$\begin{aligned} J_{\gamma A_\sigma }&=\frac{1}{\theta }\left( J_{\frac{\gamma \theta }{1+\gamma \sigma }A}\circ \left( \frac{\theta }{1+\gamma \sigma }{\text {Id}}+\frac{\gamma \theta }{1+\gamma \sigma }r_A-q\right) +q\right) \end{aligned}$$
(14a)
$$\begin{aligned} \text {and}~ J_{\gamma B_\tau }&=\frac{1}{\theta }\left( J_{\frac{\gamma \theta }{1+\gamma \tau }B}\circ \left( \frac{\theta }{1+\gamma \tau }{\text {Id}}+\frac{\gamma \theta }{1+\gamma \tau }r_B-q\right) +q\right) . \end{aligned}$$
(14b)

Then \(J_{\omega (A+B)}(r) =J_{\frac{\gamma \theta }{1+\gamma \sigma }A}\left( \frac{\theta }{1+\gamma \sigma }\overline{x}+\frac{\gamma \theta }{1+\gamma \sigma }r_A-q\right) \) with \(\overline{x}\in {\text {Fix}}(2J_{\gamma B_\tau }-{\text {Id}})\circ (2J_{\gamma A_\sigma }-{\text {Id}})\) and the following hold:

  (i) \(\left( J_{\frac{\gamma \theta }{1+\gamma \sigma }A}\left( \frac{\theta }{1+\gamma \sigma }x_n+\frac{\gamma \theta }{1+\gamma \sigma }r_A-q\right) \right) _{n\in {\mathbb {N}}}\) converges strongly to \(J_{\omega (A+B)}(r)\).

  (ii) If \(\kappa <1\), then \((x_n)_{n\in {\mathbb {N}}}\) converges weakly to \(\overline{x}\).

  (iii) If A is Lipschitz continuous, then the convergences in (i) and (ii) are linear.

Proof

We first note that the existence of \((\sigma ,\tau )\in \mathbb {R}^2\) satisfying (8) and (12) is ensured since \(\alpha +\beta >-1/\omega \). By Proposition 2.1(i) and (12), \(A_\sigma \) and \(B_\tau \) are respectively maximally \((\theta \alpha +\sigma )\)- and \((\theta \beta +\tau )\)-monotone with \(\theta \alpha +\sigma >0\) and \(\theta \beta +\tau \ge 0\), hence, by Proposition 2.1(ii), \(J_{\gamma A_\sigma }\) and \(J_{\gamma B_\tau }\) are single-valued and have full domain. We also see that \(A_\sigma \) and \(B_\tau \) are maximally monotone and that \(A_\sigma \) and \(A_\sigma +B_\tau \) are strongly monotone. Using Proposition 3.1 and [4, Proposition 26.1(iii)(b)], we have \({\text {zer}}(A_\sigma +B_\tau ) =\{J_{\gamma A_\sigma }(\overline{x})\}\) with \(\overline{x}\in {\text {Fix}}(2J_{\gamma B_\tau }-{\text {Id}})\circ (2J_{\gamma A_\sigma }-{\text {Id}})\) and \(J_{\omega (A+B)}(r) =\theta J_{\gamma A_\sigma }(\overline{x}) -q\).

Now, Proposition 2.1(ii) implies (14), which yields

$$\begin{aligned} \theta J_{\gamma A_\sigma } -q =J_{\frac{\gamma \theta }{1+\gamma \sigma }A}\circ \left( \frac{\theta }{1+\gamma \sigma }{\text {Id}}+\frac{\gamma \theta }{1+\gamma \sigma }r_A-q\right) . \end{aligned}$$
(15)

Therefore,

$$\begin{aligned} J_{\omega (A+B)}(r) =\theta J_{\gamma A_\sigma }(\overline{x}) -q =J_{\frac{\gamma \theta }{1+\gamma \sigma }A}\left( \frac{\theta }{1+\gamma \sigma }\overline{x}+\frac{\gamma \theta }{1+\gamma \sigma }r_A-q\right) . \end{aligned}$$
(16)

(i): By applying [4, Theorem 26.11(vi)(b)] with all \(\lambda _n =\kappa \) if \(\kappa <1\) and applying [4, Proposition 26.13] if \(\kappa =1\), we obtain that \(J_{\gamma A_\sigma }(x_n)\rightarrow J_{\gamma A_\sigma }(\overline{x})\). Now combine with (15) and (16).

(ii): Again apply [4, Theorem 26.11] with all \(\lambda _n =\kappa \).

(iii): Assume that A is Lipschitz continuous with constant \(\ell \). It is straightforward to see that \(A_\sigma \) is Lipschitz continuous with constant \((\theta \ell +|\sigma |)\). The conclusion follows from [8, Theorem 4.8] with \(\lambda =\mu =2\) and \(\delta =\gamma \). \(\square \)
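Before turning to the remarks, we illustrate Theorem 3.2 with a small Python sketch (ours, for illustration only). It runs iteration (13) for the scalar operators \(A =a{\text {Id}}\) (\(a\)-monotone) and \(B =b{\text {Id}}\) with \(b<0\) (weakly monotone), taking \(\theta =1\) and \(q =0\) so that the resolvents (14) reduce to (18) below; the exact value \(J_{\omega (A+B)}(r) =r/(1+\omega (a+b))\) serves as a check. All numerical values are sample choices satisfying (8) and (12).

```python
# Illustrative run of iteration (13) for A = a*Id (a-monotone) and
# B = b*Id with b < 0 (weakly monotone) on R, using theta = 1 and q = 0
# so that (14) reduces to (18).  Exact answer for comparison:
# J_{w(A+B)}(r) = r / (1 + w*(a+b)).
a, b, w, r = 0.5, -0.4, 1.0, 2.0   # a + b > -1/w
tau = -b                           # theta*beta + tau = 0, cf. (12)
sigma = 1.0 / w - tau              # sigma + tau = theta/w, cf. (8)
rA = rB = r / (2.0 * w)            # rA + rB = (q + r)/w = r/w, cf. (8)
gamma, kappa = 1.0, 0.5            # kappa = 1/2: Douglas-Rachford

def J(lam, c, z):                  # resolvent of c*Id
    return z / (1.0 + lam * c)

def J_shifted(c, s, rs, x):        # J_{gamma*(c*Id + s*Id - rs)}, cf. (18)
    return J(gamma / (1.0 + gamma * s), c, (x + gamma * rs) / (1.0 + gamma * s))

x = 0.0
for _ in range(200):
    y = 2.0 * J_shifted(a, sigma, rA, x) - x
    x = (1.0 - kappa) * x + kappa * (2.0 * J_shifted(b, tau, rB, y) - y)

approx = J(gamma / (1.0 + gamma * sigma), a,
           (x + gamma * rA) / (1.0 + gamma * sigma))
print(approx, r / (1.0 + w * (a + b)))   # both ~ 1.818181...
```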

Remark 3.3

Some remarks regarding Theorem 3.2 are in order.

  (i) Under the assumptions made, \(A+B\) is \((\alpha +\beta )\)-monotone but not necessarily maximal. If, in addition, \(A+B\) is indeed maximally \((\alpha +\beta )\)-monotone, then \(J_{\omega (A+B)}\) has full domain by [8, Proposition 3.4(ii)]; thus, the condition \(r\in {\text {ran}}\left( {\text {Id}}+\omega (A+B)\right) \) can be removed.

  (ii) The iterative scheme (13) is the Douglas–Rachford algorithm if \(\kappa =1/2\) and the Peaceman–Rachford algorithm if \(\kappa =1\). For a more general version of (13), we refer the reader to [8]; see also [6, 7].

  (iii) If the condition (12) is replaced by

    $$\begin{aligned} \theta \alpha +\sigma \ge 0 \quad \text {and}\quad \theta \beta +\tau >0, \end{aligned}$$
    (17)

    then the conclusions in Theorem 3.2(ii)–(iii) still hold, while Theorem 3.2(i) only holds for \(\kappa <1\); see also [5, Theorem 2.1(ii) and Remark 2.2(iv)].

  (iv) One can simply choose \(\theta =1\) and \(q =0\), in which case (14) reduces to

    $$\begin{aligned} J_{\gamma A_\sigma } =J_{\frac{\gamma }{1+\gamma \sigma }A}\circ \frac{1}{1+\gamma \sigma }({\text {Id}}+\gamma r_A) \quad \text {and}\quad J_{\gamma B_\tau } =J_{\frac{\gamma }{1+\gamma \tau }B}\circ \frac{1}{1+\gamma \tau }({\text {Id}}+\gamma r_B). \end{aligned}$$
    (18)
  (v) When A and B are maximally monotone, i.e., \(\alpha =\beta =0\), (12) is satisfied whenever \(\sigma >0\) and \(\tau \ge 0\). One can thus choose, for instance, \(\sigma =\tau =\frac{\theta }{2\omega }\).

  (vi) It is always possible to find \(\gamma \in {\mathbb {R}}_{++}\) satisfying even the stronger conditions \(1+\gamma \sigma >0\) and \(1+\gamma \tau >0\). In fact, these inequalities hold for every \(\gamma \in {\mathbb {R}}_{++}\) whenever \(\sigma \) and \(\tau \) are both nonnegative.

When A and B are maximally monotone, the following result gives an iterative method for computing the resolvent of the scaled sum \(\omega (A+B)\) in which each iteration relies only on evaluations of \(J_A\) and \(J_B\).

Theorem 3.4

(Resolvent of sum of two maximally monotone operators) Suppose that A and B are maximally monotone, that \(\omega >1/2\), and that \(r\in {\text {ran}}\left( {\text {Id}}+\omega (A+B)\right) \). Define

$$\begin{aligned} \bar{A}&:=\frac{2\omega }{\theta (2\omega -1)}A\circ (\theta {\text {Id}}-q) +\frac{1}{\theta (2\omega -1)}(\theta {\text {Id}}-q-r) \end{aligned}$$
(19a)
$$\begin{aligned} \text {and}~\bar{B}&:=\frac{2\omega }{\theta (2\omega -1)}B\circ (\theta {\text {Id}}-q) +\frac{1}{\theta (2\omega -1)}(\theta {\text {Id}}-q-r). \end{aligned}$$
(19b)

Let \(\kappa \in \left. \right] 0,1\left. \right] \), let \(x_0\in X\), and define the sequence \((x_n)_{n\in {\mathbb {N}}}\) by

$$\begin{aligned} \forall {n\in {\mathbb {N}}}, \quad x_{n+1} :=(1-\kappa )x_n +\kappa (2J_{\bar{B}}-{\text {Id}})\circ (2J_{\bar{A}}-{\text {Id}})x_n, \end{aligned}$$
(20)

with explicit formulas

$$\begin{aligned} J_{\bar{A}}&=\frac{1}{\theta }\left( J_A\circ \left( \Big (1-\frac{1}{2\omega }\Big )(\theta {\text {Id}}-q) +\frac{1}{2\omega }r\right) +q\right) \end{aligned}$$
(21a)
$$\begin{aligned} \text {and}~ J_{\bar{B}}&=\frac{1}{\theta }\left( J_B\circ \left( \Big (1-\frac{1}{2\omega }\Big )(\theta {\text {Id}}-q) +\frac{1}{2\omega }r\right) +q\right) . \end{aligned}$$
(21b)

Then \(J_{\omega (A+B)}(r) =J_A\left( (1-\frac{1}{2\omega })(\theta \overline{x}-q)+\frac{1}{2\omega }r\right) \) with \(\overline{x}\in {\text {Fix}}(2J_{\bar{B}}-{\text {Id}})\circ (2J_{\bar{A}}-{\text {Id}})\) and the following hold:

  (i) \(\left( J_A\left( (1-\frac{1}{2\omega })(\theta x_n-q)+\frac{1}{2\omega }r\right) \right) _{n\in {\mathbb {N}}}\) converges strongly to \(J_{\omega (A+B)}(r)\).

  (ii) If \(\kappa <1\), then \((x_n)_{n\in {\mathbb {N}}}\) converges weakly to \(\overline{x}\).

  (iii) If A is Lipschitz continuous, then the convergences in (i) and (ii) are linear.

Proof

Choosing

$$\begin{aligned} \sigma =\tau =\frac{\theta }{2\omega }>0,\quad r_A =r_B =\frac{1}{2\omega }(q+r), \quad \text {and}\quad \gamma =\frac{2\omega }{\theta (2\omega -1)} >0, \end{aligned}$$
(22)

we have that (8) is satisfied and that

$$\begin{aligned} A_\sigma= & {} A\circ (\theta {\text {Id}}-q) +\frac{1}{2\omega }(\theta {\text {Id}}-q-r)\end{aligned}$$
(23a)
$$\begin{aligned} \text {and}~B_\tau= & {} B\circ (\theta {\text {Id}}-q) +\frac{1}{2\omega }(\theta {\text {Id}}-q-r), \end{aligned}$$
(23b)

which yields \(\gamma A_\sigma =\bar{A}\) and \(\gamma B_\tau =\bar{B}\). Since \(1+\gamma \sigma =1+\gamma \theta /(2\omega ) =2\omega /(2\omega -1) =\gamma \theta \), we get (21) from (14). Now apply Theorem 3.2 with \(\alpha =\beta =0\). \(\square \)
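As an illustration (ours, under the stated sample choices), the following Python sketch runs iteration (20) with \(\theta =1\) and \(q =0\) for \(A =B =\partial |\cdot |\) on \(\mathbb {R}\), whose resolvent is soft-thresholding at level 1; since \(A+B =\partial (2|\cdot |)\), the exact value \(J_{\omega (A+B)}(r)\) is soft-thresholding of r at level \(2\omega \), which serves as a check.

```python
import numpy as np

# Illustrative run of iteration (20) with theta = 1 and q = 0 for
# A = B = the subdifferential of |.| on R, so J_A = J_B is
# soft-thresholding at level 1 and A + B = the subdifferential of 2|.|.
# Exact answer: J_{w(A+B)}(r) = soft-thresholding of r at level 2*w.
w, r, kappa = 1.0, 3.0, 0.5        # requires w > 1/2

def soft(z, lam):                  # J_{lam * subdiff |.|}
    return np.sign(z) * max(abs(z) - lam, 0.0)

def J_bar(x):                      # J_{Abar} = J_{Bbar}, cf. (21) with theta=1, q=0
    return soft((1.0 - 1.0 / (2.0 * w)) * x + r / (2.0 * w), 1.0)

x = 0.0
for _ in range(500):
    y = 2.0 * J_bar(x) - x
    x = (1.0 - kappa) * x + kappa * (2.0 * J_bar(y) - y)

print(J_bar(x), soft(r, 2.0 * w))  # both equal 1.0
```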

Having the freedom of choice, one can certainly use appropriate parameters to obtain simpler formulations. The following corollary illustrates one such instance.

Corollary 3.5

Suppose that A is maximally \(\alpha \)-monotone with \(\alpha >-1/(2\omega )\), that B is maximally \(\beta \)-monotone with \(\beta \ge -1/(2\omega )\), and that \(r\in {\text {ran}}\left( {\text {Id}}+\omega (A+B)\right) \). Let \(\eta \in {\mathbb {R}}_{++}\), \(\kappa \in \left. \right] 0,1\left. \right] \), \(x_0\in X\), and define the sequence \((x_n)_{n\in {\mathbb {N}}}\) by

$$\begin{aligned} \forall {n\in {\mathbb {N}}},\ x_{n+1}:= & {} (1-\kappa )x_n +\kappa \left( 2\eta J_{\omega B}\circ \frac{1}{2\eta }{\text {Id}}+2\eta r -{\text {Id}}\right) \nonumber \\&~~~~~~~~~~~~~~~~~~~~~ \circ \left( 2\eta J_{\omega A}\circ \frac{1}{2\eta }{\text {Id}}+2\eta r -{\text {Id}}\right) x_n. \end{aligned}$$
(24)

Then \(J_{\omega (A+B)}(r) =J_{\omega A}(\frac{1}{2\eta }\overline{x})\) with \(\overline{x}\in {\text {Fix}}\left( 2\eta J_{\omega B}\circ \frac{1}{2\eta }{\text {Id}}+2\eta r -{\text {Id}}\right) \circ \left( 2\eta J_{\omega A}\circ \frac{1}{2\eta }{\text {Id}}+2\eta r -{\text {Id}}\right) \) and the following hold:

  (i) \(\left( J_{\omega A}(\frac{1}{2\eta }x_n)\right) _{n\in {\mathbb {N}}}\) converges strongly to \(J_{\omega (A+B)}(r)\).

  (ii) If \(\kappa <1\), then \((x_n)_{n\in {\mathbb {N}}}\) converges weakly to \(\overline{x}\).

  (iii) If A is Lipschitz continuous, then the above convergences are linear.

Proof

We first see that \(\alpha +\beta >-1/\omega \). Now choose

$$\begin{aligned}&\theta =\frac{1}{\eta }>0,\quad q =r, \quad \sigma =\tau =\frac{\theta }{2\omega },\quad r_A =r_B =\frac{1}{2\omega }(q+r) =\frac{1}{\omega }r, \nonumber \\&\text {and}\quad \gamma =\frac{2\omega }{\theta } >0. \end{aligned}$$
(25)

Then (8) and (12) are satisfied, while \(\gamma \theta =2\omega \) and \(1+\gamma \sigma =1+\gamma \tau =2\). We have that

$$\begin{aligned} A_\sigma= & {} A\circ \left( \frac{1}{\eta }{\text {Id}}-r\right) +\frac{1}{2\eta \omega }{\text {Id}}-\frac{1}{\omega }r\end{aligned}$$
(26a)
$$\begin{aligned} \text {and}\quad B_\tau= & {} B\circ \left( \frac{1}{\eta }{\text {Id}}-r\right) +\frac{1}{2\eta \omega }{\text {Id}}-\frac{1}{\omega }r. \end{aligned}$$
(26b)

Noting from (14) that

$$\begin{aligned} J_{\gamma A_\sigma } =\eta \left( J_{\omega A}\circ \frac{1}{2\eta }{\text {Id}}+r\right) \quad \text {and}\quad J_{\gamma B_\tau } =\eta \left( J_{\omega B}\circ \frac{1}{2\eta }{\text {Id}}+r\right) , \end{aligned}$$
(27)

we get the conclusion due to Theorem 3.2. \(\square \)

Again, thanks to the flexibility of the parameters, our results recapture the formulation and convergence analysis of recent methods. In particular, Corollaries 3.6 and 3.7 are in the spirit of [2, Theorem 3.1] and [1, Theorem 3.2], respectively.

Corollary 3.6

Let \(\eta \in \left. \right] 0,1\left[ \right. \) and \(\gamma \in {\mathbb {R}}_{++}\). Suppose that A and B are maximally monotone and that \(r\in {\text {ran}}\left( {\text {Id}}+\frac{\gamma }{2(1-\eta )}(A+B)\right) \). Let \(\kappa \in \left. \right] 0,1 \left. \right] \), let \(x_0\in X\), and define the sequence \((x_n)_{n\in {\mathbb {N}}}\) by

$$\begin{aligned} \forall {n\in {\mathbb {N}}},\ x_{n+1}:= & {} (1-\kappa )x_n +\kappa \Big (2\eta J_{\gamma B}\circ ({\text {Id}}+r) -2\eta r -{\text {Id}}\Big )\nonumber \\&~~~~~~~~~~~~~~~~~~~~\circ \Big (2\eta J_{\gamma A}\circ ({\text {Id}}+r) -2\eta r -{\text {Id}}\Big )x_n. \end{aligned}$$
(28)

Then \(J_{\frac{\gamma }{2(1-\eta )}(A+B)}(r) =J_{\gamma A}(\overline{x}+r)\) with \(\overline{x}\in {\text {Fix}}(2\eta J_{\gamma B}({\text {Id}}+r) -2\eta r -{\text {Id}})\circ (2\eta J_{\gamma A}({\text {Id}}+r) -2\eta r -{\text {Id}})\) and the following hold:

  (i) \(\left( J_{\gamma A}(x_n+r)\right) _{n\in {\mathbb {N}}}\) converges strongly to \(J_{\frac{\gamma }{2(1-\eta )}(A+B)}(r)\).

  (ii) If \(\kappa <1\), then \((x_n)_{n\in {\mathbb {N}}}\) converges weakly to \(\overline{x}\).

  (iii) If A is Lipschitz continuous, then the above convergences are linear.

Proof

Let \(\omega =\frac{\gamma }{2(1-\eta )}\), \(\theta =\frac{1}{\eta }\), \(q =-r\), \(\sigma =\tau =\frac{\theta }{2\omega } =\frac{1-\eta }{\gamma \eta }\), and \(r_A =r_B =0\). Then (8) is satisfied,

$$\begin{aligned} A_\sigma =A\circ \left( \frac{1}{\eta }{\text {Id}}+r\right) +\frac{1-\eta }{\gamma \eta }{\text {Id}}\quad \text {and}\quad B_\tau =B\circ \left( \frac{1}{\eta }{\text {Id}}+r\right) +\frac{1-\eta }{\gamma \eta }{\text {Id}}. \end{aligned}$$
(29)

Noting that \(1+\gamma \sigma =1+\gamma \tau =1\!/\eta =\theta \), we have from (14) that

$$\begin{aligned} J_{\gamma A_\sigma } =\eta \left( J_{\gamma A}\circ ({\text {Id}}+r) -r\right) \quad \text {and}\quad J_{\gamma B_\tau } =\eta \left( J_{\gamma B}\circ ({\text {Id}}+r) -r\right) . \end{aligned}$$
(30)

Applying Theorem 3.2 with \(\alpha =\beta =0\) completes the proof. \(\square \)

Corollary 3.7

Suppose that A and B are maximally monotone and that \(A+B\) is also maximally monotone. Let \(\eta \in \left. \right] 0,1\left[ \right. \), \(\kappa \in \left. \right] 0,1 \left. \right] \), \(x_0\in X\), and define the sequence \((x_n)_{n\in {\mathbb {N}}}\) by

$$\begin{aligned} \forall {n\in {\mathbb {N}}},\ x_{n+1}:= & {} (1-\kappa )x_n +\kappa \Big (2\eta J_B +2(1-\eta )r -{\text {Id}}\Big )\nonumber \\&~~~~~~~~~~~~~~~~~~~~\circ \Big (2\eta J_A +2(1-\eta )r -{\text {Id}}\Big )x_n. \end{aligned}$$
(31)

Then \(J_{\frac{1}{2(1-\eta )}(A+B)}(r) =J_A(\overline{x})\) with \(\overline{x}\in {\text {Fix}}(2\eta J_B +2(1-\eta )r -{\text {Id}})\circ (2\eta J_A +2(1-\eta )r -{\text {Id}})\) and the following hold:

  (i) \((J_A(x_n))_{n\in {\mathbb {N}}}\) converges strongly to \(J_{\frac{1}{2(1-\eta )}(A+B)}(r)\).

  (ii) If \(\kappa <1\), then \((x_n)_{n\in {\mathbb {N}}}\) converges weakly to \(\overline{x}\).

  (iii) If A is Lipschitz continuous, then the above convergences are linear.

Proof

Apply Theorem 3.4 with \(\omega =\frac{1}{2(1-\eta )}\), \(\theta =\frac{1}{\eta }\), and \(q =\frac{1-\eta }{\eta }r =\frac{1}{2\omega -1}r\) and note that \(J_{\frac{1}{2(1-\eta )}(A+B)}\) has full domain due to Remark 3.3(i). \(\square \)

4 Applications

In this section, we provide transparent applications of our results to computing the proximity operator of the sum of two weakly convex functions and to finding the closest point in the intersection of two closed convex sets.

We recall that a function \(f:X\rightarrow ]-\infty ,+\infty ]\) is proper if \({\text {dom}}f :=\{{x\in X}~\big |~{f(x) <+\infty }\}\ne \varnothing \) and lower semicontinuous if \(\forall x\in {\text {dom}}f\), \(f(x)\le \liminf _{z\rightarrow x} f(z)\). The function f is said to be \(\alpha \)-convex for some \(\alpha \in \mathbb {R}\) if \(\forall x,y\in {\text {dom}}f\), \(\forall \kappa \in \left. \right] 0,1\left[ \right. \),

$$\begin{aligned} f((1-\kappa ) x+\kappa y) +\frac{\alpha }{2}\kappa (1-\kappa )\Vert x-y\Vert ^2\le (1-\kappa )f(x)+\kappa f(y). \end{aligned}$$
(32)

We say that f is convex if \(\alpha =0\), strongly convex if \(\alpha >0\), and weakly convex if \(\alpha <0\).

Let \(f:X\rightarrow ]-\infty ,+\infty ]\) be proper. The Fréchet subdifferential of f at x is given by

$$\begin{aligned} \widehat{\partial }f(x) :=\left\{ {u\in X}~\Big |~{\liminf _{z\rightarrow x}\frac{f(z)-f(x)-\left\langle {u},{z-x} \right\rangle }{\Vert z-x\Vert }\ge 0}\right\} . \end{aligned}$$
(33)

It is known that if f is differentiable at x, then \(\widehat{\partial }f(x) =\{\nabla f(x)\}\), and that if f is convex, then the Fréchet subdifferential coincides with the convex subdifferential, i.e.,

$$\begin{aligned} \widehat{\partial }f(x) =\partial f(x) :=\{{u\in X}~\big |~{\forall z\in X,\ f(z)-f(x)\ge \left\langle {u},{z-x} \right\rangle }\}. \end{aligned}$$
(34)

The proximity operator of f with parameter \(\gamma \in {\mathbb {R}}_{++}\) is the mapping \({\text {Prox}}_{\gamma f}:X\rightrightarrows X\) defined by

$$\begin{aligned} \forall x\in X, \quad {\text {Prox}}_{\gamma f}(x) := \mathop {{\text {argmin}}}\limits _{z\in X}\left( f(z)+\frac{1}{2\gamma }\Vert z-x\Vert ^2\right) . \end{aligned}$$
(35)

Given a nonempty closed subset C of X, the indicator function \(\iota _C\) of C is defined by \(\iota _C(x) =0\) if \(x\in C\) and \(\iota _C(x) =+\infty \) if \(x\notin C\). It is clear that \({\text {Prox}}_{\gamma \iota _C} =P_C\), where \(P_C:X\rightrightarrows C\) is the projector onto C given by

$$\begin{aligned} \forall x\in X,\quad P_Cx :=\mathop {{\text {argmin}}}\limits _{c\in C} \Vert x-c\Vert . \end{aligned}$$
(36)

If C is convex, then the normal cone to C is the operator \(N_C:X\rightrightarrows X\) defined by

$$\begin{aligned} \forall x\in X, \quad N_C(x) :={\left\{ \begin{array}{ll} \{{u\in X}~\big |~{\forall c\in C,\ \left\langle {u},{c-x} \right\rangle \le 0}\} &{} \quad \text {if}~ x\in C,\\ \varnothing &{} \quad \text {otherwise}. \end{array}\right. } \end{aligned}$$
(37)
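For intuition, the following elementary Python check (ours) confirms the identity \(J_{N_C} =P_C\) for \(C =[0,1]\subset \mathbb {R}\): the projection \(x =P_C(y)\) satisfies \(y\in x+N_C(x)\), which characterizes \(x =J_{N_C}(y)\).

```python
import numpy as np

# Elementary check that J_{N_C} = P_C for C = [0, 1] in R: the projection
# x = P_C(y) satisfies y in x + N_C(x), which characterizes x = J_{N_C}(y).
def P_C(y):
    return min(max(y, 0.0), 1.0)

def in_normal_cone(u, x):          # test u in N_C(x) for C = [0, 1]
    if x == 0.0:
        return u <= 0.0
    if x == 1.0:
        return u >= 0.0
    return u == 0.0                # x lies in the interior of C

for y in np.linspace(-2.0, 3.0, 11):
    x = P_C(y)
    assert in_normal_cone(y - x, x)
print("J_{N_C} = P_C verified at sample points")
```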

Lemma 4.1

Let \(f:X\rightarrow ]-\infty ,+\infty ]\) and \(g:X\rightarrow ]-\infty ,+\infty ]\) be proper, lower semicontinuous, and respectively \(\alpha \)- and \(\beta \)-convex, let \(\omega \in {\mathbb {R}}_{++}\), and let \(r\in {\text {ran}}({\text {Id}}+\omega (\widehat{\partial }f+\widehat{\partial }g))\). Suppose that \(\alpha +\beta >-1/\omega \). Then

$$\begin{aligned} J_{\omega (\widehat{\partial }f+\widehat{\partial }g)}(r)= J_{\omega \widehat{\partial }(f+g)}(r) ={\text {Prox}}_{\omega (f+g)}(r). \end{aligned}$$
(38)

Consequently, if C and D are closed convex subsets of X with \(C\cap D\ne \varnothing \) and \(r\in {\text {ran}}({\text {Id}}+N_C+N_D)\), then

$$\begin{aligned} J_{N_C+N_D}(r) =P_{C\cap D}(r). \end{aligned}$$
(39)

Proof

On the one hand, noting that \({\text {ran}}({\text {Id}}+\omega (\widehat{\partial }f+\widehat{\partial }g)) ={\text {dom}}J_{\omega (\widehat{\partial }f+\widehat{\partial }g)}\) and that \(\widehat{\partial }f+\widehat{\partial }g\subseteq \widehat{\partial }(f+g)\), we have for any \(p\in X\) that

$$\begin{aligned} p\in J_{\omega (\widehat{\partial }f+\widehat{\partial }g)}(r)&\iff r\in p+\omega (\widehat{\partial }f+\widehat{\partial }g)(p) \end{aligned}$$
(40a)
$$\begin{aligned}&\implies r\in p+\omega \widehat{\partial }(f+g)(p) \end{aligned}$$
(40b)
$$\begin{aligned}&\iff p\in J_{\omega \widehat{\partial }(f+g)}(r). \end{aligned}$$
(40c)

On the other hand, it is straightforward from the definition that \(f+g\) is \((\alpha +\beta )\)-convex. Since \(1+\omega (\alpha +\beta ) >0\), we learn from [8, Lemma 5.2] that \({\text {Prox}}_{\omega (f+g)} =J_{\omega \widehat{\partial }(f+g)}\) is single-valued and has full domain. Combining with (40) implies (38).

Now, since C and D are closed convex sets, \(\iota _C\) and \(\iota _D\) are convex functions, and therefore \(\widehat{\partial }\iota _C =\partial \iota _C =N_C\) and \(\widehat{\partial }\iota _D =\partial \iota _D =N_D\) (see, e.g., [4, Example 16.13]). Applying (38) with \((f,g) =(\iota _C,\iota _D)\) and \(\omega =1\) yields

$$\begin{aligned} J_{N_C+N_D}(r) ={\text {Prox}}_{\iota _C+\iota _D}(r) ={\text {Prox}}_{\iota _{C\cap D}}(r) =P_{C\cap D}(r), \end{aligned}$$
(41)

which completes the proof. \(\square \)

We now derive some applications of Theorem 3.2. In what follows, \(\theta \in {\mathbb {R}}_{++}\) and \(q\in X\).

Corollary 4.2

(Proximity operator of sum of \(\alpha \)- and \(\beta \)-convex functions) Let \(f:X\rightarrow ]-\infty ,+\infty ]\) and \(g:X\rightarrow ]-\infty ,+\infty ]\) be proper, lower semicontinuous, and respectively \(\alpha \)- and \(\beta \)-convex, let \(\omega \in {\mathbb {R}}_{++}\), and let \(r\in {\text {ran}}({\text {Id}}+\omega (\widehat{\partial }f+\widehat{\partial }g))\). Suppose that \(\alpha +\beta >-1/\omega \) and let \((\sigma ,\tau )\in \mathbb {R}^2\) and \((r_f, r_g)\in X^2\) be such that \(\sigma +\tau =\theta /\omega \), \(r_f+r_g =(q+r)/\omega \),

$$\begin{aligned} \theta \alpha +\sigma >0 \quad \text {and}\quad \theta \beta +\tau \ge 0. \end{aligned}$$
(42)

Let \(\gamma \in {\mathbb {R}}_{++}\) be such that \(1+\gamma \sigma >0\) and \(1+\gamma \tau >0\). Given any \(\kappa \in \left. \right] 0,1 \left. \right] \) and \(x_0\in X\), define the sequence \((x_n)_{n\in {\mathbb {N}}}\) by

$$\begin{aligned} \forall {n\in {\mathbb {N}}}, \quad x_{n+1} :=(1-\kappa )x_n +\kappa R_gR_f x_n, \end{aligned}$$
(43)

where

$$\begin{aligned} R_f&:=\frac{2}{\theta }\left( {\text {Prox}}_{\frac{\gamma \theta }{1+\gamma \sigma }f}\circ \left( \frac{\theta }{1+\gamma \sigma }{\text {Id}}+\frac{\gamma \theta }{1+\gamma \sigma }r_f-q\right) +q\right) -{\text {Id}}\end{aligned}$$
(44a)
$$\begin{aligned} ~\text {and}~ R_g&:=\frac{2}{\theta }\left( {\text {Prox}}_{\frac{\gamma \theta }{1+\gamma \tau }g}\circ \left( \frac{\theta }{1+\gamma \tau }{\text {Id}}+\frac{\gamma \theta }{1+\gamma \tau }r_g-q\right) +q\right) -{\text {Id}}. \end{aligned}$$
(44b)

Then \({\text {Prox}}_{\omega (f+g)}(r) ={\text {Prox}}_{\frac{\gamma \theta }{1+\gamma \sigma }f}\left( \frac{\theta }{1+\gamma \sigma }\overline{x}+\frac{\gamma \theta }{1+\gamma \sigma }r_f-q\right) \) with \(\overline{x}\in {\text {Fix}}R_gR_f\) and the following hold:

  (i) \(\left( {\text {Prox}}_{\frac{\gamma \theta }{1+\gamma \sigma }f}\left( \frac{\theta }{1+\gamma \sigma }x_n+\frac{\gamma \theta }{1+\gamma \sigma }r_f-q\right) \right) _{n\in {\mathbb {N}}}\) converges strongly to \({\text {Prox}}_{\omega (f+g)}(r)\).

  (ii) If \(\kappa <1\), then \((x_n)_{n\in {\mathbb {N}}}\) converges weakly to \(\overline{x}\).

  (iii) If f is differentiable with Lipschitz continuous gradient, then the above convergences are linear.

Proof

By assumption, [8, Lemma 5.2] implies that \(\widehat{\partial }f\) and \(\widehat{\partial }g\) are respectively maximally \(\alpha \)- and \(\beta \)-monotone, and that

$$\begin{aligned} J_{\frac{\gamma \theta }{1+\gamma \sigma }\widehat{\partial }f} ={\text {Prox}}_{\frac{\gamma \theta }{1+\gamma \sigma }f} \quad \text {and}\quad J_{\frac{\gamma \theta }{1+\gamma \tau }\widehat{\partial }g} ={\text {Prox}}_{\frac{\gamma \theta }{1+\gamma \tau }g}. \end{aligned}$$
(45)

Next, Lemma 4.1 implies that \(J_{\omega (\widehat{\partial }f+\widehat{\partial }g)}(r) ={\text {Prox}}_{\omega (f+g)}(r)\). The conclusion then follows from Theorem 3.2 applied to \((A,B) =(\widehat{\partial }f,\widehat{\partial }g)\). \(\square \)
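To illustrate Corollary 4.2, here is a Python sketch (ours, with sample parameters) for \(f =|\cdot |\) (convex) and \(g =-\frac{c}{2}(\cdot )^2\) (weakly convex with \(\beta =-c\)) on \(\mathbb {R}\), taking \(\theta =1\) and \(q =0\). A direct calculation gives the exact value \({\text {Prox}}_{\omega (f+g)}(r) ={\text {sign}}(r)\max \{|r|-\omega ,0\}/(1-\omega c)\) whenever \(1-\omega c >0\), which serves as a check.

```python
import numpy as np

# Illustrative run of iteration (43) with theta = 1 and q = 0 for
# f = |.| (alpha = 0) and g = -(c/2)(.)^2 (beta = -c < 0) on R.
# Exact answer: Prox_{w(f+g)}(r) = sign(r)*max(|r| - w, 0)/(1 - w*c).
w, c, r = 1.0, 0.5, 3.0            # alpha + beta = -c > -1/w
sigma, tau = 0.5, 0.5              # sigma + tau = theta/w, theta*beta + tau = 0
rf = rg = r / 2.0                  # rf + rg = (q + r)/w = r/w
gamma, kappa = 1.0, 0.5

def prox_f(lam, z):                # Prox of lam*|.|: soft-thresholding
    return np.sign(z) * max(abs(z) - lam, 0.0)

def prox_g(lam, z):                # Prox of lam*g; well defined since lam*c < 1
    return z / (1.0 - lam * c)

def R(prox, s, rs, x):             # reflected proximity operator, cf. (44)
    lam = gamma / (1.0 + gamma * s)
    return 2.0 * prox(lam, (x + gamma * rs) / (1.0 + gamma * s)) - x

x = 0.0
for _ in range(300):
    x = (1.0 - kappa) * x + kappa * R(prox_g, tau, rg, R(prox_f, sigma, rf, x))

shadow = prox_f(gamma / (1.0 + gamma * sigma),
                (x + gamma * rf) / (1.0 + gamma * sigma))
print(shadow, np.sign(r) * max(abs(r) - w, 0.0) / (1.0 - w * c))  # both 4.0
```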

Corollary 4.3

(Projection onto intersection of two closed convex sets) Let C and D be closed convex subsets of X with \(C\cap D\ne \varnothing \) and let \(r\in {\text {ran}}({\text {Id}}+N_C+N_D)\). Let \(\sigma \in {\mathbb {R}}_{++}\), \(\tau \in {\mathbb {R}}_+\), and \((r_C,r_D)\in X^2\) with \(r_C+r_D=(\sigma +\tau )(q+r)/\theta \). Let also \(\gamma \in {\mathbb {R}}_{++}\), \(\kappa \in \left. \right] 0,1 \left. \right] \), \(x_0\in X\), and define the sequence \((x_n)_{n\in {\mathbb {N}}}\) by

$$\begin{aligned} \forall {n\in {\mathbb {N}}}, \quad x_{n+1} :=(1-\kappa )x_n +\kappa \bar{R}_D\bar{R}_C x_n, \end{aligned}$$
(46)

where

$$\begin{aligned} \bar{R}_C&:=\frac{2}{\theta }\left( P_C\circ \left( \frac{\theta }{1+\gamma \sigma }{\text {Id}}+\frac{\gamma \theta }{1+\gamma \sigma }r_C-q\right) +q\right) -{\text {Id}}\end{aligned}$$
(47a)
$$\begin{aligned} \text {and}~ \bar{R}_D&:=\frac{2}{\theta }\left( P_D\circ \left( \frac{\theta }{1+\gamma \tau }{\text {Id}}+\frac{\gamma \theta }{1+\gamma \tau }r_D-q\right) +q\right) -{\text {Id}}. \end{aligned}$$
(47b)

Then \(\left( P_C\left( \frac{\theta }{1+\gamma \sigma }x_n+\frac{\gamma \theta }{1+\gamma \sigma }r_C-q\right) \right) _{n\in {\mathbb {N}}}\) converges strongly to \(P_{C\cap D}(r) =P_C\left( \frac{\theta }{1+\gamma \sigma }\overline{x}+\frac{\gamma \theta }{1+\gamma \sigma }r_C-q\right) \) with \(\overline{x}\in {\text {Fix}}\bar{R}_D\bar{R}_C\). Furthermore, if \(\kappa <1\), then \((x_n)_{n\in {\mathbb {N}}}\) converges weakly to \(\overline{x}\).

Proof

We first derive from [4, Example 20.26] that \(N_C\) and \(N_D\) are maximally monotone and from [4, Example 23.4] that

$$\begin{aligned} J_{\frac{\gamma \theta }{1+\gamma \sigma }N_C} =J_{N_C} =P_C \quad \text {and}\quad J_{\frac{\gamma \theta }{1+\gamma \tau }N_D} =J_{N_D} =P_D. \end{aligned}$$
(48)

Setting \(\omega :=\theta /(\sigma +\tau ) >0\), we note that \(r\in {\text {ran}}({\text {Id}}+N_C+N_D) ={\text {ran}}\big ({\text {Id}}+\omega (N_C+N_D)\big )\) and from Lemma 4.1 that \(J_{\omega (N_C+N_D)}(r) =J_{N_C+N_D}(r) =P_{C\cap D}(r)\). Now apply Theorem 3.2 to \((A,B) =(N_C,N_D)\). \(\square \)

As in the proof of Corollary 3.6, if we choose \(\theta =\frac{1}{\eta }\) (with \(\eta \in \left. \right] 0,1\left[ \right. \)), \(q =-r\), \(\sigma =\tau =\frac{1-\eta }{\gamma \eta }\), and \(r_C =r_D =0\), then Corollary 4.3 reduces to [2, Corollary 3.1] where (46) reads as

$$\begin{aligned} \forall {n\in {\mathbb {N}}},\ x_{n+1}:= & {} (1-\kappa )x_n +\kappa \Big (2\eta P_D\circ ({\text {Id}}+r) -2\eta r -{\text {Id}}\Big )\nonumber \\&~~~~~~~~~~~~~~~~~~~ \circ \Big (2\eta P_C\circ ({\text {Id}}+r) -2\eta r -{\text {Id}}\Big )x_n. \end{aligned}$$
(49)

Similarly, if \(\theta =\frac{1}{\eta }\) (with \(\eta \in {\mathbb {R}}_{++}\)), \(q =r\), \(\sigma =\tau =\frac{1}{\gamma }\), and \(r_C =r_D =\frac{2\eta }{\gamma }r\), then (46) simplifies to

$$\begin{aligned} \forall {n\in {\mathbb {N}}},\ x_{n+1}:= & {} (1-\kappa )x_n +\kappa \left( 2\eta P_D\circ \frac{1}{2\eta }{\text {Id}}+2\eta r -{\text {Id}}\right) \nonumber \\&~~~~~~~~~~~~~~~~~~~~ \circ \left( 2\eta P_C\circ \frac{1}{2\eta }{\text {Id}}+2\eta r -{\text {Id}}\right) x_n. \end{aligned}$$
(50)
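We close with a Python sketch (ours) of the best approximation scheme (49) in \(\mathbb {R}^2\), where C is the closed unit ball and D is the halfspace \(\{x~|~x_1\ge 0.5\}\); for \(r =(-1,2)\) the exact projection is \(P_{C\cap D}(r) =(0.5,\sqrt{0.75})\), which the shadow sequence \(P_C(x_n+r)\) approaches. The values of \(\eta \), \(\kappa \), and the iteration count are sample choices.

```python
import numpy as np

# Illustrative run of scheme (49) in R^2 for C the closed unit ball and
# D = {x : x[0] >= 0.5}; both sets and all parameter values are sample
# choices.  For r = (-1, 2), the exact projection onto C ∩ D is the
# "corner" point (0.5, sqrt(0.75)) ~ (0.5, 0.8660).
eta, kappa = 0.8, 0.5
r = np.array([-1.0, 2.0])

def P_C(x):                        # projector onto the closed unit ball
    n = np.linalg.norm(x)
    return x / n if n > 1.0 else x

def P_D(x):                        # projector onto the halfspace x[0] >= 0.5
    return np.array([max(x[0], 0.5), x[1]])

def refl(P, x):                    # x -> 2*eta*P(x + r) - 2*eta*r - x, cf. (49)
    return 2.0 * eta * (P(x + r) - r) - x

x = np.zeros(2)
for _ in range(5000):
    x = (1.0 - kappa) * x + kappa * refl(P_D, refl(P_C, x))

print(P_C(x + r))                  # shadow sequence limit ~ [0.5, 0.8660]
```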