1 Introduction

Throughout this paper, \({\mathcal {H}}\) is a real Hilbert space endowed with the scalar product \(\langle .,.\rangle \) and the corresponding norm \(\Vert .\Vert \). Given a maximally monotone operator \(A:{\mathcal {H}}\rightarrow 2^{\mathcal {H}}\), we study the convergence properties of a general class of inertial proximal-based algorithms that aim to solve the inclusion \(Ax \ni 0\), whose solution set is denoted by \(\text{ zer }A\). Given initial data \(x_0\), \(x_1\in {\mathcal {H}}\), we consider the Relaxed Inertial Proximal Algorithm, (RIPA) for short, defined for \(k \ge 1\) by

$$\begin{aligned} \text{(RIPA) } \quad \left\{ \begin{array}{l} y_k=x_k+\alpha _k(x_k-x_{k-1})\\ x_{k+1}=(1-\rho _k)y_k + \rho _k J_{\mu _k A}(y_k). \end{array} \right. \end{aligned}$$

In the above formula, \(J_{\mu A} = \left( I + \mu A \right) ^{-1} \) is the resolvent of A with index \(\mu >0\), where I is the identity operator. It plays a central role in the analysis of (RIPA), along with the Yosida regularization of A with parameter \(\mu >0\), which is defined by \( A_{\mu } = \frac{1}{\mu } \left( I- J_{\mu A} \right) ,\) (see “Appendix A” for their main properties). We assume the following set of hypotheses

\((\mathrm{H})\): \(A:{\mathcal {H}}\rightarrow 2^{\mathcal {H}}\) is a maximally monotone operator, \((\alpha _k)\) is a sequence of nonnegative real numbers, and \((\mu _k)\), \((\rho _k)\) are sequences of positive real numbers.

If \(\rho _k=1\) for every \(k\ge 1\), then algorithm (RIPA) reduces to the Inertial Proximal Algorithm

$$\begin{aligned} \text{(IPA) }\qquad \left\{ \begin{array}{l} y_k=x_k+\alpha _k(x_k-x_{k-1})\\ x_{k+1}=J_{\mu _k A}(y_k). \end{array} \right. \end{aligned}$$

On the other hand, if \(\alpha _k=0\) for every \(k\ge 1\), then algorithm (RIPA) boils down to the Relaxed Proximal Algorithm

$$\begin{aligned} \text{(RPA) }\qquad x_{k+1}=(1-\rho _k)x_k+\rho _k J_{\mu _k A}(x_k).\end{aligned}$$
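Before reviewing the literature, here is a minimal numerical sketch of the general (RIPA) iteration. The model operator \(A=\nabla \Psi \) with \(\Psi (x)=\frac{1}{2}\Vert x\Vert ^2\) (for which \(J_{\mu A}(y)=y/(1+\mu )\) in closed form) and the particular parameter choices are purely illustrative assumptions, not part of the analysis below.

```python
import numpy as np

def ripa(J, x0, x1, alpha, mu, rho, n_iter):
    """Run the (RIPA) iteration: y_k = x_k + alpha_k (x_k - x_{k-1}),
    x_{k+1} = (1 - rho_k) y_k + rho_k J_{mu_k A}(y_k), with J(mu, y) = J_{mu A}(y)."""
    x_prev, x = np.asarray(x0, dtype=float), np.asarray(x1, dtype=float)
    for k in range(1, n_iter + 1):
        y = x + alpha(k) * (x - x_prev)                          # inertial extrapolation
        x_prev, x = x, (1 - rho(k)) * y + rho(k) * J(mu(k), y)   # relaxed proximal step
    return x

# Illustrative model: A = grad of (1/2)||x||^2, so that J_{mu A}(y) = y / (1 + mu).
J_quad = lambda mu, y: y / (1.0 + mu)

x_final = ripa(J_quad, x0=[1.0, -2.0], x1=[1.0, -2.0],
               alpha=lambda k: 1.0 - 3.0 / (k + 3.0),  # Nesterov-type extrapolation (assumption)
               mu=lambda k: 1.0, rho=lambda k: 1.0, n_iter=200)
print(x_final)   # approaches the unique zero of A, namely the origin
```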

For classical references on relaxed proximal algorithms, see for example [10, 18, 19]. An inertial version of such algorithms was first studied in [1], see also [22]. Recent studies showed the importance of the case \(\alpha _k \rightarrow ~1\). When \(A= \partial \Psi \), where \(\Psi : {\mathcal {H}}\rightarrow \mathbb {R}\cup \{+\infty \}\) is a proper closed convex function, this is a key property for obtaining fast convergent methods, in line with the Nesterov and FISTA methods [9, 24]. The case of inertial methods for general maximally monotone operators remains largely to be explored. Inertial proximal splitting methods were recently considered in [12, 17, 21, 23, 27]. One important application concerns the design of inertial ADMM algorithms for linearly constrained minimization problems. Recently, a new approach was delineated by Attouch and Peypouquet [7] based on the Yosida regularization of A with a varying parameter. In this paper, we will provide a unifying approach to these problems that extends [7] and opens new perspectives. Our study is the natural extension, in the algorithmic case, of the convergence results obtained by Attouch–Cabot [5] in the case of continuous dynamical systems.

1.1 Relaxed proximal algorithms

The classical proximal algorithm is obtained by the implicit discretization of the differential inclusion

$$\begin{aligned} \dot{x}(t) + A (x(t)) \ni 0. \end{aligned}$$
(1)

By contrast, the relaxed proximal algorithm (RPA) comes naturally by discretizing the regularized differential equation

$$\begin{aligned} \dot{x}(t) + A_{\lambda } (x(t)) =0, \end{aligned}$$
(2)

where \(A_{\lambda }\) is the Yosida approximation of A of index \(\lambda >0\). Since \(A_{\lambda }\) is Lipschitz continuous, (2) is well-posed, owing to the Cauchy–Lipschitz theorem. Implicit time discretization of (2), with step size \(h_k>0\), gives the relaxed proximal algorithm (the detailed computation is carried out below in the inertial case)

$$\begin{aligned} x_{k+1} = \left( 1- \rho _k\right) x_k + \rho _k J_{\mu _k A}(x_k), \end{aligned}$$

with \(\mu _k = \lambda + h_k\) and \(\rho _k = \frac{h_k}{\lambda +h_k} \). System (2) has many advantages over the differential inclusion (1). Note that \(\text{ zer } A = \text{ zer } A_{\lambda }\), so the two systems have the same equilibria. Since \(A_{\lambda }\) is cocoercive, the trajectories of (2) converge weakly to equilibria, in sharp contrast with the semigroup generated by A, for which, in general, only weak ergodic convergence holds. One can therefore expect the associated algorithms to inherit these favorable properties.

These aspects are even more striking when one considers inertial dynamics. The damped inertial dynamics \( \ddot{x}(t) + \gamma (t)\dot{x}(t) + A (x(t)) \ni 0\) is ill-posed, and no general convergence theory is available for this system. By contrast, the regularized dynamics

$$\begin{aligned} \ddot{x}(t) + \gamma (t)\dot{x}(t) + A_{\lambda (t)}(x(t)) = 0 \end{aligned}$$
(3)

is well-posed. Some first results concerning the adjustments of the parameters \(\gamma (t)\) and \(\lambda (t)\), in order to have good asymptotic convergence properties have been obtained in [2, 6] and [7]. A closely connected dynamical system with variable damping and step sizes has been considered in [11].

Let us proceed with the implicit temporal discretization of (3). Indeed, implicit discretizations tend to follow the continuous-time trajectories more closely than explicit discretizations. Note that, due to the Yosida regularization, the explicit discretization of (3) has the same numerical complexity as the implicit discretization (each requires one resolvent computation per iteration). Taking a time step \(h_k>0\), and setting \(t_k= \sum \nolimits _{i=1}^k h_i\), \(x_k = x(t_k)\), \(\lambda _k = \lambda (t_k)\), \(\gamma _k =\gamma (t_k)\), an implicit finite-difference scheme for (3) with centered second-order variation gives

$$\begin{aligned} \frac{1}{h_k^2}(x_{k+1} -2 x_{k} + x_{k-1} ) +\frac{\gamma _k}{h_k}( x_{k} - x_{k-1}) + A_{\lambda _k} (x_{k+1}) =0. \end{aligned}$$
(4)

After expanding (4), we obtain \(x_{k+1} + h_k^2 A_{\lambda _k} (x_{k+1}) = x_{k} + \left( 1- {\gamma _k}{h_k}\right) ( x_{k} - x_{k-1}).\) Setting \(s_k=h_k^2\), we have

$$\begin{aligned} x_{k+1} = J_{s_k A_{\lambda _k}} \left( x_{k} + \left( 1- {\gamma _k}{h_k}\right) ( x_{k} - x_{k-1})\right) , \end{aligned}$$

where \(J_{s_k A_{\lambda _k}}\) is the resolvent of index \(s_k>0\) of the maximally monotone operator \(A_{\lambda _k}\). Setting \(\alpha _k = 1- {\gamma _k}{h_k}\), this gives the following algorithm

$$\begin{aligned} \left\{ \begin{array}{l} y_k = x_{k} + \alpha _k ( x_{k} - x_{k-1}) \\ x_{k+1} = J_{s_k A_{\lambda _k}}(y_k). \end{array}\right. \end{aligned}$$
(5)

The resolvent equation, \(\left( A_{\lambda }\right) _s = A_{\lambda + s}\), gives \( J_{s A_{\lambda }} = \frac{\lambda }{\lambda +s}I + \frac{s}{\lambda +s}J_{(\lambda +s) A}. \) Hence, (5) is equivalent to

$$\begin{aligned} \left\{ \begin{array}{l} y_k= \displaystyle {x_{k} + \alpha _k( x_{k} - x_{k-1})} \\ x_{k+1} = \displaystyle {\frac{\lambda _k}{\lambda _k +s_k}y_k + \frac{s_k}{\lambda _k +s_k}J_{(\lambda _k +s_k) A}(y_k)}. \end{array}\right. \end{aligned}$$

This is precisely algorithm (RIPA), with \(\mu _k= \lambda _k +s_k \) and \(\rho _k= \frac{s_k}{\lambda _k +s_k} \).
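The resolvent identity used in the last step is easy to verify numerically. In the sketch below, the one-dimensional operator \(A=\partial |\cdot |\) (whose resolvent \(J_{\mu A}\) is the soft-thresholding operator) and the values of \(\lambda \) and s are illustrative assumptions.

```python
import numpy as np

soft = lambda y, mu: np.sign(y) * np.maximum(np.abs(y) - mu, 0.0)   # J_{mu A} for A = d|.|
yosida = lambda x, lam: (x - soft(x, lam)) / lam                    # Yosida regularization A_lambda

lam, s = 0.7, 1.3
y = np.array([-3.0, -0.4, 0.1, 0.9, 2.5])

# Right-hand side of the identity J_{s A_lambda} = lam/(lam+s) I + s/(lam+s) J_{(lam+s) A}.
x = lam / (lam + s) * y + s / (lam + s) * soft(y, lam + s)

# x must solve x + s A_lambda(x) = y, i.e. x = J_{s A_lambda}(y); the residual is ~ 1e-16.
print(np.max(np.abs(x + s * yosida(x, lam) - y)))
```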

1.2 Geometrical aspects of (RIPA)

(RIPA) has a simple geometrical interpretation. This is illustrated in Fig. 1, where, as a distinctive feature of the proximal method, the closed affine half-space \( \left\{ z\in {\mathcal {H}}: \left\langle y_k- J_{\mu _k A}(y_k), z- J_{\mu _k A}(y_k) \right\rangle \le 0 \right\} \) separates \(y_k\) from \( \text{ zer }A\).

Fig. 1  Algorithm (RIPA)

In Fig. 1, the parameter \(\rho _k\) has been taken between zero and one, so that \(x_{k+1}\) belongs to the line segment \([y_k, J_{\mu _k A}(y_k)]\). We will see that, under certain conditions, the parameter \(\rho _k\) can in fact vary in the interval [0, 2]. The case \(\rho _k >1\) is particularly interesting, since it combines the inertial effect with over-relaxation. These combined aspects have been little studied until now; see [20] for some recent results in the case of a fixed resolvent operator \(J_{\mu A}\).

For a maximally monotone operator A such that \(\text{ zer }A\ne \emptyset \), we have \(\lim _{\mu \rightarrow +\infty }J_{\mu A}(x)= \text{ proj }_{{\mathrm{zer}}A} x\), see [8, Theorem 23.47 \(\mathrm{{(i)}}\)]. Thus, in the case \(\mu _k \rightarrow +\infty \) (which, we will see, is important for obtaining fast methods) we have \( J_{\mu _k A}(y_k) \sim \text{ proj }_{{\mathrm{zer}}A} y_k\), as shown in Fig. 1.

1.3 Presentation of the results

The case \(A=0\) already reveals some crucial notions. In this case, \(J_{\mu _k A}=I\) and algorithm (RIPA) becomes \(x_{k+1}=x_k+\alpha _k(x_k-x_{k-1}).\) It ensues that for every \(k\ge 1\),

$$\begin{aligned}x_{k+1}-x_k=\left( \prod _{j=1}^k \alpha _j\right) (x_1-x_0) \, \text{ which } \text{ gives } \, x_k=x_1+\left( \sum _{l=1}^{k-1}\prod _{j=1}^l \alpha _j\right) (x_1-x_0). \end{aligned}$$

Hence, \((x_k)\) converges if and only if \(x_1=x_0\) or if the following condition is satisfied   \(\sum \nolimits _{l=1}^{+\infty }\prod _{j=1}^l \alpha _j <+\infty .\) When \(A=\partial \Psi \) is the subdifferential of a convex function \(\Psi :{\mathcal {H}}\rightarrow \mathbb {R}\cup \{+\infty \}\) with a continuum of minima, this condition has been identified as a necessary condition for the convergence of the iterates of (IPA), see [16]. From now on, we assume that

$$\begin{aligned} (K_0)\qquad \sum _{l=i}^{+\infty }\left( \prod _{j=i}^l \alpha _j\right) <+\infty \quad \text{ for } \text{ every } i\ge 1, \end{aligned}$$

and we define the sequence \((t_i)\) by

$$\begin{aligned} t_{i}=1+\sum _{l=i}^{+\infty }\left( \prod _{j=i}^l \alpha _j\right) . \end{aligned}$$
(6)

The sequence \((t_i)\) plays a crucial role in the study of the asymptotic behavior of the iterates of (IPA). This was recently highlighted by Attouch and Cabot [4] in the potential case. However, most of the energy arguments used in [4] are not available in the general framework of maximally monotone operators, so we must develop different techniques.

As a model example of our results, let us give the following shortened version of Theorem 2.6.

Theorem 1.1

Under (H), assume that \({\mathrm{zer}}A\ne \emptyset \). Suppose that \(\alpha _k\in [0,1]\) and \(\rho _k\in ]0,2]\) for every \(k\ge 1\). Under \((K_0)\), let \((t_i)\) be the sequence defined by (6). Assume that there exists \(\varepsilon \in ]0,1[\) such that for k large enough,

$$\begin{aligned} (K_1)\qquad \alpha _k\, t_{k+1}\left( 1+\alpha _k+\left[ \frac{2-\rho _k}{\rho _k}(1-\alpha _k)-\frac{2-\rho _{k-1}}{\rho _{k-1}}(1-\alpha _{k-1})\right] _+\right) \le (1-\varepsilon )\,\frac{2-\rho _{k-1}}{\rho _{k-1}}(1-\alpha _{k-1}). \end{aligned}$$

Then for any sequence \((x_k)\) generated by (RIPA), we have

  1. (i)

    \(\displaystyle \sum \nolimits _{i=1}^{+\infty }\alpha _i t_{i+1}\Vert x_i-x_{i-1}\Vert ^2<+\infty \).

  2. (ii)

    \(\displaystyle \sum \nolimits _{i=1}^{+\infty }\rho _i (2-\rho _i)\, t_{i+1}\Vert \mu _i A_{\mu _i}(x_i)\Vert ^2<+\infty \).

  3. (iii)

    For any \(z\in {\mathrm{zer}}A\), \(\lim _{k\rightarrow +\infty }\Vert x_k-z\Vert \) exists, and hence \((x_k)\) is bounded.

Assume moreover that \(\limsup _{k\rightarrow +\infty } \rho _k<2\), and \(\liminf _{k\rightarrow +\infty } \rho _k>0.\) Then the following holds

  1. (iv)

    \(\lim _{k\rightarrow +\infty }\mu _k A_{\mu _k}(x_k)=0\).

  2. (v)

    If   \(\liminf _{k\rightarrow +\infty }\mu _k>0\), then there exists \(x_\infty \in {\mathrm{zer}}A\) such that \(x_k\rightharpoonup x_\infty \) weakly in \({\mathcal {H}}\) as \(k\rightarrow +\infty \).

These results are complemented in Theorem 2.14 so as to cover the case of a possibly vanishing parameter \(\rho _k\). There, the assumption \(\liminf _{k\rightarrow +\infty } \rho _k>0\) is removed and replaced with an alternative set of assumptions.

1.4 Organization of the paper

Our main convergence results are established in Sect. 2, see Theorem 2.6 and Theorem 2.14. Based on the behavior of the sequences \((\alpha _k)\), \((\mu _k)\), \((\rho _k)\), we show the weak convergence of the sequences \((x_k)\) generated by algorithm (RIPA). Thus, in the general context of maximally monotone operators acting on Hilbert spaces, we unify and extend most of the previously known results concerning the combination of proximal methods with relaxation and inertia. These results are illustrated in Sect. 3, which presents applications to special classes of sequences \((\alpha _k)\), \((\mu _k)\), \((\rho _k)\). In particular, we recover the recent result of Attouch and Peypouquet [7], based on the accelerated method of Nesterov. Finally, in Sect. 4, we provide ergodic convergence results, extending the seminal result of Brezis–Lions. The paper is supplemented by some auxiliary technical lemmas contained in the appendix.

2 Convergence results

2.1 Equivalent forms of (RIPA)

Let us give several equivalent formulations of (RIPA). Observe that

$$\begin{aligned}(1-\rho _k)y_k+\rho _k J_{\mu _k A}(y_k)=y_k-\rho _k\mu _k A_{\mu _k}(y_k).\end{aligned}$$

It ensues that (RIPA) can be equivalently rewritten as

$$\begin{aligned} \left\{ \begin{array}{l} y_k=x_k+\alpha _k(x_k-x_{k-1})\\ x_{k+1}=y_k-\rho _k\mu _k A_{\mu _k}(y_k). \end{array} \right. \end{aligned}$$
(7)

Recalling that \(\xi =A_\mu (y)\) if and only if \(\xi \in A(y-\mu \xi )\), we obtain the following equivalences

$$\begin{aligned}&-\frac{1}{\rho _k\mu _k}(x_{k+1}-y_k)=A_{\mu _k}(y_k)\\&\quad \Longleftrightarrow -\frac{1}{\rho _k\mu _k}(x_{k+1}-y_k)\in A\left( y_k+\frac{1}{\rho _k}(x_{k+1}-y_k)\right) \\&\quad \Longleftrightarrow -\frac{1}{\rho _k\mu _k}(x_{k+1}-y_k)\in A\left( x_{k+1}+\left( \frac{1}{\rho _k}-1\right) (x_{k+1}-y_k)\right) . \end{aligned}$$

This gives rise to the equivalent formulation of (RIPA)

$$\begin{aligned} \left\{ \begin{array}{l} y_k=x_k+\alpha _k(x_k-x_{k-1})\\ x_{k+1}-y_k\in -\rho _k\mu _k\, A\left( x_{k+1}+\left( \frac{1}{\rho _k}-1\right) (x_{k+1}-y_k)\right) . \end{array} \right. \end{aligned}$$
(8)

Depending on the situation, we will use one of the above-mentioned equivalent formulations.
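The following small sketch checks formulation (8) on a scalar example; the operator \(A=\partial |\cdot |\) (whose resolvent is soft-thresholding and whose subdifferential membership is easy to test) and the chosen numerical values are illustrative assumptions.

```python
import numpy as np

soft = lambda y, mu: np.sign(y) * np.maximum(np.abs(y) - mu, 0.0)   # J_{mu A} for A = d|.|

def in_subdiff_abs(xi, u, tol=1e-12):
    """Membership test xi in d|u| (scalar case): xi = sign(u) if u != 0, |xi| <= 1 if u = 0."""
    return abs(xi - np.sign(u)) <= tol if abs(u) > tol else abs(xi) <= 1.0 + tol

rho, mu, y = 1.6, 0.8, 0.3                        # illustrative relaxation, step size and point y_k
x_next = (1 - rho) * y + rho * soft(y, mu)        # one (RIPA) update from y_k

xi = -(x_next - y) / (rho * mu)                   # candidate element of A(.) in (8)
u = x_next + (1.0 / rho - 1.0) * (x_next - y)     # point at which A is evaluated in (8)
print(in_subdiff_abs(xi, u))                      # True: the inclusion in (8) holds
```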

2.2 The anchor sequence \((h_k)\)

Given \(z\in {\mathcal {H}}\) and a sequence \((x_k)\) generated by (RIPA), let us define the sequence \((h_k)\) by \(h_k=\frac{1}{2}\Vert x_k-z\Vert ^2\). The difference \(h_{k+1}-h_k-\alpha _k(h_k-h_{k-1})\) plays a central role in the study of the asymptotic behavior of \((x_k)\) as \(k\rightarrow +\infty \). Let us start with a basic result that relies on algebraic manipulations of the terms \(h_{k-1}\), \(h_k\) and \(h_{k+1}\).

Lemma 2.1

Let \((x_k)\) be a sequence in \({\mathcal {H}}\), and let \((\alpha _k)\) be a sequence of real numbers. Given \(z\in {\mathcal {H}}\), let us define the sequence \((h_k)\) by \(h_k=\frac{1}{2}\Vert x_k-z\Vert ^2\). We then have

$$\begin{aligned} h_{k+1}- & {} h_k-\alpha _k(h_k-h_{k-1})= \frac{1}{2}(\alpha _k+\alpha _k^2)\Vert x_k-x_{k-1}\Vert ^2+\langle x_{k+1}-y_k, x_{k+1}-z\rangle \nonumber \\&-\frac{1}{2}\Vert x_{k+1}-y_k\Vert ^2, \end{aligned}$$
(9)

where \(y_k=x_k+\alpha _k(x_k-x_{k-1})\).

Proof

Observe that

$$\begin{aligned} \Vert y_k-z\Vert ^2= & {} \Vert x_k+\alpha _k(x_k-x_{k-1})-z\Vert ^2\\= & {} \Vert x_k-z\Vert ^2+\alpha _k^2\Vert x_k-x_{k-1}\Vert ^2+2\alpha _k\langle x_k-z,x_k-x_{k-1}\rangle \\= & {} \Vert x_k-z\Vert ^2+\alpha _k^2\Vert x_k-x_{k-1}\Vert ^2\\&+\,\alpha _k \Vert x_k-z\Vert ^2+ \alpha _k\Vert x_k-x_{k-1}\Vert ^2-\alpha _k\Vert x_{k-1}-z\Vert ^2\\= & {} \Vert x_k-z\Vert ^2+\alpha _k(\Vert x_k-z\Vert ^2-\Vert x_{k-1}-z\Vert ^2)\\&+\,(\alpha _k+\alpha _k^2)\Vert x_k-x_{k-1}\Vert ^2\\= & {} 2[h_k+\alpha _k(h_k-h_{k-1})]+(\alpha _k+\alpha _k^2)\Vert x_k-x_{k-1}\Vert ^2. \end{aligned}$$

We deduce that

$$\begin{aligned} h_{k+1}-h_k-\alpha _k(h_k-h_{k-1})= & {} \frac{1}{2}\Vert x_{k+1}-z\Vert ^2-\frac{1}{2}\Vert y_k-z\Vert ^2 \\&+\,\frac{1}{2}(\alpha _k+\alpha _k^2)\Vert x_k-x_{k-1}\Vert ^2\\= & {} \langle x_{k+1}-y_k,x_{k+1}-z\rangle -\frac{1}{2}\Vert x_{k+1}-y_k\Vert ^2 \\&+\,\frac{1}{2}(\alpha _k+\alpha _k^2)\Vert x_k-x_{k-1}\Vert ^2. \end{aligned}$$

\(\square \)
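Since identity (9) is purely algebraic, it can be sanity-checked on randomly generated points; the dimension, the coefficient \(\alpha \) and the data in the sketch below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
z, x_prev, x, x_next = rng.normal(size=(4, 3))      # arbitrary points of H = R^3
alpha = 0.9                                         # arbitrary inertial coefficient
h = lambda u: 0.5 * np.linalg.norm(u - z) ** 2      # anchor function h = (1/2)||. - z||^2

y = x + alpha * (x - x_prev)
lhs = h(x_next) - h(x) - alpha * (h(x) - h(x_prev))
rhs = (0.5 * (alpha + alpha ** 2) * np.linalg.norm(x - x_prev) ** 2
       + np.dot(x_next - y, x_next - z)
       - 0.5 * np.linalg.norm(x_next - y) ** 2)
print(abs(lhs - rhs))                               # ~ 1e-15: identity (9) is verified
```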

Let us particularize Lemma 2.1 to sequences generated by (RIPA). In the following statement, \({\mathrm{gph}}A\) stands for the graph of A, see “Appendix A”.

Lemma 2.2

Assume (H) and let \((z,q)\in {\mathrm{gph}}A\). Given a sequence \((x_k)\) generated by (RIPA), let \((h_k)\) be the sequence defined by \(h_k=\frac{1}{2}\Vert x_k-z\Vert ^2\). Then we have for every \(k\ge 1\),

$$\begin{aligned}&h_{k+1}-h_k-\alpha _k(h_k-h_{k-1})+\rho _{k}\mu _k\left\langle x_{k+1}+\left( \frac{1}{\rho _k}-1\right) (x_{k+1}-y_k)-z,q\right\rangle \nonumber \\&\quad \le \frac{1}{2}(\alpha _k+\alpha _k^2)\Vert x_k-x_{k-1}\Vert ^2-\frac{2-\rho _k}{2\rho _k}\Vert x_{k+1}-y_k\Vert ^2 . \end{aligned}$$
(10)

Assume moreover that \({\mathrm{zer}}A\ne \emptyset \), and let \(z\in {\mathrm{zer}}A\). The following holds true for every \(k\ge 1\)

$$\begin{aligned} h_{k+1}-h_k-\alpha _k(h_k-h_{k-1})\le \frac{1}{2}(\alpha _k+\alpha _k^2)\Vert x_k-x_{k-1}\Vert ^2-\frac{2-\rho _k}{2\rho _k}\Vert x_{k+1}-y_k\Vert ^2. \end{aligned}$$
(11)

Proof

Iteration (RIPA) can be expressed as

$$\begin{aligned}x_{k+1}-y_k\in -\rho _k\mu _k\, A\left( x_{k+1}+\left( \frac{1}{\rho _k}-1\right) (x_{k+1}-y_k)\right) ,\end{aligned}$$

see (8). Since \(q\in A(z)\), the monotonicity of A yields

$$\begin{aligned}\left\langle x_{k+1}-y_k+\rho _k\mu _k \,q, x_{k+1}+\left( \frac{1}{\rho _k}-1\right) (x_{k+1}-y_k)-z\right\rangle \le 0.\end{aligned}$$

Hence

$$\begin{aligned}&\langle x_{k+1}-y_k, x_{k+1}-z\rangle \le -\left( \frac{1}{\rho _k}-1\right) \Vert x_{k+1}-y_k\Vert ^2 \\&\quad -\rho _k\mu _k\left\langle q, x_{k+1}+\left( \frac{1}{\rho _k}-1\right) (x_{k+1}-y_k)-z\right\rangle . \end{aligned}$$

From equality (9) of Lemma 2.1, we deduce immediately that

$$\begin{aligned}&h_{k+1}-h_k-\alpha _k(h_k-h_{k-1})+\rho _{k}\mu _k\left\langle q, x_{k+1}+\left( \frac{1}{\rho _k}-1\right) (x_{k+1}-y_k)-z\right\rangle \\&\quad \le \frac{1}{2}(\alpha _k+\alpha _k^2)\Vert x_k-x_{k-1}\Vert ^2-\left( \frac{1}{\rho _k}-\frac{1}{2}\right) \Vert x_{k+1}-y_k\Vert ^2, \end{aligned}$$

which is nothing but (10). Finally, if \(z\in {\mathrm{zer}}A\), then inequality (11) is obtained by taking \(q=0\) in (10). \(\square \)

Lemma 2.3

Under (H), assume that \(\rho _k\in ]0,2]\) for every \(k\ge 1\). Suppose that \({\mathrm{zer}}A\ne \emptyset \) and let \(z\in {\mathrm{zer}}A\). Given a sequence \((x_k)\) generated by (RIPA), let \((h_k)\) be the sequence defined by \(h_k=\frac{1}{2}\Vert x_k-z\Vert ^2\). Then we have for every \(k\ge 1\),

$$\begin{aligned}&h_{k+1}-h_k-\alpha _k(h_k-h_{k-1})\nonumber \\&\quad \le \left( \frac{1}{2}(\alpha _k+\alpha _k^2)-\frac{2-\rho _k}{2\rho _k}(1-\alpha _k)^2\right) \Vert x_k-x_{k-1}\Vert ^2\nonumber \\&\qquad -\frac{2-\rho _k}{2\rho _k}(1-\alpha _k)(\Vert x_{k+1}-x_k\Vert ^2-\Vert x_{k}-x_{k-1}\Vert ^2). \end{aligned}$$
(12)

Proof

Let us formulate Lemma 2.2 in a recursive form. Observe that

$$\begin{aligned} \Vert x_{k+1}-y_k\Vert ^2= & {} \Vert x_{k+1}-x_k -\alpha _k(x_k-x_{k-1})\Vert ^2\\= & {} \Vert x_{k+1}-x_k -(x_k-x_{k-1})+(1-\alpha _k)(x_k-x_{k-1})\Vert ^2\\= & {} \Vert x_{k+1}-2x_k +x_{k-1}\Vert ^2+(1-\alpha _k)^2\Vert x_k-x_{k-1}\Vert ^2 \\&+\, 2(1-\alpha _k)\langle x_{k+1}-2x_k +x_{k-1},x_k-x_{k-1}\rangle . \end{aligned}$$

On the other hand, we have

$$\begin{aligned} \Vert x_{k+1}-x_k\Vert ^2= & {} \Vert x_{k+1}-2x_k +x_{k-1}\Vert ^2+\Vert x_k-x_{k-1}\Vert ^2 \\&+\,2\langle x_{k+1}-2x_k +x_{k-1},x_k-x_{k-1}\rangle . \end{aligned}$$

By combining the above equalities, we obtain

$$\begin{aligned} \Vert x_{k+1}-y_k\Vert ^2= & {} \alpha _k \Vert x_{k+1}-2x_k +x_{k-1}\Vert ^2+(1-\alpha _k)^2\Vert x_k-x_{k-1}\Vert ^2\\&+\,(1-\alpha _k)(\Vert x_{k+1}-x_k\Vert ^2-\Vert x_{k}-x_{k-1}\Vert ^2)\\\ge & {} (1-\alpha _k)^2\Vert x_k-x_{k-1}\Vert ^2 \\&+\,(1-\alpha _k)(\Vert x_{k+1}-x_k\Vert ^2-\Vert x_{k}-x_{k-1}\Vert ^2). \end{aligned}$$

Since \(\rho _k\in ]0,2]\) by assumption, the expected inequality follows immediately from (11). \(\square \)

2.3 The sequences \((t_i)\) and \((t_{i,k})\)

Let us introduce the sequences \((t_i)\) and \((t_{i,k})\) which will play a central role in the analysis of algorithm (RIPA) (the sequence \((t_i)\) has already been briefly defined in the introduction). Throughout the paper, we use the convention \(\prod _{j=i}^{i-1}\alpha _j=1\) for \(i\ge 1\). Given i, \(k\ge 1\), we write \(t_{i,k}\) the quantity defined by

$$\begin{aligned} t_{i,k}=\sum _{l=i-1}^{k-1}\left( \prod _{j=i}^l \alpha _j\right) =1+\sum _{l=i}^{k-1}\left( \prod _{j=i}^l \alpha _j\right) \quad \text{ if } i\le k, \end{aligned}$$
(13)

and \(t_{i,k}=0\) if \(i>k\). Observe that for every \(i\ge 1\) and \(k\ge i+1\),

$$\begin{aligned} 1+\alpha _i t_{i+1,k}=1+\alpha _i\left( \sum _{l=i}^{k-1}\left( \prod _{j=i+1}^l \alpha _j\right) \right) =1+\sum _{l=i}^{k-1}\left( \prod _{j=i}^l \alpha _j\right) =t_{i,k}. \end{aligned}$$
(14)

From now on, we assume that

$$\begin{aligned} (K_0)\qquad \sum _{l=i}^{+\infty }\left( \prod _{j=i}^l \alpha _j\right) <+\infty \quad \text{ for } \text{ every } i\ge 1. \end{aligned}$$

We define the sequence \((t_i)\) by

$$\begin{aligned} t_{i}=\sum _{l=i-1}^{+\infty }\left( \prod _{j=i}^l \alpha _j\right) =1+\sum _{l=i}^{+\infty }\left( \prod _{j=i}^l \alpha _j\right) . \end{aligned}$$
(15)

For each \(i\ge 1\), the sequence \((t_{i,k})_k\) converges increasingly to \(t_i\). By letting \(k\rightarrow +\infty \) in (14), we obtain

$$\begin{aligned}1+\alpha _i t_{i+1}=t_i,\end{aligned}$$

for every \(i\ge 1\). Let us summarize the above results.

Lemma 2.4

Let \((\alpha _k)\) be a sequence of nonnegative real numbers. Then we have

  1. (i)

    The sequence \((t_{i,k})\) defined by (13) satisfies the recursive relation: for every \(i\ge 1\) and \(k\ge i+1\)

    $$\begin{aligned} 1+\alpha _i t_{i+1,k}=t_{i,k} \end{aligned}$$
  2. (ii)

    Under \((K_0)\), the sequence \((t_i)\) given by (15) is well-defined and satisfies for every \(i\ge 1\)

    $$\begin{aligned}1+\alpha _i t_{i+1}= t_i.\end{aligned}$$
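The recursion of Lemma 2.4 can be checked numerically. In the sketch below, \(\alpha _k=k/(k+3)\) is an illustrative Nesterov-type choice satisfying \((K_0)\), for which one can compute in closed form \(t_i=1+i/2\).

```python
def t_trunc(i, k, alpha):
    """Truncated sum t_{i,k} of (13), with the convention prod_{j=i}^{i-1} alpha_j = 1."""
    if i > k:
        return 0.0
    total, prod = 1.0, 1.0
    for l in range(i, k):
        prod *= alpha(l)
        total += prod
    return total

alpha = lambda k: k / (k + 3.0)     # illustrative choice: alpha_k -> 1 and (K_0) holds
i, K = 5, 20000
print(t_trunc(i, K, alpha), 1.0 + alpha(i) * t_trunc(i + 1, K, alpha))  # recursion (14)
print(t_trunc(i, K, alpha), 1.0 + i / 2.0)            # t_{i,K} approximates t_i = 1 + i/2
```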

2.4 Weak convergence of the iterates

Our convergence results are based on Lyapunov analysis. The weak convergence of the sequences generated by (RIPA) is based on the Opial lemma [25], which we recall in its discrete form.

Lemma 2.5

(Opial). Let S be a nonempty subset of \(\mathcal {H}\), and \((x_k)\) a sequence of elements of \(\mathcal {H}\) satisfying

  1. (i)

    for every \(z\in S\), \(\lim _{k\rightarrow +\infty }\Vert x_k-z\Vert \) exists;

  2. (ii)

    every sequential weak cluster point of \((x_k)\), as \(k\rightarrow +\infty \), belongs to S.

Then the sequence \((x_k)\) converges weakly as \(k\rightarrow +\infty \) toward some \(x_\infty \in S\).

Let us state the main result of this section.

Theorem 2.6

Under (H), assume that \({\mathrm{zer}}A\ne \emptyset \). Suppose that \(\alpha _k\in [0,1]\) and \(\rho _k\in ]0,2]\) for every \(k\ge 1\). Under \((K_0)\), let \((t_i)\) be the sequence defined by (15). Assume that there exists \(\varepsilon \in ]0,1[\) such that for k large enough,

$$\begin{aligned} (K_1)\qquad \alpha _k\, t_{k+1}\left( 1+\alpha _k+\left[ \frac{2-\rho _k}{\rho _k}(1-\alpha _k)-\frac{2-\rho _{k-1}}{\rho _{k-1}}(1-\alpha _{k-1})\right] _+\right) \le (1-\varepsilon )\,\frac{2-\rho _{k-1}}{\rho _{k-1}}(1-\alpha _{k-1}). \end{aligned}$$

Then for any sequence \((x_k)\) generated by (RIPA), we have

  1. (i)

    \(\displaystyle \sum \nolimits _{i=1}^{+\infty }\frac{2-\rho _{i-1}}{\rho _{i-1}}(1-\alpha _{i-1})\Vert x_i-x_{i-1}\Vert ^2<+\infty \), and as a consequence   \(\displaystyle \sum \nolimits _{i=1}^{+\infty }\alpha _i t_{i+1}\Vert x_i-x_{i-1}\Vert ^2<+\infty \).

  2. (ii)

    \(\displaystyle \sum \nolimits _{i=1}^{+\infty }\rho _i (2-\rho _i)\, t_{i+1}\Vert \mu _i A_{\mu _i}(y_i)\Vert ^2<+\infty \),  and   \(\displaystyle \sum \nolimits _{i=1}^{+\infty }\rho _i (2-\rho _i)\, t_{i+1}\Vert \mu _i A_{\mu _i}(x_i)\Vert ^2<+\infty \).

  3. (iii)

    For any \(z\in {\mathrm{zer}}A\), \(\lim _{k\rightarrow +\infty }\Vert x_k-z\Vert \) exists, and hence \((x_k)\) is bounded.

Assume moreover that

$$\begin{aligned} (K_2)\quad \limsup _{k\rightarrow +\infty } \rho _k<2, \qquad \text{ and }\qquad (K_3)\quad \liminf _{k\rightarrow +\infty } \rho _k>0. \end{aligned}$$

Then the following holds

  1. (iv)

    \(\lim _{k\rightarrow +\infty }\mu _k A_{\mu _k}(y_k)=0\), and   \(\lim _{k\rightarrow +\infty }\mu _k A_{\mu _k}(x_k)=0\).

  2. (v)

    If   \(\liminf _{k\rightarrow +\infty }\mu _k>0\), then there exists \(x_\infty \in {\mathrm{zer}}A\) such that \(x_k\rightharpoonup x_\infty \) weakly in \({\mathcal {H}}\) as \(k\rightarrow +\infty \).

Proof

\(\mathrm{{(i)}}\) Let \(z\in {\mathrm{zer}}A\), and let us set \(h_k=\frac{1}{2}\Vert x_k-z\Vert ^2\) for every \(k\ge 1\). Setting \(a_k=h_k-h_{k-1}\) and

$$\begin{aligned} w_k:= & {} \left( \frac{1}{2}(\alpha _k+\alpha _k^2)-\frac{2-\rho _k}{2\rho _k}(1-\alpha _k)^2\right) \Vert x_k-x_{k-1}\Vert ^2\\&-\frac{2-\rho _k}{2\rho _k}(1-\alpha _k)(\Vert x_{k+1}-x_k\Vert ^2-\Vert x_{k}-x_{k-1}\Vert ^2), \end{aligned}$$

we can rewrite inequality (12) of Lemma 2.3 in the condensed form \(a_{k+1}\le \alpha _k a_k +w_k\). By applying Lemma B.1\(\mathrm{{(i)}}\), we obtain for every \(k\ge 1\)

$$\begin{aligned} h_k-h_0= & {} \sum _{i=1}^k a_i \le t_{1,k}(h_1-h_0)+\sum _{i=1}^{k-1}t_{i+1,k}w_i\\= & {} t_{1,k}(h_1{-}h_0){-}\sum _{i=1}^{k-1}t_{i+1,k}\left[ \left( \frac{2{-}\rho _i}{2\rho _i}(1{-}\alpha _i)^2-\frac{1}{2}(\alpha _i+\alpha _i^2)\right) \Vert x_i-x_{i-1}\Vert ^2\right. \\&\left. +\frac{2-\rho _i}{2\rho _i}(1-\alpha _i)(\Vert x_{i+1}-x_i\Vert ^2-\Vert x_{i}-x_{i-1}\Vert ^2)\right] . \end{aligned}$$

Since \(t_{1,k}\le t_1\) and \(h_k\ge 0\), we deduce that

$$\begin{aligned}&\sum _{i=1}^{k-1}t_{i+1,k}\left[ \left( \frac{2-\rho _i}{\rho _i}(1-\alpha _i)^2-(\alpha _i+\alpha _i^2)\right) \Vert x_i-x_{i-1}\Vert ^2\right. \\&\quad \left. +\, \frac{2-\rho _i}{\rho _i}(1-\alpha _i)(\Vert x_{i+1}-x_i\Vert ^2-\Vert x_{i}-x_{i-1}\Vert ^2)\right] \le C, \end{aligned}$$

with \(C:=2h_0+2t_{1}|h_1-h_0|\). Now observe that (we perform a discrete form of integration by parts)

$$\begin{aligned}&\sum _{i=1}^{k-1}t_{i+1,k}\frac{2-\rho _i}{\rho _i}(1-\alpha _i)(\Vert x_{i+1}-x_i\Vert ^2-\Vert x_{i}-x_{i-1}\Vert ^2)\\&\quad =\sum _{i=1}^{k-1}\left( t_{i,k}\frac{2-\rho _{i-1}}{\rho _{i-1}}(1-\alpha _{i-1}) -t_{i+1,k}\frac{2-\rho _{i}}{\rho _{i}}(1-\alpha _{i})\right) \Vert x_{i}-x_{i-1}\Vert ^2\\&\qquad + t_{k,k}\frac{2-\rho _{k-1}}{\rho _{k-1}}(1-\alpha _{k-1})\Vert x_{k}-x_{k-1}\Vert ^2 -t_{1,k}\frac{2-\rho _{0}}{\rho _{0}}(1-\alpha _{0})\Vert x_{1}-x_{0}\Vert ^2. \end{aligned}$$

Since the second last term is nonnegative and since \(t_{1,k}\le t_1\), we deduce from the above equality that

$$\begin{aligned}&\sum _{i=1}^{k-1}t_{i+1,k}\frac{2-\rho _i}{\rho _i}(1-\alpha _i)(\Vert x_{i+1}-x_i\Vert ^2-\Vert x_{i}-x_{i-1}\Vert ^2)\\&\quad \ge \sum _{i=1}^{k-1}\left( t_{i,k}\frac{2-\rho _{i-1}}{\rho _{i-1}}(1-\alpha _{i-1}) -t_{i+1,k}\frac{2-\rho _{i}}{\rho _{i}}(1-\alpha _{i})\right) \Vert x_{i}-x_{i-1}\Vert ^2 \\&\qquad -\,t_{1}\frac{2-\rho _{0}}{\rho _{0}}(1-\alpha _{0})\Vert x_{1}-x_{0}\Vert ^2. \end{aligned}$$

Collecting the above results, we infer that

$$\begin{aligned} \sum _{i=1}^{k-1}\delta _{i,k}\Vert x_{i}-x_{i-1}\Vert ^2\le C_1, \end{aligned}$$
(16)

with   \(C_1=2h_0+2t_{1}|h_1-h_0|+t_{1}\frac{2-\rho _{0}}{\rho _{0}}(1-\alpha _{0})\Vert x_{1}-x_{0}\Vert ^2\) and

$$\begin{aligned} \delta _{i,k}= & {} t_{i+1,k}\left( \frac{2-\rho _i}{\rho _i}(1-\alpha _i)^2-(\alpha _i+\alpha _i^2)\right) \\&+ t_{i,k}\frac{2-\rho _{i-1}}{\rho _{i-1}}(1-\alpha _{i-1})-t_{i+1,k}\frac{2-\rho _{i}}{\rho _{i}}(1-\alpha _{i}). \end{aligned}$$

Now recall that \(t_{i,k}=1+\alpha _it_{i+1,k}\) for every \(i\ge 1\) and \(k\ge i+1\), see Lemma 2.4\(\mathrm{{(i)}}\). It ensues that

$$\begin{aligned} \delta _{i,k}= & {} \frac{2-\rho _{i-1}}{\rho _{i-1}}(1-\alpha _{i-1})+t_{i+1,k}\left( \frac{2-\rho _i}{\rho _i}(1-\alpha _i)^2-(\alpha _i+\alpha _i^2) \right. \\&\left. + \alpha _i \frac{2-\rho _{i-1}}{\rho _{i-1}}(1-\alpha _{i-1})- \frac{2-\rho _{i}}{\rho _{i}}(1-\alpha _{i})\right) \\= & {} \frac{2-\rho _{i-1}}{\rho _{i-1}}(1-\alpha _{i-1})+t_{i+1,k}\left( -\alpha _i\frac{2-\rho _i}{\rho _i}(1-\alpha _i)-(\alpha _i+\alpha _i^2)\right. \\&\left. + \alpha _i \frac{2-\rho _{i-1}}{\rho _{i-1}}(1-\alpha _{i-1})\right) \\= & {} \frac{2-\rho _{i-1}}{\rho _{i-1}}(1-\alpha _{i-1})-\alpha _i t_{i+1,k}\left( 1+\alpha _i+\frac{2-\rho _i}{\rho _i}(1-\alpha _i)\right. \\&\left. - \frac{2-\rho _{i-1}}{\rho _{i-1}}(1-\alpha _{i-1})\right) \\\ge & {} \frac{2-\rho _{i-1}}{\rho _{i-1}}(1-\alpha _{i-1})-\alpha _i t_{i+1}\left( 1+\alpha _i+\left[ \frac{2-\rho _i}{\rho _i}(1-\alpha _i)\right. \right. \\&\left. \left. - \frac{2-\rho _{i-1}}{\rho _{i-1}}(1-\alpha _{i-1})\right] _+\right) . \end{aligned}$$

We then infer from (16) that for every \(k\ge 2\),

$$\begin{aligned}&\sum _{i=1}^{k-1}\left[ \frac{2-\rho _{i-1}}{\rho _{i-1}}(1-\alpha _{i-1})-\alpha _i t_{i+1}\left( 1+\alpha _i+\left[ \frac{2-\rho _i}{\rho _i}(1-\alpha _i)\right. \right. \right. \\&\quad \left. \left. \left. -\, \frac{2-\rho _{i-1}}{\rho _{i-1}}(1-\alpha _{i-1})\right] _+\right) \right] \Vert x_{i}-x_{i-1}\Vert ^2\le C_1. \end{aligned}$$

By assumption, inequality \((K_1)\) holds true for k large enough. Without loss of generality, we may assume that it is satisfied for every \(k\ge 1\). In view of the above inequality, it ensues that

$$\begin{aligned}\sum _{i=1}^{k-1}\varepsilon \frac{2-\rho _{i-1}}{\rho _{i-1}}(1-\alpha _{i-1})\Vert x_{i}-x_{i-1}\Vert ^2\le C_1.\end{aligned}$$

Taking the limit as \(k\rightarrow +\infty \), we find

$$\begin{aligned}\sum _{i=1}^{+\infty }\frac{2-\rho _{i-1}}{\rho _{i-1}}(1-\alpha _{i-1})\Vert x_{i}-x_{i-1}\Vert ^2\le \frac{C_1}{\varepsilon }<+\infty .\end{aligned}$$

By using again \((K_1)\), we deduce that

$$\begin{aligned} \sum _{i=1}^{+\infty }\alpha _i t_{i+1}\Vert x_{i}-x_{i-1}\Vert ^2<+\infty . \end{aligned}$$
(17)

\(\mathrm{{(ii)}}\) Let us come back to inequality (11). Using that \(\alpha _k\in [0,1]\), we get

$$\begin{aligned} h_{k+1}-h_k-\alpha _k(h_k-h_{k-1})\le \alpha _k\Vert x_k-x_{k-1}\Vert ^2-\frac{2-\rho _k}{2\rho _k}\Vert x_{k+1}-y_k\Vert ^2. \end{aligned}$$
(18)

Since \(x_{k+1}-y_k=-\rho _k\mu _k A_{\mu _k}(y_k)\), this implies that

$$\begin{aligned}h_{k+1}-h_k-\alpha _k(h_k-h_{k-1})\le \alpha _k\Vert x_k-x_{k-1}\Vert ^2-\frac{1}{2}\rho _k(2-\rho _k) \Vert \mu _k A_{\mu _k}(y_k)\Vert ^2.\end{aligned}$$

By invoking Lemma B.1\(\mathrm{{(i)}}\) with \(a_k=h_k-h_{k-1}\) and

$$\begin{aligned}w_k=\alpha _k\Vert x_k-x_{k-1}\Vert ^2-\frac{1}{2}\rho _k(2-\rho _k) \Vert \mu _k A_{\mu _k}(y_k)\Vert ^2,\end{aligned}$$

we obtain for every \(k\ge 1\),

$$\begin{aligned}&h_k-h_0=\sum _{i=1}^k a_i\le t_{1,k}(h_1-h_0) \\&\quad +\,\sum _{i=1}^{k-1}t_{i+1,k}\left[ \alpha _i\Vert x_i-x_{i-1}\Vert ^2-\frac{1}{2}\rho _i (2-\rho _i)\Vert \mu _i A_{\mu _i}(y_i)\Vert ^2 \right] . \end{aligned}$$

Since \(h_k\ge 0\) and \(t_{i+1,k}\le t_{i+1}\), we deduce that

$$\begin{aligned}&\frac{1}{2}\sum _{i=1}^{k-1}\rho _i(2-\rho _i) t_{i+1,k}\Vert \mu _i A_{\mu _i}(y_i)\Vert ^2\le h_0 +t_{1,k}(h_1-h_0) \\&\quad +\,\sum _{i=1}^{k-1}\alpha _i t_{i+1} \Vert x_i-x_{i-1}\Vert ^2. \end{aligned}$$

Recalling from \(\mathrm{{(i)}}\) that \(\sum _{i=1}^{+\infty }\alpha _i t_{i+1} \Vert x_i-x_{i-1}\Vert ^2<+\infty \), we infer that for every \(k\ge 1\),

$$\begin{aligned}\sum _{i=1}^{k-1}\rho _i(2-\rho _i)\, t_{i+1,k}\Vert \mu _i A_{\mu _i}(y_i)\Vert ^2\le C_2,\end{aligned}$$

where we have set

$$\begin{aligned}C_2:=2h_0 +2t_{1}|h_1-h_0|+2\sum _{i=1}^{+\infty }\alpha _i t_{i+1} \Vert x_i-x_{i-1}\Vert ^2<+\infty .\end{aligned}$$

Since \(t_{i+1,k}=0\) for \(i\ge k\), this yields in turn

$$\begin{aligned}\sum _{i=1}^{+\infty }\rho _i (2-\rho _i)\,t_{i+1,k}\Vert \mu _i A_{\mu _i}(y_i)\Vert ^2\le C_2.\end{aligned}$$

Letting k tend to \(+\infty \), the monotone convergence theorem then implies that

$$\begin{aligned} \sum _{i=1}^{+\infty }\rho _i (2-\rho _i)\,t_{i+1}\Vert \mu _i A_{\mu _i}(y_i)\Vert ^2\le C_2<+\infty , \end{aligned}$$
(19)

which gives the first estimate of \(\mathrm{{(ii)}}\). Using the \(\frac{1}{\mu _i}\)-Lipschitz continuity property of \(A_{\mu _i}\), we have

$$\begin{aligned} \Vert A_{\mu _i}(x_i)\Vert ^2\le & {} 2 \Vert A_{\mu _i}(y_i)\Vert ^2+2 \Vert A_{\mu _i}(x_i)-A_{\mu _i}(y_i)\Vert ^2\\\le & {} 2 \Vert A_{\mu _i}(y_i)\Vert ^2+\frac{2}{\mu _i^2} \Vert x_i-y_i\Vert ^2\\= & {} 2 \Vert A_{\mu _i}(y_i)\Vert ^2+\frac{2\alpha _i^2}{\mu _i^2} \Vert x_i-x_{i-1}\Vert ^2. \end{aligned}$$

It ensues that

$$\begin{aligned} \rho _i(2-\rho _i)\, t_{i+1}\Vert \mu _i A_{\mu _i}(x_i)\Vert ^2\le & {} 2 \rho _i(2-\rho _i)\, t_{i+1}\Vert \mu _i A_{\mu _i}(y_i)\Vert ^2 \\&+ 2\alpha _i^2\rho _i(2-\rho _i)\, t_{i+1} \Vert x_i-x_{i-1}\Vert ^2\\\le & {} 2 \rho _i(2-\rho _i)\, t_{i+1}\Vert \mu _i A_{\mu _i}(y_i)\Vert ^2 \\&+ 2\alpha _i t_{i+1} \Vert x_i-x_{i-1}\Vert ^2, \end{aligned}$$

where we have used \(\alpha _i\le 1\) and \(\rho _i(2-\rho _i)\le 1\) in the second inequality. The first (resp. second) term of the above right-hand side is summable by (19) [resp. (17)]. We deduce that the left-hand side is also summable. This proves the second estimate of \(\mathrm{{(ii)}}\).

\(\mathrm{{(iii)}}\) From (18), we derive that for every \(k\ge 1\),

$$\begin{aligned}h_{k+1}-h_k\le \alpha _k(h_k-h_{k-1})+ \alpha _k\Vert x_k-x_{k-1}\Vert ^2.\end{aligned}$$

Recall that, from \(\mathrm{{(i)}}\), we have \(\sum _{i=1}^{+\infty }\alpha _i t_{i+1}\Vert x_{i}-x_{i-1}\Vert ^2<+\infty \). Applying Lemma B.1\(\mathrm{{(ii)}}\) with \(a_k=h_k-h_{k-1}\) and \(w_k=\alpha _k\Vert x_k-x_{k-1}\Vert ^2\), we infer that \(\sum _{k=1}^{+\infty }(h_k-h_{k-1})_+<+\infty \). This classically implies that \(\lim _{k\rightarrow +\infty } h_k\) exists. Thus, we have obtained that \(\lim _{k\rightarrow +\infty } \Vert x_k-z\Vert \) exists for every \(z\in {\mathrm{zer}}A\), whence in particular the boundedness of the sequence \((x_k)\).

\(\mathrm{(iv)}\) From \((K_2)\) and \((K_3)\), there exist \(\underline{r}>0\) and \(\overline{r}<2\) such that \(\rho _k\in [\underline{r},\overline{r}]\) for k large enough. We deduce from the first estimate of \(\mathrm{{(ii)}}\) that \(\sum _{i=1}^{+\infty } t_{i+1}\Vert \mu _i A_{\mu _i}(y_i)\Vert ^2<+\infty \), hence \(\lim _{i\rightarrow +\infty } t_{i+1}\Vert \mu _i A_{\mu _i}(y_i)\Vert ^2=~0\). Since \(t_i\ge 1\) for every \(i\ge 1\), this implies in turn that \(\lim _{i\rightarrow +\infty } \Vert \mu _i A_{\mu _i}(y_i)\Vert =0\). The proof of \(\lim _{i\rightarrow +\infty } \Vert \mu _i A_{\mu _i}(x_i)\Vert =0\) follows the same lines.

\(\mathrm{(v)}\) To prove the weak convergence of \((x_k)\) as \(k\rightarrow +\infty \), we use the Opial lemma with \(S={\mathrm{zer}}A\). Item \(\mathrm{{(iii)}}\) shows the first condition of the Opial lemma. For the second one, let \((x_{k_n})\) be a subsequence of \((x_k)\) which converges weakly to some \(\overline{x^{}}\). By \(\mathrm{(iv)}\), we have \(\lim _{k\rightarrow +\infty }\mu _k A_{\mu _k}(x_k)=0\) strongly in \({\mathcal {H}}\). Since \(\liminf _{k\rightarrow +\infty }\mu _k>0\), we also have \(\lim _{k\rightarrow +\infty }A_{\mu _k}(x_k)=0\) strongly in \({\mathcal {H}}\). Passing to the limit in

$$\begin{aligned}A_{\mu _{k_n}}(x_{k_n})\in A\left( x_{k_n}-\mu _{k_n}A_{\mu _{k_n}}(x_{k_n})\right) ,\end{aligned}$$

and invoking the graph-closedness of the maximally monotone operator A for the weak–strong topology in \({\mathcal {H}}\times {\mathcal {H}}\), we find \(0\in A(\overline{x^{}})\). This shows that \(\overline{x^{}}\in {\mathrm{zer}}A\), which completes the proof. \(\square \)

Remark 2.7

The main role of assumption \((K_1)\) is to guarantee the summability condition

$$\begin{aligned} \sum _{i=1}^{+\infty }\alpha _i t_{i+1} \Vert x_i-x_{i-1}\Vert ^2<+\infty , \end{aligned}$$
(20)

obtained in \(\mathrm{{(i)}}\). A careful examination of the proof of Theorem 2.6 shows that conclusions \(\mathrm{{(ii)}}\), \(\mathrm{{(iii)}}\), \(\mathrm{(iv)}\) and \(\mathrm{(v)}\) hold true if we assume condition (20) directly. The latter condition involves the sequence \((x_k)\), which is a priori unknown. In practice, however, it is easy to enforce by means of a suitable on-line rule.
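As an illustration (one possible rule among many, and not a rule taken from the paper), the inertial coefficient can be truncated on-line so that the running sum in (20) never exceeds a user-prescribed summable budget.

```python
def safeguarded_alpha(alpha_k, t_next, dx_norm2, budget_k, state):
    """Return an admissible inertial coefficient: accept alpha_k if the new term
    alpha_k * t_{k+1} * ||x_k - x_{k-1}||^2 keeps the running sum below the cumulative
    budget sum_{i <= k} budget_i; otherwise return the largest admissible value."""
    state["cap"] += budget_k
    if dx_norm2 == 0.0 or state["sum"] + alpha_k * t_next * dx_norm2 <= state["cap"]:
        state["sum"] += alpha_k * t_next * dx_norm2
        return alpha_k
    alpha_adm = max(0.0, (state["cap"] - state["sum"]) / (t_next * dx_norm2))
    state["sum"] += alpha_adm * t_next * dx_norm2
    return alpha_adm

state = {"sum": 0.0, "cap": 0.0}
# Inside a (RIPA) loop one would call, at iteration k,
#   alpha_k = safeguarded_alpha(alpha_k, t_next, dx_norm2, budget_k=1.0 / k**2, state=state)
# with dx_norm2 = ||x_k - x_{k-1}||^2, which enforces (20) by construction.
```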

Let us now particularize Theorem 2.6 to the case \(\alpha _k=0\) for every \(k\ge 1\), corresponding to the absence of inertia in algorithm (RIPA). In this framework, assumptions \((K_0)\) and \((K_1)\) are automatically satisfied, and moreover \(t_i=1\) for every \(i\ge 1\). We then derive from Theorem 2.6 the following result, which is a particular case of [18, Theorem 3]. Note that the latter also takes into account the presence of errors in the computation of the resolvents.

Corollary 2.8

(Bertsekas–Eckstein [18]). Under (H), assume that \({\mathrm{zer}}A\ne \emptyset \), and that \(\rho _k\in ]0,2]\) for every \(k\ge 1\). Then, for any sequence \((x_k)\) generated by (RPA)

$$\begin{aligned} {\mathrm{(RPA)}}\qquad x_{k+1}=(1-\rho _k)x_k+\rho _k J_{\mu _k A}(x_k), \end{aligned}$$

we have

  1. (i)

    \(\displaystyle \sum \nolimits _{i=1}^{+\infty }\frac{2-\rho _{i-1}}{\rho _{i-1}}\Vert x_i-x_{i-1}\Vert ^2<+\infty \).

  2. (ii)

    For any \(z\in {\mathrm{zer}}A\), \(\lim _{k\rightarrow +\infty }\Vert x_k-z\Vert \) exists, and hence \((x_k)\) is bounded.

Assume moreover that \(\limsup _{k\rightarrow +\infty } \rho _k<2\) and \(\liminf _{k\rightarrow +\infty } \rho _k>0.\) Then the following holds

  1. (iii)

    \(\lim _{k\rightarrow +\infty }\mu _k A_{\mu _k}(x_k)=0\).

  2. (iv)

    If   \(\liminf _{k\rightarrow +\infty }\mu _k>0\), then there exists \(x_\infty \in {\mathrm{zer}}A\) such that \(x_k\rightharpoonup x_\infty \) weakly in \({\mathcal {H}}\) as \(k\rightarrow +\infty \).

Let us now assume that \(\rho _k=1\) for every \(k\ge 1\). In such a case, the algorithm (RIPA) boils down to the inertial proximal iteration. We obtain directly the following corollary of Theorem 2.6.

Corollary 2.9

Under (H), assume that \({\mathrm{zer}}A\ne \emptyset \), and that \(\alpha _k\in [0,1]\) for every \(k\ge 1\). Suppose \((K_0)\) and let \((t_i)\) be the sequence defined by (15). Assume that there exists \(\varepsilon \in ]0,1[\) such that for k large enough,

$$\begin{aligned} (K_1)\qquad \alpha _k\, t_{k+1}\left( 1+\alpha _k+\left[ \alpha _{k-1}-\alpha _k\right] _+\right) \le (1-\varepsilon )\,(1-\alpha _{k-1}). \end{aligned}$$

Then for any sequence \((x_k)\) generated by (IPA)

$$\begin{aligned} {\mathrm{(IPA)}}\qquad \left\{ \begin{array}{l} y_k=x_k+\alpha _k(x_k-x_{k-1})\\ x_{k+1}=J_{\mu _k A}(y_k), \end{array} \right. \end{aligned}$$

we have

  1. (i)

    \(\displaystyle \sum \nolimits _{i=1}^{+\infty }(1-\alpha _{i-1})\Vert x_i-x_{i-1}\Vert ^2<+\infty \), and as a consequence   \(\displaystyle \sum \nolimits _{i=1}^{+\infty }\alpha _i t_{i+1}\Vert x_i-x_{i-1}\Vert ^2<+\infty \).

  2. (ii)

    \(\displaystyle \sum \nolimits _{i=1}^{+\infty } t_{i+1}\Vert \mu _i A_{\mu _i}(y_i)\Vert ^2<+\infty \), and   \(\displaystyle \sum \nolimits _{i=1}^{+\infty } t_{i+1}\Vert \mu _i A_{\mu _i}(x_i)\Vert ^2<+\infty \).

  3. (iii)

    For any \(z\in {\mathrm{zer}}A\), \(\lim _{k\rightarrow +\infty }\Vert x_k-z\Vert \) exists, and hence \((x_k)\) is bounded.

  4. (iv)

    \(\lim _{k\rightarrow +\infty }\mu _k A_{\mu _k}(y_k)=0\), and   \(\lim _{k\rightarrow +\infty }\mu _k A_{\mu _k}(x_k)=0\).

  5. (v)

    If   \(\liminf _{k\rightarrow +\infty }\mu _k>0\), then there exists \(x_\infty \in {\mathrm{zer}}A\) such that \(x_k\rightharpoonup x_\infty \) weakly in \({\mathcal {H}}\) as \(k\rightarrow +\infty \).

Remark 2.10

Following Remark 2.7, items \(\mathrm{{(ii)}}\) to \(\mathrm{(v)}\) of Corollary 2.9 hold true if we suppose that

$$\begin{aligned}\sum _{i=1}^{+\infty }\alpha _i t_{i+1}\Vert x_i-x_{i-1}\Vert ^2<+\infty .\end{aligned}$$

Assume moreover that there exists \(\overline{\alpha }\in [0,1[\) such that \(\alpha _k\in [0,\overline{\alpha }]\) for every \(k\ge 1\). Then it is easy to show that \(t_k\le 1/(1-\overline{\alpha })\) for every \(k\ge 1\). Hence the above summability condition is ensured by the following

$$\begin{aligned} \sum _{i=1}^{+\infty }\alpha _i \Vert x_i-x_{i-1}\Vert ^2<+\infty . \end{aligned}$$
(21)

To summarize, if \(\alpha _k\in [0,\overline{\alpha }]\) for every \(k\ge 1\), and if condition (21) is satisfied, then we obtain conclusions \(\mathrm{{(ii)}}\) to \(\mathrm{(v)}\) of Corollary 2.9. This is precisely the result stated in [3, Theorem 2.1].

As a consequence of Corollary 2.9, we also find the result of [3, Proposition 2.1], when \(\alpha _k\le \overline{\alpha }<\frac{1}{3}\).

Corollary 2.11

(Alvarez–Attouch [3]). Under (H), assume that \({\mathrm{zer}}A\ne \emptyset \). Suppose that there exists \(\overline{\alpha }\in [0,1/3[\) such that \(\alpha _k\in [0,\overline{\alpha }]\) for every \(k\ge 1\). Then for any sequence \((x_k)\) generated by (IPA), we have

  1. (i)

    \(\displaystyle \sum \nolimits _{i=1}^{+\infty }\Vert x_i-x_{i-1}\Vert ^2<+\infty \).

  2. (ii)

    \(\displaystyle \sum \nolimits _{i=1}^{+\infty } \Vert \mu _i A_{\mu _i}(x_i)\Vert ^2<+\infty \).

  3. (iii)

    For any \(z\in {\mathrm{zer}}A\), \(\lim _{k\rightarrow +\infty }\Vert x_k-z\Vert \) exists, and hence \((x_k)\) is bounded.

  4. (iv)

    \(\lim _{k\rightarrow +\infty }\mu _k A_{\mu _k}(x_k)=0\).

  5. (v)

    If   \(\liminf _{k\rightarrow +\infty }\mu _k>0\), there exists \(x_\infty \in {\mathrm{zer}}A\) such that \(x_k\rightharpoonup x_\infty \) weakly in \({\mathcal {H}}\) as \(k\rightarrow +\infty \).

Proof

Since \(\alpha _k\le \overline{\alpha }<1\) for every \(k\ge 1\), it is immediate to check that \((K_0)\) is satisfied and that \(t_k\le \frac{1}{1-\overline{\alpha }}\) for every \(k\ge 1\). On the one hand, observe that for every \(k\ge 1\),

$$\begin{aligned} \alpha _k t_{k+1}\left( 1+\alpha _k+\left[ \alpha _{k-1}-\alpha _{k}\right] _+\right)= & {} \alpha _k t_{k+1}(1+\max (\alpha _k,\alpha _{k-1}))\\\le & {} \frac{\overline{\alpha }}{1-\overline{\alpha }}(1+\overline{\alpha }). \end{aligned}$$

On the other hand   \(1-\alpha _{k-1}\ge 1-\overline{\alpha }.\) It ensues that \((K_1)\) is satisfied if there exists \(\varepsilon \in ]0,1[\) such that

$$\begin{aligned}(1-\varepsilon ) (1-\overline{\alpha })\ge \frac{\overline{\alpha }}{1-\overline{\alpha }}(1+\overline{\alpha }).\end{aligned}$$

The latter condition is equivalent to \((1-\overline{\alpha })^2>\overline{\alpha }(1+\overline{\alpha })\), that is, after expanding the squares, to \(1-2\overline{\alpha }>\overline{\alpha }\), which in turn is equivalent to \(\overline{\alpha }<1/3\). Therefore assumption \((K_1)\) is satisfied, and it suffices to apply Corollary 2.9. \(\square \)

By taking constant parameters \(\alpha _k\) and \(\rho _k\), we obtain the following consequence of Theorem 2.6.

Corollary 2.12

Under (H), assume that \({\mathrm{zer}}A\ne \emptyset \). Suppose that \(\alpha _k \equiv \alpha \in [0,1[\), \(\rho _k \equiv \rho \in ]0,2[\) for every \(k\ge 1\), and that

$$\begin{aligned} \frac{2-\rho }{\rho }(1-\alpha )^2 > \alpha (1 + \alpha ). \end{aligned}$$
(22)

Then for any sequence \((x_k)\) generated by (RIPA), we have

  1. (i)

    \(\displaystyle \sum \nolimits _{i=1}^{+\infty }\Vert x_i-x_{i-1}\Vert ^2<+\infty \).

  2. (ii)

    \(\displaystyle \sum \nolimits _{i=1}^{+\infty } \Vert \mu _i A_{\mu _i}(x_i)\Vert ^2<+\infty \).

  3. (iii)

    For any \(z\in {\mathrm{zer}}A\), \(\lim _{k\rightarrow +\infty }\Vert x_k-z\Vert \) exists, and hence \((x_k)\) is bounded.

  4. (iv)

    \(\lim _{k\rightarrow +\infty }\mu _k A_{\mu _k}(x_k)=0\).

  5. (v)

    If   \(\liminf _{k\rightarrow +\infty }\mu _k>0\), there exists \(x_\infty \in {\mathrm{zer}}A\) such that \(x_k\rightharpoonup x_\infty \) weakly in \({\mathcal {H}}\) as \(k\rightarrow +\infty \).

Proof

Since \(\alpha _k \equiv \alpha \in [0,1[\), we have for every \(i\ge 1\),   \(t_i=\sum _{l=i-1}^{+\infty }\alpha ^{l-i+1}=\frac{1}{1-\alpha }<+\infty .\) Hence condition \((K_0)\) holds true. Using that \(\alpha _k\) and \(\rho _k\) are constant, condition \((K_1)\) then amounts to

$$\begin{aligned}(1-\varepsilon )\frac{2-\rho }{\rho }(1-\alpha ) \ge \frac{\alpha }{1-\alpha } (1 + \alpha ),\end{aligned}$$

which is equivalent to (22). Therefore, all the assumptions of Theorem 2.6 are met, giving the result. \(\square \)

Remark 2.13

The above result gives some indication of the balance between the inertial effect and the relaxation effect. Inequality (22) is equivalent to \(\rho < \frac{2(1- \alpha )^2}{2 \alpha ^2 -\alpha +1} \). Therefore, for a given \(0<\alpha <1\), the maximum admissible value of the relaxation parameter is \(\rho _m (\alpha )= \frac{2(1- \alpha )^2}{2 \alpha ^2 -\alpha +1} \). Elementary differential calculus shows that the function \(\alpha \mapsto \rho _m (\alpha )\) is decreasing on [0, 1]. Thus, as expected, when the inertial effect increases (\(\alpha \nearrow \)), the admissible relaxation decreases (\(\rho _m \searrow \)), and vice versa; see also [20]. When \(\alpha \rightarrow 0\), the limiting value of \(\rho _m (\alpha ) \) is 2, which is in accordance with Corollary 2.8. When \(\alpha \rightarrow 1\), the limiting value of \(\rho _m (\alpha )\) is zero, which is in accordance with the existing results concerning the case \(\alpha _k \rightarrow 1\).
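The trade-off just described is easy to tabulate; the short sketch below evaluates \(\rho _m(\alpha )\) on an arbitrary grid of values of \(\alpha \).

```python
# Maximum admissible relaxation rho_m(alpha) = 2(1 - alpha)^2 / (2 alpha^2 - alpha + 1),
# as in Remark 2.13; the grid of alpha values is arbitrary.
rho_m = lambda a: 2.0 * (1.0 - a) ** 2 / (2.0 * a ** 2 - a + 1.0)

for a in [0.0, 0.1, 0.25, 0.5, 0.75, 0.9]:
    print(f"alpha = {a:4.2f}   rho_m(alpha) = {rho_m(a):.4f}")
# The printed values decrease from 2 (no inertia) toward 0 (alpha close to 1),
# quantifying the balance between inertia and over-relaxation.
```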

2.5 Case of a possibly vanishing parameter \(\rho _k\)

When \(\alpha _k \rightarrow 1\), which is the case of the Nesterov accelerated method, we must take \(\rho _k \rightarrow 0\) to satisfy the condition \((K_1)\). Consequently, Theorem 2.6 does not make it possible to obtain the convergence of the iterates of (RIPA) in the case \(\alpha _k \rightarrow 1\). The following result completes Theorem 2.6 by considering the case of a possibly vanishing parameter \(\rho _k\). In the upcoming statement, assumption \((K_3)\) is removed and replaced with an alternative set of assumptions, namely \((K_4)\)\((K_5)\).

Theorem 2.14

Under (H), assume that \({\mathrm{zer}}A\ne \emptyset \). Suppose that the sequences \((\alpha _k)\) and \((\rho _k)\) satisfy \(\alpha _k\in [0,1]\) and \(\rho _k\in ]0,2]\) for every \(k\ge 1\), together with \((K_0)\)\((K_1)\). Then for any sequence \((x_k)\) generated by (RIPA),

  1. (i)

    There exists a constant \(C\ge 0\) such that for every \(k\ge 1\),

    $$\begin{aligned}\Vert x_{k+1}-x_k\Vert \le C\, \sum _{i=1}^k\left[ \left( \prod _{j=i+1}^k \alpha _j\right) \rho _i\right] .\end{aligned}$$

Assume additionally \((K_2)\), together with

$$\begin{aligned} (K_4)\qquad \sum _{i=1}^{k}\left( \prod _{j=i+1}^k \alpha _j\right) \rho _i={\mathcal {O}}(\rho _k t_{k+1}),\quad \frac{|\mu _{k+1}-\mu _k|}{\mu _{k+1}}={\mathcal {O}}(\rho _k t_{k+1}),\quad \rho _k t_{k+1}={\mathcal {O}}(\rho _{k+1} t_{k+2})\quad \text{ as } k\rightarrow +\infty ;\\ (K_5)\qquad \sum _{i=1}^{+\infty }\rho _i t_{i+1}=+\infty . \end{aligned}$$

Then the following holds

  1. (ii)

    \(\lim _{k\rightarrow +\infty }\mu _k A_{\mu _k}(x_k)=0\). If  \(\liminf _{k\rightarrow +\infty }\mu _k>0\), then there exists \(x_\infty \in {\mathrm{zer}}A\) such that \(x_k\rightharpoonup x_\infty \) weakly in \({\mathcal {H}}\) as \(k\rightarrow +\infty \).

Finally assume that condition \((K_5)\) is not satisfied, i.e.  \(\displaystyle \sum \nolimits _{i=1}^{+\infty }\rho _i t_{i+1}<+\infty \). Then we obtain

  1. (iii)

    \(\displaystyle \sum \nolimits _{i=1}^{+\infty }\Vert x_i-x_{i-1}\Vert <+\infty \), and hence the sequence \((x_k)\) converges strongly toward some \(x_\infty \in {\mathcal {H}}\).

Proof

\(\mathrm{{(i)}}\) Iteration (RIPA) can be rewritten as

$$\begin{aligned}x_{k+1}-x_k=\alpha _k(x_k-x_{k-1})-\rho _k\mu _k A_{\mu _k}(y_k),\end{aligned}$$

see (7). Taking the norm of each member, we find

$$\begin{aligned} \Vert x_{k+1}-x_k\Vert \le \alpha _k \Vert x_k-x_{k-1}\Vert +\rho _k\mu _k \Vert A_{\mu _k}(y_k)\Vert . \end{aligned}$$
(23)

On the other hand, for \(z\in {\mathrm{zer}}A={\mathrm{zer}}A_{\mu _k}\), the \(\frac{1}{\mu _k}\)-Lipschitz continuity of \(A_{\mu _k}\) yields

$$\begin{aligned}\Vert A_{\mu _k}(y_k)\Vert \le \frac{1}{\mu _k} \Vert y_k-z\Vert .\end{aligned}$$

Recall that the sequence \((x_k)\) is bounded by Theorem 2.6\(\mathrm{{(iii)}}\). Since \(\alpha _k\in [0,1]\), the sequence \((y_k)\) is also bounded. From the above inequality, we deduce the existence of \(C_3\ge 0\) such that \(\Vert A_{\mu _k}(y_k)\Vert \le \frac{C_3}{\mu _k}\) for every \(k\ge 1\). In view of (23), we infer that

$$\begin{aligned} \Vert x_{k+1}-x_k\Vert \le \alpha _k \Vert x_k-x_{k-1}\Vert +C_3\rho _k. \end{aligned}$$
(24)

An immediate recurrence shows that for every \(k\ge 1\),

$$\begin{aligned}\Vert x_{k+1}-x_k\Vert \le \left( \prod _{j=1}^k \alpha _j\right) \Vert x_{1}-x_0\Vert + C_3\sum _{i=1}^k\left[ \left( \prod _{j=i+1}^k \alpha _j\right) \rho _i\right] ,\end{aligned}$$

with the convention \(\prod _{j=k+1}^k \alpha _j=1\). Since \(\alpha _k\in [0,1]\), we have \(\prod _{j=i+1}^k \alpha _j\ge \prod _{j=1}^k \alpha _j\) and hence

$$\begin{aligned}\sum _{i=1}^k\left[ \left( \prod _{j=i+1}^k \alpha _j\right) \rho _i\right] \ge \left( \prod _{j=1}^k \alpha _j\right) \sum _{i=1}^k\rho _i\ge \left( \prod _{j=1}^k \alpha _j\right) \rho _1.\end{aligned}$$

Setting \(C_4:=\Vert x_{1}-x_0\Vert /\rho _1+C_3\), we deduce that for every \(k\ge 1\),

$$\begin{aligned} \Vert x_{k+1}-x_k\Vert \le C_4\sum _{i=1}^k\left[ \left( \prod _{j=i+1}^k \alpha _j\right) \rho _i\right] . \end{aligned}$$
(25)

\(\mathrm{{(ii)}}\) Recall the estimate of Theorem 2.6\(\mathrm{{(ii)}}\)

$$\begin{aligned} \sum _{i=1}^{+\infty }\rho _i (2-\rho _i)\,t_{i+1}\Vert \mu _i A_{\mu _i}(x_i)\Vert ^2<+\infty . \end{aligned}$$
(26)

According to \((K_2)\), there exists \(\bar{r}\in ]0,2[\) such that \(\rho _k\le \bar{r}\) for k large enough. We deduce from (26) that

$$\begin{aligned} \sum _{i=1}^{+\infty }\rho _i\,t_{i+1}\Vert \mu _i A_{\mu _i}(x_i)\Vert ^2<+\infty . \end{aligned}$$
(27)

Since the operator \(\mu _k A_{\mu _k}\) is 1-Lipschitz continuous, we have

$$\begin{aligned}\Vert \mu _k A_{\mu _k}(x_k)\Vert \le \Vert x_k-z\Vert \le C_5,\end{aligned}$$

with \(C_5:=\sup _{k\ge 1}\Vert x_k-z\Vert <+\infty \). It ensues that

$$\begin{aligned}&|\Vert \mu _{k+1} A_{\mu _{k+1}}(x_{k+1})\Vert ^2-\Vert \mu _{k} A_{\mu _{k}}(x_{k})\Vert ^2| \nonumber \\&\quad \le 2C_5 \Vert \mu _{k+1} A_{\mu _{k+1}}(x_{k+1})-\mu _{k} A_{\mu _{k}}(x_{k})\Vert . \end{aligned}$$
(28)

By applying [7, Lemma A.4] with \(\gamma =\mu _{k+1}\), \(\delta =\mu _k\), \(x=x_{k+1}\) and \(y=x_k\), we find

$$\begin{aligned} \Vert \mu _{k+1} A_{\mu _{k+1}}(x_{k+1})-\mu _{k} A_{\mu _{k}}(x_{k})\Vert\le & {} 2\Vert x_{k+1}-x_k\Vert +2\Vert x_{k+1}-z\Vert \frac{|\mu _{k+1}-\mu _k|}{\mu _{k+1}}\\\le & {} 2\Vert x_{k+1}-x_k\Vert +2C_5\frac{|\mu _{k+1}-\mu _k|}{\mu _{k+1}}. \end{aligned}$$

In view of (25), we deduce that for every \(k\ge 1\),

$$\begin{aligned}&\Vert \mu _{k+1} A_{\mu _{k+1}}(x_{k+1})-\mu _{k} A_{\mu _{k}}(x_{k})\Vert \\&\quad \le 2C_4\sum _{i=1}^k\left[ \left( \prod _{j=i+1}^k \alpha _j\right) \rho _i\right] +2C_5\frac{|\mu _{k+1}-\mu _k|}{\mu _{k+1}}. \end{aligned}$$

Recalling the assumption \((K_4)\), we obtain the existence of \(C_6\ge 0\) such that for k large enough,

$$\begin{aligned}\Vert \mu _{k+1} A_{\mu _{k+1}}(x_{k+1})-\mu _{k} A_{\mu _{k}}(x_{k})\Vert \le C_6 \rho _k t_{k+1}.\end{aligned}$$

Using (28), we infer that

$$\begin{aligned}\left| \Vert \mu _{k+1} A_{\mu _{k+1}}(x_{k+1})\Vert ^2-\Vert \mu _{k} A_{\mu _{k}}(x_{k})\Vert ^2\right| \le 2 C_5 C_6 \rho _k t_{k+1}.\end{aligned}$$

It follows that for every \(k\ge 1\),

$$\begin{aligned}&\sum _{i=1}^k\left| \Vert \mu _{i+1} A_{\mu _{i+1}}(x_{i+1})\Vert ^4-\Vert \mu _{i} A_{\mu _{i}}(x_{i})\Vert ^4\right| \\&\quad \le 2 C_5 C_6 \sum _{i=1}^k\rho _i t_{i+1}\left( \Vert \mu _{i+1} A_{\mu _{i+1}}(x_{i+1})\Vert ^2+\Vert \mu _{i} A_{\mu _{i}}(x_{i})\Vert ^2\right) . \end{aligned}$$

Given the estimate (27), together with the assumption \(\rho _{i} t_{i+1}={\mathcal {O}}(\rho _{i+1} t_{i+2})\) as \(i\rightarrow +\infty \), we deduce that

$$\begin{aligned}\sum _{i=1}^{+\infty }\left| \Vert \mu _{i+1} A_{\mu _{i+1}}(x_{i+1})\Vert ^4-\Vert \mu _{i} A_{\mu _{i}}(x_{i})\Vert ^4\right| <+\infty .\end{aligned}$$

From a classical result, this implies that \(\lim _{k\rightarrow +\infty }\Vert \mu _{k} A_{\mu _{k}}(x_{k})\Vert ^4\) exists, which entails in turn that \(\lim _{k\rightarrow +\infty }\Vert \mu _{k} A_{\mu _{k}}(x_{k})\Vert \) exists. Using again the estimate (27), together with the assumption \((K_5)\), we immediately conclude that \(\lim _{k\rightarrow +\infty }\Vert \mu _{k} A_{\mu _{k}}(x_{k})\Vert =0\). The proof of the weak convergence of the sequence \((x_k)\) follows the same lines as in Theorem 2.6\(\mathrm{(v)}\).

\(\mathrm{{(iii)}}\) Let us now assume that \(\sum _{i=1}^{+\infty }\rho _it_{i+1}<+\infty \). Recall from inequality (24) that

$$\begin{aligned}\Vert x_{k+1}-x_k\Vert \le \alpha _k \Vert x_k-x_{k-1}\Vert +C_3\rho _k.\end{aligned}$$

By applying Lemma B.1\(\mathrm{{(ii)}}\) with \(a_k=\Vert x_k-x_{k-1}\Vert \) and \(w_k=C_3\rho _k\), we obtain that \(\sum _{i=1}^{+\infty }\Vert x_i-x_{i-1}\Vert <~+\infty \). The last assertion is immediate. \(\square \)

3 Application to particular classes of parameters \(\alpha _k\), \(\mu _k\) and \(\rho _k\)

3.1 Some criteria for \((K_0)\) and \((K_1)\)

The following proposition provides a simple criterion for obtaining an asymptotic equivalent of \(t_k\).

Proposition 3.1

Let \((\alpha _k)\) be a sequence such that \(\alpha _k\in [0,1[\) for every \(k\ge 1\). Assume that

$$\begin{aligned} \lim _{k\rightarrow +\infty }\left( \frac{1}{1-\alpha _{k+1}}-\frac{1}{1-\alpha _{k}}\right) =c, \end{aligned}$$
(29)

for some \(c\in [0,1[\). Then we have

  1. (i)

    The property \((K_0)\) is satisfied, and

    $$\begin{aligned}t_{k+1}\sim \frac{1}{(1-c)(1-\alpha _k)}\quad \text{ as } k\rightarrow +\infty .\end{aligned}$$
  2. (ii)

    The equivalence \(1-\alpha _k \sim 1-\alpha _{k+1}\) holds true as \(k\rightarrow +\infty \), hence \(t_{k+1}\sim t_{k+2}\) as \(k\rightarrow +\infty \).

  3. (iii)

    \(\displaystyle \sum \nolimits _{k=1}^{+\infty }(1-\alpha _k)=+\infty \).

Proof

\(\mathrm{{(i)}}\) This result was proved by the authors in [4, Proposition 15].

\(\mathrm{{(ii)}}\) First assume that \(c\in ]0,1[\). By a standard summation procedure, we infer from (29) that

$$\begin{aligned}\frac{1}{1-\alpha _{k}}\sim c k \quad \text{ as } k\rightarrow +\infty .\end{aligned}$$

It ensues that \(1-\alpha _{k}\sim \frac{1}{c k}\) as \(k\rightarrow +\infty \), and hence clearly \(1-\alpha _{k}\sim 1-\alpha _{k+1}\) as \(k\rightarrow +\infty \). Now assume that \(c=0\). Multiplying (29) by \(1-\alpha _k\), we find

$$\begin{aligned} \frac{1-\alpha _{k}}{1-\alpha _{k+1}}=1+ o(1-\alpha _{k})\rightarrow 1\quad \text{ as } k\rightarrow +\infty , \end{aligned}$$

because \(\alpha _k\in [0,1[\). This completes the proof of the equivalence \(1-\alpha _k \sim 1-\alpha _{k+1}\) as \(k\rightarrow +\infty \). The last assertion then follows immediately from \(\mathrm{{(i)}}\).

\(\mathrm{{(iii)}}\) Fix \(\varepsilon >0\). In view of (29), there exists \(k_0\ge 1\) such that for every \(k\ge k_0\),

$$\begin{aligned}\frac{1}{1-\alpha _{k+1}}-\frac{1}{1-\alpha _{k}}\le c+\varepsilon .\end{aligned}$$

By summing the above inequality, we obtain \(\frac{1}{1-\alpha _{k}}\le \frac{1}{1-\alpha _{k_0}}+ (c+\varepsilon )(k-k_0)\) for every \(k\ge k_0\). Setting \(d=1/(1-\alpha _{k_0})\), we deduce immediately that \(1-\alpha _{k}\ge 1/(d+ (c+\varepsilon )(k-k_0))\), thus implying that \(\sum _{k=1}^{+\infty }(1-\alpha _k)=+\infty \). \(\square \)
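The asymptotic equivalent in item \(\mathrm{{(i)}}\) can be observed numerically by truncating the series (15). The sketch below reuses the illustrative choice \(\alpha _k=k/(k+3)\), for which \(c=1/3\) and, in this particular case, \(t_{k+1}=(k+3)/2\) exactly.

```python
def t_trunc(i, k, alpha):
    """Truncated sum t_{i,k} of (13), approximating t_i of (15) for large k."""
    total, prod = 1.0, 1.0
    for l in range(i, k):
        prod *= alpha(l)
        total += prod
    return total

alpha = lambda k: k / (k + 3.0)      # illustrative choice; c = 1/3 in (29)
c = 1.0 / 3.0
for k in [10, 100, 1000]:
    approx = 1.0 / ((1.0 - c) * (1.0 - alpha(k)))
    print(k, t_trunc(k + 1, k + 100000, alpha), approx)   # agree up to series truncation
```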

Let us now analyze the condition \((K_1)\)

$$\begin{aligned} (K_1)\qquad \alpha _k\, t_{k+1}\left( 1+\alpha _k+\left[ \frac{2-\rho _k}{\rho _k}(1-\alpha _k)-\frac{2-\rho _{k-1}}{\rho _{k-1}}(1-\alpha _{k-1})\right] _+\right) \le (1-\varepsilon )\,\frac{2-\rho _{k-1}}{\rho _{k-1}}(1-\alpha _{k-1}). \end{aligned}$$

Following an argument parallel to the continuous case, see [5, Proposition 3.2], let us introduce the following condition:

$$\begin{aligned}&\text{ There } \text{ exists } \, \, c'\in ]-1, +1[ \, \text{ such } \text{ that } \nonumber \\&\quad \lim _{k\rightarrow +\infty } \frac{\frac{2-\rho _{k}}{\rho _{k}}(1-\alpha _{k}) -\frac{2-\rho _{k-1}}{\rho _{k-1}}(1-\alpha _{k-1})}{\frac{2-\rho _{k-1}}{\rho _{k-1}} (1-\alpha _{k -1})^2 } = c'. \end{aligned}$$
(30)

Proposition 3.2

Assume that conditions (29) and (30) hold, with \(|c'| < 1-c\). Then \((K_1)\) is satisfied if

$$\begin{aligned} \liminf _{k\rightarrow +\infty } \frac{2-\rho _{k}}{\rho _{k}}(1-\alpha _{k})^2 > \limsup _{k\rightarrow +\infty }\frac{\alpha _k(1+ \alpha _k)}{1-c-|c'|}. \end{aligned}$$
(31)

Proof

Setting \(\theta _{k} =\frac{2-\rho _{k}}{\rho _{k}}(1-\alpha _{k}) \), let us rewrite \((K_1)\) as a discrete differential inequality, as follows

$$\begin{aligned} (1-\varepsilon ) \theta _{k-1} \ge \alpha _k t_{k+1}\left( 1+\alpha _k+\left[ \theta _{k} - \theta _{k-1}\right] _+\right) . \end{aligned}$$
(32)

According to Proposition 3.1 we have \(t_{k+1}\sim t_{k}\sim \frac{1}{(1-c)(1-\alpha _{k-1})}\quad \text{ as } k\rightarrow +\infty .\) Consequently, (32) can be equivalently formulated as

$$\begin{aligned} (1-\varepsilon )(1-c) \frac{2-\rho _{k-1}}{\rho _{k-1}}(1-\alpha _{k-1})^2 \ge (1+ o(1))\,\alpha _k\left( 1+\alpha _k+\left[ \theta _{k} - \theta _{k-1}\right] _+\right) . \end{aligned}$$

On the other hand, condition (30) gives

$$\begin{aligned} | \theta _{k} - \theta _{k-1}| = |c'|\frac{2-\rho _{k-1}}{\rho _{k-1}}(1-\alpha _{k-1})^2 + o\left( \frac{2-\rho _{k-1}}{\rho _{k-1}}(1-\alpha _{k-1})^2 \right) . \end{aligned}$$

Setting \(R_k := \frac{2-\rho _{k}}{\rho _{k}}(1-\alpha _{k})^2 \), we deduce that \((K_1)\) is implied by the following condition

$$\begin{aligned} (1-\varepsilon )(1-c)R_{k-1} \ge (1+ o(1))\alpha _k\left( 1+\alpha _k + |c'|R_{k-1} + o(R_{k-1}) \right) . \end{aligned}$$

Rearranging the terms we obtain

$$\begin{aligned} \left[ (1-\varepsilon )(1-c) - \alpha _k (|c'|+ o(1))\right] R_{k-1} \ge (1+ o(1))\alpha _k\left( 1+\alpha _k \right) . \end{aligned}$$

Since \(\alpha _k \le 1\), the above inequality will be satisfied if

$$\begin{aligned} \left[ (1-\varepsilon )(1-c) - (|c'|+ o(1)) \right] R_{k-1} \ge (1 + o(1))\alpha _k\left( 1+\alpha _k \right) . \end{aligned}$$

This will be fulfilled if

$$\begin{aligned} \liminf _{k\rightarrow +\infty } \frac{2-\rho _{k-1}}{\rho _{k-1}}(1-\alpha _{k-1})^2 > \limsup _{k\rightarrow +\infty }\frac{\alpha _k(1+ \alpha _k) }{(1-\varepsilon )(1-c)-|c'| }. \end{aligned}$$
(33)

The right-hand side of (33) is a continuous, increasing function of \(\varepsilon \). Consequently, the above strict inequality holds for some \(\varepsilon >0\) if and only if it holds for \(\varepsilon =0\), which gives the claim. \(\square \)

The next proposition brings to light a set of conditions which guarantee that condition \((K_1)\) is satisfied.

Proposition 3.3

Suppose that \(\alpha _k\in [0,1[\) and \(\rho _k\in ]0,2[\) for every \(k\ge 1\). Let us assume that there exist \(\overline{\rho }\in [0,2[\), \(c \in [0,1[ \) and \(c'' \in {\mathbb {R}}\), with \(-(1-\overline{\rho }/2)<c''\le -(1-\overline{\rho }/2) c\) such that

$$\begin{aligned}&\lim _{k\rightarrow +\infty }\rho _k=\overline{\rho }; \end{aligned}$$
(34)
$$\begin{aligned}&\lim _{k\rightarrow +\infty }\left( \frac{1}{1-\alpha _{k+1}}-\frac{1}{1-\alpha _{k}}\right) =c ; \end{aligned}$$
(35)
$$\begin{aligned}&\lim _{k\rightarrow +\infty } \frac{\rho _{k+1}-\rho _k}{\rho _{k+1}(1-\alpha _{k})} = c'' ; \end{aligned}$$
(36)
$$\begin{aligned}&\liminf _{k\rightarrow +\infty } \frac{(1-\alpha _{k})^2}{\rho _{k}} > \limsup _{k\rightarrow +\infty }\frac{\alpha _k (1+ \alpha _k) }{2-\overline{\rho }+2c''}. \end{aligned}$$
(37)

Then condition \((K_1)\) is satisfied.

Proof

Let us check that the conditions (30) and (31) of Proposition 3.2 are satisfied. First observe that

$$\begin{aligned}&\frac{2-\rho _{k}}{\rho _{k}}(1-\alpha _{k})-\frac{2-\rho _{k-1}}{\rho _{k-1}}(1-\alpha _{k-1})\nonumber \\&\quad =\frac{2-\rho _{k-1}}{\rho _{k-1}}((1-\alpha _{k})-(1-\alpha _{k-1}))\nonumber \\&\qquad +\left( \frac{2-\rho _{k}}{\rho _{k}}-\frac{2-\rho _{k-1}}{\rho _{k-1}}\right) (1-\alpha _{k})\nonumber \\&\quad =\frac{2-\rho _{k-1}}{\rho _{k-1}}((1-\alpha _{k})-(1-\alpha _{k-1}))-2\,\frac{\rho _k-\rho _{k-1}}{\rho _k \rho _{k-1}} (1-\alpha _{k}). \end{aligned}$$
(38)

In view of assumption (35), we have

$$\begin{aligned} (1-\alpha _{k})-(1-\alpha _{k-1})= & {} -c(1-\alpha _{k-1})(1-\alpha _{k})+ o((1-\alpha _{k-1})(1-\alpha _{k}))\\= & {} -c(1-\alpha _{k-1})^2+ o(1-\alpha _{k-1})^2, \end{aligned}$$

since \(1-\alpha _{k-1}\sim 1-\alpha _{k}\) as \(k\rightarrow +\infty \), see Proposition 3.1\(\mathrm{{(ii)}}\). Setting \(R_k := \frac{2-\rho _{k}}{\rho _{k}}(1-\alpha _{k})^2\), this leads to

$$\begin{aligned} \frac{2-\rho _{k-1}}{\rho _{k-1}}((1-\alpha _{k})-(1-\alpha _{k-1}))=-cR_{k-1}+o(R_{k-1}) \quad \text{ as } k\rightarrow +\infty . \end{aligned}$$
(39)

On the other hand, assumption (36) yields

$$\begin{aligned} \frac{\rho _k-\rho _{k-1}}{\rho _k \rho _{k-1}} (1-\alpha _{k})= & {} \frac{c''}{\rho _{k-1}}(1-\alpha _{k-1})(1-\alpha _{k})+o\left( \frac{1}{\rho _{k-1}}(1-\alpha _{k-1})(1-\alpha _{k})\right) \nonumber \\= & {} \frac{c''}{\rho _{k-1}}(1-\alpha _{k-1})^2+o\left( \frac{1}{\rho _{k-1}}(1-\alpha _{k-1})^2\right) \nonumber \\= & {} \frac{c''}{2-\overline{\rho }}R_{k-1}+o(R_{k-1}), \end{aligned}$$
(40)

where we used assumption (34) in the last equality. By combining (38), (39) and (40), we obtain

$$\begin{aligned}&\frac{2-\rho _{k}}{\rho _{k}}(1-\alpha _{k})-\frac{2-\rho _{k-1}}{\rho _{k-1}}(1-\alpha _{k-1})\\&\quad =-\left( c+\frac{2c''}{2-\overline{\rho }}\right) R_{k-1}+o(R_{k-1})\quad \text{ as } k\rightarrow +\infty . \end{aligned}$$

It ensues that condition (30) is satisfied with \(c'=-\left( c+\frac{c''}{1-\overline{\rho }/2}\right) \). Since \(c''\le -(1-\overline{\rho }/2) c\) by assumption, we have \(c'\ge 0\). This implies that

$$\begin{aligned} 1-c-|c'|=1-c-c'=1+\frac{c''}{1-\overline{\rho }/2}. \end{aligned}$$
(41)

Using that \(-(1-\overline{\rho }/2)<c''\) by assumption, we deduce that the above quantity is positive, hence \(|c'|<1-c\).

Let us finally check condition (31). Recalling that \(\lim _{k\rightarrow +\infty }\rho _k=\overline{\rho }\), condition (31) is equivalent to

$$\begin{aligned}(2-\overline{\rho })\liminf _{k\rightarrow +\infty } \frac{(1-\alpha _{k})^2}{\rho _{k}} > \limsup _{k\rightarrow +\infty }\frac{\alpha _k (1+ \alpha _k) }{1-c-|c'|}.\end{aligned}$$

In view of (41), the latter condition is in turn equivalent to

$$\begin{aligned}\liminf _{k\rightarrow +\infty } \frac{(1-\alpha _{k})^2}{\rho _{k}} > \limsup _{k\rightarrow +\infty }\frac{\alpha _k (1+ \alpha _k) }{2-\overline{\rho }+2c''},\end{aligned}$$

which holds true by (37). The conclusion then follows from Proposition 3.2. \(\square \)
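As an illustration, the hypotheses (34)–(37) lend themselves to a quick numerical screening of candidate sequences \((\alpha _k)\), \((\rho _k)\). The sketch below estimates \(\overline{\rho }\), \(c\) and \(c''\) at a large index and tests (37) on a finite tail; it is only a heuristic aid, under the assumption that such finite estimates are meaningful for the supplied schedules, and does not replace the analysis above.

```python
import numpy as np

# Heuristic, finite-horizon screening of hypotheses (34)-(37) of Proposition 3.3.
# The limits rho_bar, c, c'' are only *estimated* at a large index K, so this is
# an illustrative aid, not a substitute for the analysis.
def screen_prop_3_3(alpha, rho, K=10**5):
    rho_bar = rho(K)                                               # estimate of (34)
    c = 1.0/(1 - alpha(K + 1)) - 1.0/(1 - alpha(K))                # estimate of (35)
    c2 = (rho(K + 1) - rho(K)) / (rho(K + 1) * (1 - alpha(K)))     # estimate of (36)
    ks = np.arange(K // 2, K)                                      # tail used for (37)
    lhs = min((1 - alpha(k))**2 / rho(k) for k in ks)              # ~ liminf side of (37)
    rhs = max(alpha(k) * (1 + alpha(k)) for k in ks) / (2 - rho_bar + 2*c2)
    in_range = -(1 - rho_bar/2) < c2 <= -(1 - rho_bar/2) * c
    return rho_bar, c, c2, in_range and lhs > rhs

# Example: alpha_k = 1 - 3/k and rho_k = 2/k^2 (the scaling of Sect. 3.3 with
# alpha = 3, beta = 2, r = 2, for which beta < alpha(alpha - 2)).
print(screen_prop_3_3(lambda k: 1 - 3.0/k, lambda k: 2.0/k**2))
```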

3.2 Application of the main results

Combining Theorem 2.6 with Proposition 3.3, we obtain the following result.

Theorem 3.4

Under (H), assume that \({\mathrm{zer}}A\ne \emptyset \). Suppose that \(\alpha _k\in [0,1[\) and \(\rho _k\in ]0,2[\) for every \(k\ge 1\). Let us assume that there exist \(\overline{\rho }\in [0,2[\), \(c \in [0,1[ \) and \(c'' \in {\mathbb {R}}\), with \(-(1-\overline{\rho }/2)<c''\le -(1-\overline{\rho }/2) c\) such that

$$\begin{aligned}&\lim _{k\rightarrow +\infty }\rho _k=\overline{\rho };\\&\lim _{k\rightarrow +\infty }\left( \frac{1}{1-\alpha _{k+1}}-\frac{1}{1-\alpha _{k}}\right) =c ;\\&\lim _{k\rightarrow +\infty } \frac{\rho _{k+1}-\rho _k}{\rho _{k+1}(1-\alpha _{k})} = c'' ; \\&\liminf _{k\rightarrow +\infty } \frac{(1-\alpha _{k})^2}{\rho _{k}} > \limsup _{k\rightarrow +\infty }\frac{\alpha _k (1+ \alpha _k) }{2-\overline{\rho }+2c''}. \end{aligned}$$

Then for any sequence \((x_k)\) generated by (RIPA), we have

  1. (i)

    \(\displaystyle \sum \nolimits _{i=1}^{+\infty }\frac{1-\alpha _{i-1}}{\rho _{i-1}}\Vert x_i-x_{i-1}\Vert ^2<+\infty \).

  2. (ii)

    \(\displaystyle \sum \nolimits _{i=1}^{+\infty } \frac{\rho _i}{1- \alpha _i}\Vert \mu _i A_{\mu _i}(y_i)\Vert ^2<+\infty \),  and   \(\displaystyle \sum \nolimits _{i=1}^{+\infty }\frac{\rho _i}{1- \alpha _i}\Vert \mu _i A_{\mu _i}(x_i)\Vert ^2<+\infty \).

  3. (iii)

    For any \(z\in {\mathrm{zer}}A\), \(\lim _{k\rightarrow +\infty }\Vert x_k-z\Vert \) exists, and hence \((x_k)\) is bounded.

Assume moreover that \(\overline{\rho }>0\). Then the following holds

  1. (iv)

    \(\lim _{k\rightarrow +\infty }\mu _k A_{\mu _k}(y_k)=0\), and   \(\lim _{k\rightarrow +\infty }\mu _k A_{\mu _k}(x_k)=0\).

  2. (v)

    If   \(\liminf _{k\rightarrow +\infty }\mu _k>0\), then there exists \(x_\infty \in {\mathrm{zer}}A\) such that \(x_k\rightharpoonup x_\infty \) weakly in \({\mathcal {H}}\) as \(k\rightarrow +\infty \).

Proof

Proposition 3.1 shows that \((K_0)\) is satisfied and that \(t_{k+1}\sim \frac{1}{(1-c)(1-\alpha _k)}\,\text{ as } k\rightarrow +\infty .\) On the other hand, condition \((K_1)\) is fulfilled in view of Proposition 3.3. Items \(\mathrm{{(i)}}\) to \(\mathrm{(v)}\) then follow immediately from Theorem 2.6. \(\square \)

To apply Theorem 2.14, we must find suitable conditions ensuring that condition \((K_4)\) is satisfied. The following result gives an asymptotic equivalent, as \(k\rightarrow +\infty \), of the first expression appearing in condition \((K_4)\).

Proposition 3.5

Let \((\alpha _k)\) and \((\rho _k)\) be sequences such that \(\alpha _k\in [0,1[\) and \(\rho _k\in ]0,2]\) for every \(k\ge 1\). Let us assume that there exist \(\overline{\alpha }\in [0,1]\), \(c \in [0,1[ \) and \(c'' \in {\mathbb {R}}\), with \(1+c+c''\overline{\alpha }>0\) such that

$$\begin{aligned}&\lim _{k\rightarrow +\infty }\alpha _k=\overline{\alpha }; \end{aligned}$$
(42)
$$\begin{aligned}&\lim _{k\rightarrow +\infty }\left( \frac{1}{1-\alpha _{k+1}}-\frac{1}{1-\alpha _{k}}\right) =c ; \end{aligned}$$
(43)
$$\begin{aligned}&\lim _{k\rightarrow +\infty } \frac{\rho _{k+1}-\rho _k}{\rho _{k+1}(1-\alpha _{k})} = c''. \end{aligned}$$
(44)

Then the following equivalence holds true

$$\begin{aligned}\sum _{i=1}^k\left[ \left( \prod _{j=i+1}^k \alpha _j\right) \rho _i\right] \sim \frac{1}{(1+c+c''\overline{\alpha })}\frac{\rho _k}{1-\alpha _{k}}\quad \text{ as } k\rightarrow +\infty .\end{aligned}$$

Proof

Observe that for every \(i\le k\),

$$\begin{aligned}&\frac{\rho _i}{1-\alpha _i}\prod _{j=i+1}^k\alpha _j-\frac{\rho _{i-1}}{1-\alpha _{i-1}}\prod _{j=i}^{k}\alpha _j \nonumber \\&\quad =\left( \prod _{j=i+1}^k\alpha _j\right) \left[ \frac{\rho _i}{1-\alpha _i}-\frac{\rho _{i-1}\alpha _i}{1-\alpha _{i-1}}\right] \nonumber \\&\quad =\left( \prod _{j=i+1}^k\alpha _j\right) \left[ \frac{\rho _i}{1-\alpha _i}-\frac{\rho _{i}\alpha _i}{1-\alpha _{i-1}}+\frac{(\rho _i-\rho _{i-1})\alpha _i}{1-\alpha _{i-1}}\right] \nonumber \\&\quad =\left( \prod _{j=i+1}^k\alpha _j\right) \rho _i \left[ \frac{1}{1-\alpha _i}-\frac{1}{1-\alpha _{i-1}}+\frac{1-\alpha _{i}}{1-\alpha _{i-1}}+\frac{(\rho _i-\rho _{i-1})\alpha _i}{\rho _i(1-\alpha _{i-1})}\right] \quad \end{aligned}$$
(45)

In view of assumption (43), we have \(\lim _{i\rightarrow +\infty }(1-\alpha _{i})/(1-\alpha _{i-1})=1\), see Proposition 3.1\(\mathrm{{(ii)}}\). By using assumptions (42), (43) and (44), we then obtain that

$$\begin{aligned} \lim _{i\rightarrow +\infty }\left[ \frac{1}{1-\alpha _i}-\frac{1}{1-\alpha _{i-1}}+\frac{1-\alpha _{i}}{1-\alpha _{i-1}}+\frac{(\rho _i-\rho _{i-1})\alpha _i}{\rho _i(1-\alpha _{i-1})}\right] =1+c+c''\overline{\alpha }. \end{aligned}$$
(46)

Recalling that \(1+c+c''\overline{\alpha }>0\) by assumption, let us fix \(\varepsilon \in \left]0,\,1+c+c''\overline{\alpha }\right[\). We infer from (46) that there exists \(i_0\ge 1\) such that for every \(i\ge i_0\),

$$\begin{aligned} 1+c+c''\overline{\alpha }-\varepsilon\le & {} \left[ \frac{1}{1-\alpha _i}-\frac{1}{1-\alpha _{i-1}}+\frac{1-\alpha _{i}}{1-\alpha _{i-1}}+\frac{(\rho _i-\rho _{i-1})\alpha _i}{\rho _i(1-\alpha _{i-1})}\right] \\\le & {} 1+c+c''\overline{\alpha }+\varepsilon . \end{aligned}$$

In view of (45), this implies that for every \(i\ge i_0\) and \(k\ge i\),

$$\begin{aligned}&(1+c+c''\overline{\alpha }-\varepsilon )\left( \prod _{j=i+1}^k\alpha _j\right) \rho _i\le \frac{\rho _i}{1-\alpha _i}\prod _{j=i+1}^k\alpha _j-\frac{\rho _{i-1}}{1-\alpha _{i-1}}\prod _{j=i}^{k}\alpha _j \nonumber \\&\quad \le (1+c+c''\overline{\alpha }+\varepsilon )\left( \prod _{j=i+1}^k\alpha _j\right) \rho _i. \end{aligned}$$
(47)

Let us sum the above inequalities from \(i=i_0\) to k. We find

$$\begin{aligned}&(1+c+c''\overline{\alpha }-\varepsilon )\sum _{i=i_0}^k\left[ \left( \prod _{j=i+1}^k\alpha _j\right) \rho _i\right] \le \frac{\rho _k}{1-\alpha _k}-\frac{\rho _{i_0-1}}{1-\alpha _{i_0-1}}\prod _{j=i_0}^{k}\alpha _j \\&\quad \le (1+c+c''\overline{\alpha }+\varepsilon )\sum _{i=i_0}^k\left[ \left( \prod _{j=i+1}^k\alpha _j\right) \rho _i\right] . \end{aligned}$$

It ensues that

$$\begin{aligned}&(1+c+c''\overline{\alpha })\sum _{i=i_0}^k\left[ \left( \prod _{j=i+1}^k\alpha _j\right) \rho _i\right] \sim \frac{\rho _k}{1-\alpha _k} \\&\quad -\frac{\rho _{i_0-1}}{1-\alpha _{i_0-1}}\prod _{j=i_0}^{k}\alpha _j \quad \text{ as } k\rightarrow +\infty . \end{aligned}$$

It remains now to prove that \(\prod _{j=i_0}^{k}\alpha _j=o\left( \frac{\rho _k}{1-\alpha _k} \right) \) as \(k\rightarrow +\infty \). If there exists \(k_0\ge i_0\) such that \(\alpha _{k_0}=0\), then the sequence \(\left( \prod _{j=i_0}^{k}\alpha _j\right) \) is stationary and equal to 0 for \(k\ge k_0\), and the claim is immediate. Hence, without loss of generality, we can assume that \(\alpha _k>0\) for every \(k\ge i_0\). Let us come back to the left inequality of (47), and divide each side by \(\prod _{j=i_0}^{k}\alpha _j\). We find

$$\begin{aligned} (1+c+c''\overline{\alpha }-\varepsilon )\frac{\rho _i}{\prod \nolimits _{j=i_0}^i\alpha _j}\le \frac{\rho _i}{1-\alpha _i}\frac{1}{\prod \nolimits _{j=i_0}^i\alpha _j}-\frac{\rho _{i-1}}{1-\alpha _{i-1}}\frac{1}{\prod \nolimits _{j=i_0}^{i-1}\alpha _j}. \end{aligned}$$
(48)

Since \(1+c+c''\overline{\alpha }>\varepsilon \), we infer that the sequence \(\left( \frac{\rho _i}{1-\alpha _i}\frac{1}{\prod _{j=i_0}^i\alpha _j} \right) \) is increasing. This implies that for every \(i\ge i_0\),

$$\begin{aligned}\frac{\rho _i}{1-\alpha _i}\frac{1}{\prod \nolimits _{j=i_0}^i\alpha _j} \ge \frac{\rho _{i_0-1}}{1-\alpha _{i_0-1}}.\end{aligned}$$

In view of (48), we deduce that

$$\begin{aligned} (1+c+c''\overline{\alpha }-\varepsilon )\frac{\rho _{i_0-1}}{1-\alpha _{i_0-1}}(1-\alpha _i) \le \frac{\rho _i}{1-\alpha _i}\frac{1}{\prod \nolimits _{j=i_0}^i\alpha _j}-\frac{\rho _{i-1}}{1-\alpha _{i-1}}\frac{1}{\prod \nolimits _{j=i_0}^{i-1}\alpha _j}. \end{aligned}$$

By summing the above inequality from \(i=i_0\) to k, we obtain

$$\begin{aligned} (1+c+c''\overline{\alpha }-\varepsilon )\frac{\rho _{i_0-1}}{1-\alpha _{i_0-1}}\sum _{i=i_0}^k(1-\alpha _i) \le \frac{\rho _k}{1-\alpha _k}\frac{1}{\prod \nolimits _{j=i_0}^k\alpha _j}-\frac{\rho _{i_0-1}}{1-\alpha _{i_0-1}}.\end{aligned}$$

Using that \(\sum _{i=1}^{+\infty }(1-\alpha _i)=+\infty \) by Proposition 3.1\(\mathrm{{(iii)}}\), this entails that

$$\begin{aligned}\lim _{k\rightarrow +\infty }\frac{\rho _k}{1-\alpha _k}\frac{1}{\prod \nolimits _{j=i_0}^k\alpha _j}=+\infty .\end{aligned}$$

This shows that \(\prod _{j=i_0}^{k}\alpha _j=o\left( \frac{\rho _k}{1-\alpha _k} \right) \) as \(k\rightarrow +\infty \), which completes the proof. \(\square \)
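The equivalence of Proposition 3.5 can be visualized numerically by exploiting the recursion \(S_k=\alpha _k S_{k-1}+\rho _k\) satisfied by \(S_k=\sum _{i=1}^k\big (\prod _{j=i+1}^k \alpha _j\big )\rho _i\). The sketch below does this for the sample schedule \(\alpha _k=1-\alpha /k\), \(\rho _k=\beta /k^2\) (for which \(\overline{\alpha }=1\), \(c=1/\alpha \) and \(c''=-2/\alpha \), see Sect. 3.3); the values \(\alpha =3\), \(\beta =2\) are arbitrary.

```python
# Numerical illustration of Proposition 3.5 for alpha_k = 1 - alpha/k, rho_k = beta/k^2
# (a sample schedule with abar = 1, c = 1/alpha, c'' = -2/alpha); alpha, beta arbitrary.
alpha, beta = 3.0, 2.0
c, c2, abar = 1.0/alpha, -2.0/alpha, 1.0

S = 0.0                                   # S_k = sum_{i<=k} (prod_{j=i+1}^k alpha_j) rho_i
for k in range(1, 10**5 + 1):
    a_k = max(0.0, 1.0 - alpha/k)         # clamp the first few terms into [0,1[
    S = a_k * S + beta / k**2             # recursion S_k = alpha_k S_{k-1} + rho_k
    if k in (10**3, 10**4, 10**5):
        predicted = (beta / k**2) / ((1.0 + c + c2*abar) * (alpha / k))
        print(k, S / predicted)           # ratio tends to 1, as predicted
```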

Combining Theorem 2.14 with Proposition 3.5, we obtain the following result.

Theorem 3.6

Under (H), assume that \({\mathrm{zer}}A\ne \emptyset \). Suppose that the sequences \((\alpha _k)\) and \((\rho _k)\) satisfy \(\alpha _k\in [0,1[\) and \(\rho _k\in ]0,2[\) for every \(k\ge 1\). Let us assume that there exist \(\overline{\alpha }\in [0,1]\), \(\overline{\rho }\in [0,2[\), \(c \in [0,1[ \) and \(c'' \in {\mathbb {R}}\), with \(-(1-\overline{\rho }/2)<c''\le -(1-\overline{\rho }/2) c\) such that

$$\begin{aligned}&\lim _{k\rightarrow +\infty }\alpha _k=\overline{\alpha }; \end{aligned}$$
(49)
$$\begin{aligned}&\lim _{k\rightarrow +\infty }\rho _k=\overline{\rho }; \end{aligned}$$
(50)
$$\begin{aligned}&\lim _{k\rightarrow +\infty }\left( \frac{1}{1-\alpha _{k+1}}-\frac{1}{1-\alpha _{k}}\right) =c ; \end{aligned}$$
(51)
$$\begin{aligned}&\lim _{k\rightarrow +\infty } \frac{\rho _{k+1}-\rho _k}{\rho _{k+1}(1-\alpha _{k})} = c''; \end{aligned}$$
(52)
$$\begin{aligned}&\liminf _{k\rightarrow +\infty } \frac{(1-\alpha _{k})^2}{\rho _{k}} > \frac{\overline{\alpha } (1+ \overline{\alpha }) }{2-\overline{\rho }+2c''}. \end{aligned}$$
(53)

Then for any sequence \((x_k)\) generated by (RIPA), we have

  1. (i)

    \(\Vert x_{k+1}-x_k\Vert ={\mathcal {O}}\left( \frac{\rho _k}{1-\alpha _k}\right) \quad \text{ as } k\rightarrow +\infty .\)

Assume additionally that \(\frac{|\mu _{k+1}-\mu _k|}{\mu _{k+1}}={\mathcal {O}}\left( \frac{\rho _k}{1-\alpha _k}\right) \)  as \(k\rightarrow +\infty \), together with  \(\displaystyle \sum \nolimits _{i=1}^{+\infty }\frac{\rho _i}{1-\alpha _i}=+\infty .\)

Then the following holds

  1. (ii)

    \(\lim _{k\rightarrow +\infty }\mu _k A_{\mu _k}(x_k)=0\). If  \(\liminf _{k\rightarrow +\infty }\mu _k>0\), then there exists \(x_\infty \in ~{\mathrm{zer}}A\) such that \(x_k\rightharpoonup x_\infty \) weakly in \({\mathcal {H}}\) as \(k\rightarrow +\infty \).

Finally assume that \(\displaystyle \sum \nolimits _{i=1}^{+\infty }\frac{\rho _i}{1-\alpha _i}<+\infty \). Then we obtain

  1. (iii)

    \(\displaystyle \sum \nolimits _{i=1}^{+\infty }\Vert x_i-x_{i-1}\Vert <+\infty \), and hence the sequence \((x_k)\) converges strongly toward some \(x_\infty \in {\mathcal {H}}\).

Proof

Let us check that the assumptions of Theorem 2.14 are satisfied. Condition \((K_0)\) is fulfilled owing to assumption (51) and Proposition 3.1\(\mathrm{{(i)}}\). Assumptions (49)–(50)–(51)–(52)–(53) ensure that condition \((K_1)\) is satisfied, see Proposition 3.3. Since \(\overline{\rho }\in [0,2[\), condition \((K_2)\) holds true in view of assumption (50).

\(\mathrm{{(i)}}\) Observe that

$$\begin{aligned} 1+c+c''\overline{\alpha }\ge & {} 1+c+c'' \quad \text{ since }~ \overline{\alpha }\le 1~ \text{ and }~ c''\le 0,\\> & {} c+\overline{\rho }/2 \qquad \text{ because }~ c''>-(1-\overline{\rho }/2), \end{aligned}$$

hence \(1+c+c''\overline{\alpha }>c+\overline{\rho }/2\ge 0\), since \(c\ge 0\) and \(\overline{\rho }\ge 0\). Proposition 3.5 then shows that

$$\begin{aligned} \sum _{i=1}^k\left[ \left( \prod _{j=i+1}^k \alpha _j\right) \rho _i\right] \sim \frac{1}{(1+c+c''\overline{\alpha })}\frac{\rho _k}{1-\alpha _{k}}\quad \text{ as } k\rightarrow +\infty . \end{aligned}$$
(54)

By combining this equivalence with Theorem 2.14\(\mathrm{{(i)}}\), we obtain that \(\Vert x_{k+1}-x_k\Vert ={\mathcal {O}}\left( \frac{\rho _k}{1-\alpha _k}\right) \, \text{ as } k\rightarrow ~+\infty .\)

\(\mathrm{{(ii)}}\)\(\mathrm{{(iii)}}\) In view of (54) and the equivalence \(t_{k+1}\sim \frac{1}{1-c}\frac{1}{1-\alpha _{k}}\, \text{ as } k\rightarrow +\infty \), we immediately see that the first condition of \((K_4)\) is satisfied. The second condition of \((K_4)\) is guaranteed by the assumption \(\frac{|\mu _{k+1}-\mu _k|}{\mu _{k+1}}={\mathcal {O}}\left( \frac{\rho _k}{1-\alpha _k}\right) \)  as \(k\rightarrow +\infty \). From assumption (52) and \(\alpha _k\in [0,1[\), we get

$$\begin{aligned}\rho _{k+1}-\rho _k={\mathcal {O}}(\rho _{k+1}) \text{ as } k\rightarrow +\infty .\end{aligned}$$

It ensues that \(\rho _k={\mathcal {O}}(\rho _{k+1}) \text{ as } k\rightarrow +\infty \). Recalling from Proposition 3.1\(\mathrm{{(ii)}}\) that \(t_{k+1}\sim t_{k+2} \text{ as } k\rightarrow ~+\infty \), we deduce immediately that the third condition of \((K_4)\) is satisfied. Finally, condition \((K_5)\) is fulfilled owing to the assumption \(\sum _{i=1}^{+\infty }\frac{\rho _i}{1-\alpha _i}=+\infty .\) Points \(\mathrm{{(ii)}}\) and \(\mathrm{{(iii)}}\) then follow from the corresponding points of Theorem 2.14. \(\square \)

3.3 Some particular cases

Let us now particularize our results to the case \(\alpha _k=1-\alpha /k^q\) and \(\rho _k=\beta /k^r\), for some \(\alpha \), \(\beta >0\), \(q\in ]0,1[\) and \(r>0\).

Corollary 3.7

Under (H), assume that \({\mathrm{zer}}A\ne \emptyset \). Suppose that \((q,r)\in ]0,1[\times {\mathbb {R}}_+^*\) is such that \(r\ge 2q\), and that \((\alpha , \beta )\in {\mathbb {R}}_+^*\times {\mathbb {R}}_+^*\) satisfies \(\alpha ^2/\beta >1\) if \(r=2q\) (no condition if \(r>2q\)). Assume that \(\alpha _k=1-\alpha /k^q\) and \(\rho _k=\beta /k^r\) for every \(k\ge 1\). Then for any sequence \((x_k)\) generated by (RIPA), we have

  1. (i)

    \(\displaystyle \sum \nolimits _{i=1}^{+\infty }i^{r-q}\Vert x_i-x_{i-1}\Vert ^2<+\infty \).

  2. (ii)

    \(\displaystyle \sum \nolimits _{i=1}^{+\infty }\frac{1}{i^{r-q}}\Vert \mu _i A_{\mu _i}(y_i)\Vert ^2<+\infty \),  and   \(\displaystyle \sum \nolimits _{i=1}^{+\infty } \frac{1}{i^{r-q}}\Vert \mu _i A_{\mu _i}(x_i)\Vert ^2<+\infty \).

  3. (iii)

    For any \(z\in {\mathrm{zer}}A\), \(\lim _{k\rightarrow +\infty }\Vert x_k-z\Vert \) exists, and hence \((x_k)\) is bounded.

  4. (iv)

    \(\Vert x_{k+1}-x_k\Vert ={\mathcal {O}}\left( \frac{1}{k^{r-q}}\right) \quad \text{ as } k\rightarrow +\infty .\)

Assume additionally that \(r\le q+1\) and that \(\frac{|\mu _{k+1}-\mu _k|}{\mu _{k+1}}={\mathcal {O}}\left( \frac{1}{k^{r-q}}\right) \)  as \(k\rightarrow +\infty \). Then the following holds

  1. (v)

    \(\lim _{k\rightarrow +\infty }\mu _k A_{\mu _k}(x_k)=0\).

  2. (vi)

    If  \(\liminf _{k\rightarrow +\infty }\mu _k>0\), then there exists \(x_\infty \in ~{\mathrm{zer}}A\) such that \(x_k\rightharpoonup x_\infty \) weakly in \({\mathcal {H}}\) as \(k\rightarrow +\infty \).

Finally assume that \(r>q+1\). Then we obtain

  1. (vii)

    \(\displaystyle \sum \nolimits _{i=1}^{+\infty }\Vert x_i-x_{i-1}\Vert <+\infty \), and hence the sequence \((x_k)\) converges strongly toward some \(x_\infty \in {\mathcal {H}}\).

Proof

We first check that the assumptions (49), (50), (51), (52) and (53) are fulfilled. Assumptions (49)–(50) are clearly satisfied, with \(\overline{\alpha }=1\) and \(\overline{\rho }=0\) respectively. Now observe that

$$\begin{aligned}\frac{1}{1-\alpha _{k+1}}-\frac{1}{1-\alpha _{k}}=\frac{1}{\alpha }((k+1)^q-k^q)\sim \frac{q}{\alpha }k^{q-1}\rightarrow 0 \quad \text{ as } k\rightarrow +\infty ,\end{aligned}$$

where we have used \(q\in ]0,1[\). Hence assumption (51) is verified with \(c=0\). On the other hand, we have

$$\begin{aligned}\frac{\rho _{k+1}-\rho _k}{\rho _{k+1}(1-\alpha _k)}=\left( \frac{1}{(k+1)^r}-\frac{1}{k^r} \right) (k+1)^r \frac{k^q}{\alpha }\sim -\frac{r}{\alpha } k^{q-1}\rightarrow 0\quad \text{ as } k\rightarrow +\infty .\end{aligned}$$

This shows that assumption (52) is fulfilled with \(c''=0\). Finally, hypothesis (53) amounts to

$$\begin{aligned} \liminf _{k\rightarrow +\infty } \frac{(1-\alpha _k)^2}{\rho _k}>1. \end{aligned}$$

We have \((1-\alpha _k)^2/\rho _k=(\alpha ^2/k^{2q})(k^r/\beta )=\frac{\alpha ^2}{\beta } k^{r-2q}\), hence

$$\begin{aligned}\lim _{k\rightarrow +\infty } \frac{(1-\alpha _k)^2}{\rho _k}=\left\{ \begin{array}{lll} +\infty &{}\quad \text{ if } &{}r>2q\\ \alpha ^2/\beta &{}\quad \text{ if } &{}r=2q. \end{array} \right. \end{aligned}$$

It ensues that assumption (53) is automatically satisfied if \(r>2q\), while it is equivalent to \(\alpha ^2/\beta >1\) if \(r=2q\). Therefore the assumptions of Theorem 3.6 are satisfied, which implies that the hypotheses of Theorem 3.4 are also fulfilled. Points \(\mathrm{{(i)}}\), \(\mathrm{{(ii)}}\) and \(\mathrm{{(iii)}}\) follow immediately from Theorem 3.4. Item \(\mathrm{(iv)}\) is a consequence of Theorem 3.6\(\mathrm{{(i)}}\). Condition \(\sum _{i=1}^{+\infty }\frac{\rho _i}{1-\alpha _i}=+\infty \) amounts to \(r\le q+1\). Points \(\mathrm{(v)}\), \(\mathrm{(vi)}\) and \(\mathrm{(vii)}\) can be immediately derived from the corresponding points of Theorem 3.6. \(\square \)
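To make the parameter tuning of Corollary 3.7 concrete, here is a minimal sketch of (RIPA) applied to a toy maximally monotone operator that is not a subdifferential, namely the skew-symmetric operator \(A(x_1,x_2)=(x_2,-x_1)\) on \({\mathbb {R}}^2\), whose unique zero is the origin. The choices \(q=0.5\), \(r=1.25\), \(\alpha =\beta =1\) and \(\mu _k\equiv 1\) are arbitrary but satisfy \(2q<r\le q+1\), so Corollary 3.7\(\mathrm{{(vi)}}\) guarantees convergence of the iterates to \(0\).

```python
import numpy as np

# Sketch of (RIPA) on the skew-symmetric operator A(x1, x2) = (x2, -x1), a toy
# maximally monotone operator (not a subdifferential) with zer A = {0}.
# Parameters follow Corollary 3.7 with arbitrary choices q = 0.5, r = 1.25,
# alpha = beta = 1 and constant mu_k = 1, so that 2q < r <= q + 1.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
I2 = np.eye(2)

def resolvent(mu, y):
    # J_{mu A}(y) = (I + mu A)^{-1} y, a 2x2 linear solve for this operator
    return np.linalg.solve(I2 + mu * A, y)

q, r, al, be = 0.5, 1.25, 1.0, 1.0
x_prev = np.array([1.0, 1.0])            # x_0
x = np.array([2.0, 0.0])                 # x_1
for k in range(1, 100001):
    alpha_k = 1.0 - al / k**q            # lies in [0,1[ for every k >= 1
    rho_k = be / k**r
    y = x + alpha_k * (x - x_prev)                            # inertial extrapolation
    x_prev, x = x, (1 - rho_k) * y + rho_k * resolvent(1.0, y)

# tends to 0 as k -> +infty (Corollary 3.7(vi)); convergence is slow for this tuning
print(np.linalg.norm(x))
```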

Consider finally the case \(q=1\), thus leading to a sequence \((\alpha _k)\) of the form \(\alpha _k=1-\alpha /k\). This case was recently studied by Attouch and Peypouquet [7] in connection with Nesterov’s accelerated methods.

Corollary 3.8

Under (H), assume that \({\mathrm{zer}}A\ne \emptyset \). Let \(r\ge 2\), \(\alpha >r\) and \(\beta >0\) be such that \(\beta <\alpha (\alpha -2)\) if \(r=2\) (no condition on \(\beta \) if \(r>2\)). Assume that \(\alpha _k=1-\alpha /k\) and \(\rho _k=\beta /k^r\) for every \(k\ge 1\). Then for any sequence \((x_k)\) generated by (RIPA), we have

  1. (i)

    \(\displaystyle \sum \nolimits _{i=1}^{+\infty }i^{r-1}\Vert x_i-x_{i-1}\Vert ^2<+\infty \).

  2. (ii)

    \(\displaystyle \sum \nolimits _{i=1}^{+\infty }\frac{1}{i^{r-1}}\Vert \mu _i A_{\mu _i}(y_i)\Vert ^2<+\infty \),  and   \(\displaystyle \sum \nolimits _{i=1}^{+\infty } \frac{1}{i^{r-1}}\Vert \mu _i A_{\mu _i}(x_i)\Vert ^2<+\infty \).

  3. (iii)

    For any \(z\in {\mathrm{zer}}A\), \(\lim _{k\rightarrow +\infty }\Vert x_k-z\Vert \) exists, and hence \((x_k)\) is bounded.

  4. (iv)

    \(\Vert x_{k+1}-x_k\Vert ={\mathcal {O}}\left( \frac{1}{k^{r-1}}\right) \quad \text{ as } k\rightarrow +\infty .\)

Assume additionally that \(r=2\) and that \(\frac{|\mu _{k+1}-\mu _k|}{\mu _{k+1}}={\mathcal {O}}\left( \frac{1}{k}\right) \)  as \(k\rightarrow +\infty \). Then the following holds

  1. (v)

    \(\lim _{k\rightarrow +\infty }\mu _k A_{\mu _k}(x_k)=0\).

  2. (vi)

    If  \(\liminf _{k\rightarrow +\infty }\mu _k>0\), then there exists \(x_\infty \in ~{\mathrm{zer}}A\) such that \(x_k\rightharpoonup x_\infty \) weakly in \({\mathcal {H}}\) as \(k\rightarrow +\infty \).

Finally assume that \(r>2\). Then we obtain

  1. (vii)

    \(\displaystyle \sum \nolimits _{i=1}^{+\infty }\Vert x_i-x_{i-1}\Vert <+\infty \), and hence the sequence \((x_k)\) converges strongly toward some \(x_\infty \in {\mathcal {H}}\).

Proof

Assumptions (49)–(50) are clearly satisfied, with \(\overline{\alpha }=1\) and \(\overline{\rho }=0\) respectively. Now observe that

$$\begin{aligned}\frac{1}{1-\alpha _{k+1}}-\frac{1}{1-\alpha _{k}}=\frac{1}{\alpha }(k+1)-\frac{1}{\alpha }k=\frac{1}{\alpha },\end{aligned}$$

hence assumption (51) is verified with \(c=\frac{1}{\alpha }\). On the other hand, we have

$$\begin{aligned}\frac{\rho _{k+1}-\rho _k}{\rho _{k+1}(1-\alpha _k)}=\left( \frac{1}{(k+1)^r}-\frac{1}{k^r} \right) (k+1)^r \frac{k}{\alpha }\rightarrow -\frac{r}{\alpha }\quad \text{ as } k\rightarrow +\infty .\end{aligned}$$

This shows that assumption (52) is fulfilled with \(c''=-\frac{r}{\alpha }\). The hypothesis \(-(1-\overline{\rho }/2)<c''\le -(1-\overline{\rho }/2)c\) amounts to \(-1<-\frac{r}{\alpha }\le -\frac{1}{\alpha }\), which is in turn equivalent to \(1\le r<\alpha \). This holds true in view of the assumptions of Corollary 3.8. Finally, hypothesis (53) can be rewritten as

$$\begin{aligned} \liminf _{k\rightarrow +\infty } \frac{(1-\alpha _k)^2}{\rho _k}>\frac{1}{1-r/\alpha }=\frac{\alpha }{\alpha -r}. \end{aligned}$$

We have \((1-\alpha _k)^2/\rho _k=(\alpha ^2/k^{2})(k^r/\beta )=\frac{\alpha ^2}{\beta } k^{r-2}\), hence

$$\begin{aligned}\lim _{k\rightarrow +\infty } \frac{(1-\alpha _k)^2}{\rho _k}=\left\{ \begin{array}{lll} +\infty &{} \quad \text{ if } &{}r>2\\ \alpha ^2/\beta &{} \quad \text{ if } &{}r=2. \end{array} \right. \end{aligned}$$

It ensues that assumption (53) is automatically satisfied if \(r>2\), while it is equivalent to \(\alpha (\alpha -2)>\beta \) if \(r=2\). Points \(\mathrm{{(i)}}\), \(\mathrm{{(ii)}}\) and \(\mathrm{{(iii)}}\) follow immediately from Theorem 3.4. Item \(\mathrm{(iv)}\) is a consequence of Theorem 3.6\(\mathrm{{(i)}}\). Condition \(\sum _{i=1}^{+\infty }\frac{\rho _i}{1-\alpha _i}=+\infty \) amounts to \(r\le 2\), which boils down to \(r=2\) since \(r\ge 2\) by assumption. Points \(\mathrm{(v)}\), \(\mathrm{(vi)}\) and \(\mathrm{(vii)}\) can be immediately derived from the corresponding points of Theorem 3.6. \(\square \)

The case \(r=2\) corresponds to the situation studied by Attouch and Peypouquet [7]. More precisely, they considered the case

$$\begin{aligned}\alpha _k = 1- \frac{\alpha }{k}, \quad \rho _k = \frac{s}{\lambda _k +s}\quad \text{ and } \quad \mu _k = \lambda _k + s,\end{aligned}$$

where \(\alpha \), \(s>0\) and \(\lambda _k=(1+ \varepsilon )\frac{s}{\alpha ^2}k^2\), for some \(\varepsilon >0\). Let us recall their result, which can be obtained as a direct consequence of Theorems 3.4 and 3.6. The details are left to the reader.

Theorem 3.9

(Attouch–Peypouquet [7]) Let \(A: \mathcal {H} \rightarrow 2^{\mathcal {H}}\) be a maximally monotone operator such that \({\mathrm{zer}}A \ne \emptyset \). Let \((x_k)\) be a sequence generated by the Regularized Inertial Proximal Algorithm

$$\begin{aligned} \text{(RIPA) }_{\alpha , s} \qquad \left\{ \begin{array}{lll} y_k&{}= &{} \displaystyle {x_{k} + \left( 1- \frac{\alpha }{k}\right) ( x_{k} - x_{k-1})} \\ x_{k+1} &{} = &{} \displaystyle {\frac{\lambda _k}{\lambda _k +s}y_k + \frac{s}{\lambda _k +s}J_{(\lambda _k +s) A}\left( y_k \right) }. \end{array}\right. \end{aligned}$$

Suppose that \(\alpha >2\), \(s>0\), \(\varepsilon > \frac{2}{\alpha -2}\), and \( \lambda _k = (1+ \varepsilon )\frac{s}{\alpha ^2}k^2\) for all \(k \ge 1\). Then,

  1. (i)

    \(\Vert x_{k+1} - x_{k} \Vert = \mathcal {O} (\frac{1}{k})\) as \( k\rightarrow +\infty \), and   \(\sum _{k=1}^{+\infty } k \Vert x_{k} - x_{k-1} \Vert ^2 < +\infty \).

  2. (ii)

    There exists \(x_\infty \in {\mathrm{zer}}A\) such that \(x_k\rightharpoonup x_\infty \) weakly in \({\mathcal {H}}\) as \(k\rightarrow +\infty \).

  3. (iii)

    The sequence \((y_k)\) converges weakly in \({\mathcal {H}}\) to \(x_\infty \), as \(k\rightarrow +\infty \).
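For illustration, the following sketch evaluates the parameter schedules of \(\text{(RIPA)}_{\alpha , s}\) and checks that \(k^2\rho _k\) approaches \(\beta =\alpha ^2/(1+\varepsilon )\); a direct computation also shows that \(\beta <\alpha (\alpha -2)\) amounts to \(\varepsilon >\frac{2}{\alpha -2}\), consistently with the case \(r=2\) of Corollary 3.8. The values \(\alpha =4\), \(s=1\), \(\varepsilon =1.5\) are arbitrary choices satisfying the assumptions of Theorem 3.9.

```python
# Parameter schedules of (RIPA)_{alpha,s} from Theorem 3.9; alpha = 4, s = 1,
# eps = 1.5 are arbitrary choices with eps > 2/(alpha - 2).
alpha, s, eps = 4.0, 1.0, 1.5

def schedules(k):
    lam = (1 + eps) * s / alpha**2 * k**2    # lambda_k
    return lam, s / (lam + s), lam + s       # (lambda_k, rho_k, mu_k)

beta = alpha**2 / (1 + eps)                  # limit of k^2 * rho_k
for k in (10, 100, 1000):
    lam, rho, mu = schedules(k)
    print(k, k**2 * rho, beta)               # k^2 * rho_k approaches beta
print(beta < alpha * (alpha - 2))            # True: the constraint of Corollary 3.8 for r = 2
```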

The following table gives a synthetic view of some of the situations studied previously (the large number of cases does not allow us to include all of them). Each column gives a joint tuning of the parameters \(\alpha _k\), \(\rho _k\), and \(\mu _k\) which ensures the convergence of the iterates generated by (RIPA). For ease of reading, we recall the definition of (RIPA)

$$\begin{aligned} \text{(RIPA) } \quad \left\{ \begin{array}{l} y_k=x_k+\alpha _k(x_k-x_{k-1})\\ x_{k+1}=(1-\rho _k)y_k + \rho _k J_{\mu _k A}(y_k). \end{array} \right. \end{aligned}$$

From left to right, the table is ordered according to the decreasing values of \(\alpha _k\).

| \(\alpha _k\) | \(\alpha _k=1-\frac{\alpha }{k}\), \(\alpha >2\) | \(\alpha _k=1-\frac{\alpha }{k^q}\), \(q \in ]0,1[, \, \alpha >0\) | \(\alpha _k=1-\frac{\alpha }{k^q}\), \(q \in ]0,1[, \, \alpha >0\) | \(\alpha _k \equiv \alpha \in [0,1[\) |
| \(\rho _k\) | \(\rho _k = \frac{\beta }{k^2}\), \(\beta < \alpha (\alpha -2)\) | \(\rho _k = \frac{\beta }{k^{2q}}\), \(\beta < \alpha ^2\) | \(\rho _k = \frac{\beta }{k^r}\), \(2q < r\le q+1, \, \beta >0\) | \(\rho _k \equiv \rho < \frac{2 (1-\alpha )^2}{2\alpha ^2 -\alpha +1}\) |
| \(\mu _k\) | \(\frac{|\mu _{k+1}-\mu _k|}{\mu _{k+1}}={\mathcal {O}}\left( \frac{1}{k}\right) \), \(\liminf \mu _k >0\) | \(\frac{|\mu _{k+1}-\mu _k|}{\mu _{k+1}}={\mathcal {O}}\left( \frac{1}{k^q}\right) \), \(\liminf \mu _k >0\) | \(\frac{|\mu _{k+1}-\mu _k|}{\mu _{k+1}}={\mathcal {O}}\left( \frac{1}{k^{r-q}}\right) \), \(\liminf \mu _k >0\) | \(\liminf \mu _k >0\) |
As noted before, one can observe the balance between the inertial effect and the relaxation effect: as \(\alpha _k\) gets closer to one, the relaxation parameter \(\rho _k\) gets closer to zero.

4 Ergodic convergence results

4.1 Ergodic variant of the Opial lemma

An ergodic version of the Opial lemma was derived by Passty [26] in the case of the averaging process defined by

$$\begin{aligned} \widehat{x}_k=\frac{1}{\sum \nolimits _{i=1}^k s_i}\sum _{i=1}^k s_i x_i,\end{aligned}$$

where \((s_k)\) is a sequence of positive steps. In order to deal with a more general averaging process, let us consider a double sequence \((\tau _{i,k})_{i,k\ge 1}\) of nonnegative numbers satisfying the following assumptions

$$\begin{aligned}&\sum _{i=1}^{+\infty }\tau _{i,k}=1\quad \text{ for } \text{ every } k\ge 1 \end{aligned}$$
(55)
$$\begin{aligned}&\lim _{k\rightarrow +\infty }\tau _{i,k}=0 \quad \text{ for } \text{ every } i\ge 1. \end{aligned}$$
(56)

To each bounded sequence \((x_k)\) of \({\mathcal {H}}\), we associate the averaged sequence \((\widehat{x}_k)\) by

$$\begin{aligned} \widehat{x}_k=\sum _{i=1}^{+\infty }\tau _{i,k}x_i. \end{aligned}$$
(57)

Lemma B.2 in the appendix shows that the sequence \((\widehat{x}_k)\) is well-defined and bounded, and that convergence of \((x_k)\) implies convergence of \((\widehat{x}_k)\) toward the same limit as \(k\rightarrow +\infty \) (Cesaro property). The extension of the Opial lemma to a general averaging process satisfying (55) and (56) is given hereafter. This result can be obtained as a consequence of the generalized Opial lemma established by Brézis–Browder, see [14, Lemma 1]. For the convenience of the reader, we give an independent and self-contained proof.
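As a simple numerical illustration of the averaging process (57), one may take Passty-type weights \(\tau _{i,k}=s_i/\sum _{j=1}^k s_j\), which satisfy (55), and also (56) whenever \(\sum _i s_i=+\infty \). The toy data below are arbitrary.

```python
import numpy as np

# Averaging process (57) with Passty-type weights tau_{i,k} = s_i / sum_{j<=k} s_j.
# Here s_i = 1/sqrt(i) (divergent sum, so (56) holds) and x_i = 1 + (-1)^i / i is a
# toy bounded sequence converging to 1; the averages converge to the same limit.
K = 10**4
i = np.arange(1, K + 1)
s = 1.0 / np.sqrt(i)
x = 1.0 + (-1.0)**i / i

xhat = np.cumsum(s * x) / np.cumsum(s)       # xhat_k for k = 1, ..., K
print(x[-1], xhat[-1])                       # both close to 1 (Cesaro property)
```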

Proposition 4.1

Let S be a nonempty subset of \({\mathcal {H}}\) and let \((x_k)\) be a bounded sequence of \({\mathcal {H}}\). Let \((\tau _{i,k})\) be a double sequence of nonnegative numbers satisfying (55) and (56), and let \((\widehat{x}_k)\) be the averaged sequence defined by (57). Assume that

  1. (i)

    For every \(z\in S\), \(\lim _{k\rightarrow +\infty }\Vert x_k-z\Vert \) exists;

  2. (ii)

    every weak limit point of the sequence \((\widehat{x}_k)\) belongs to S.

Then the sequence \((\widehat{x}_k)\) converges weakly as \(k\rightarrow +\infty \) toward some \(x_\infty \in S\).

Proof

From Lemma B.2\(\mathrm{{(i)}}\), the sequence \((\widehat{x}_k)\) is bounded, therefore it is enough to establish the uniqueness of weak limit points. Let \((\widehat{x}_{k_n})\) and \((\widehat{x}_{k_m})\) be two weakly converging subsequences satisfying respectively \(\widehat{x}_{k_n}\rightharpoonup \overline{x^{}}_1\) as \(n\rightarrow +\infty \) and \(\widehat{x}_{k_m}\rightharpoonup \overline{x^{}}_2\) as \(m\rightarrow +\infty \). From \(\mathrm{{(ii)}}\), the weak limit points \(\overline{x^{}}_1\) and \(\overline{x^{}}_2\) belong to S. From \(\mathrm{{(i)}}\), we deduce that \(\lim _{k\rightarrow +\infty }\Vert x_k-\overline{x^{}}_1\Vert ^2\) and \(\lim _{k\rightarrow +\infty }\Vert x_k-\overline{x^{}}_2\Vert ^2\) exist. Writing that

$$\begin{aligned}\Vert x_k-\overline{x^{}}_1\Vert ^2-\Vert x_k-\overline{x^{}}_2\Vert ^2=2\,\left\langle x_k-\frac{\overline{x^{}}_1+\overline{x^{}}_2}{2},\overline{x^{}}_2-\overline{x^{}}_1\right\rangle ,\end{aligned}$$

we infer that \(\lim _{k\rightarrow +\infty }\langle x_k,\overline{x^{}}_2-\overline{x^{}}_1\rangle \) exists. Observe that

$$\begin{aligned} \langle \widehat{x}_k,\overline{x^{}}_2-\overline{x^{}}_1\rangle= & {} \left\langle \sum _{i=1}^{+\infty }\tau _{i,k}x_i,\overline{x^{}}_2-\overline{x^{}}_1\right\rangle \\= & {} \sum _{i=1}^{+\infty }\tau _{i,k}\left\langle x_i,\overline{x^{}}_2-\overline{x^{}}_1\right\rangle . \end{aligned}$$

By applying Lemma B.2\(\mathrm{{(ii)}}\) to the real sequence \(\left( \big \langle x_k,\overline{x^{}}_2-\overline{x^{}}_1\big \rangle \right) \), we deduce that \(\lim _{k\rightarrow +\infty }\langle \widehat{x}_k,\overline{x^{}}_2-\overline{x^{}}_1\rangle \) exists. This implies that

$$\begin{aligned}\lim _{n\rightarrow +\infty }\langle \widehat{x}_{k_n},\overline{x^{}}_2-\overline{x^{}}_1\rangle =\lim _{m\rightarrow +\infty }\langle \widehat{x}_{k_m},\overline{x^{}}_2-\overline{x^{}}_1\rangle ,\end{aligned}$$

which entails that \(\langle \overline{x^{}}_1,\overline{x^{}}_2-\overline{x^{}}_1\rangle =\langle \overline{x^{}}_2,\overline{x^{}}_2-\overline{x^{}}_1\rangle .\) Therefore \(\Vert \overline{x^{}}_2-\overline{x^{}}_1\Vert ^2=0\), which ends the proof. \(\square \)

Remark 4.2

By taking \((\tau _{i,k})\) defined by

$$\begin{aligned}\tau _{i,k}=\left\{ \begin{array}{ll} 1&{}\quad \text{ if } i=k\\ 0&{}\quad \text{ if } i\ne k,\\ \end{array} \right. \end{aligned}$$

conditions (55) and (56) are trivially satisfied and we find \(\widehat{x}_k=x_k\) for every \(k\ge 1\). It ensues that the Opial lemma appears as a particular case of Proposition 4.1.

4.2 Ergodic convergence of the iterates

To each sequence \((x_k)\) generated by (RIPA), we associate a suitable averaged sequence as in (57). The weight coefficients are judiciously chosen and depend on \(\alpha _k\), \(\mu _k\) and \(\rho _k\). Under conditions \((K_0)\)\((K_1)\)\((K_2)\)\((K_3)\), we show that the averaged sequence converges weakly toward some zero of the operator A.

Theorem 4.3

Under (H), assume that \({\mathrm{zer}}A\ne \emptyset \). Suppose that \(\alpha _k\in [0,1]\) and \(\rho _k\in ]0,2]\) for every \(k\ge 1\). Under \((K_0)\), let \((t_{i,k})\) and \((t_i)\) be the sequences respectively defined by (13) and (15). Assume that conditions \((K_1)\)\((K_2)\)\((K_3)\) hold, together with

$$\begin{aligned} \sum _{i=1}^{+\infty } t_i \rho _{i-1}\mu _{i-1}=+\infty . \end{aligned}$$
(58)

Let us define the sequence \((\tau _{i,k})\) by

$$\begin{aligned} \tau _{i,k}=\frac{t_{i,k} \rho _{i-1}\mu _{i-1}}{\sum \nolimits _{i=1}^k t_{i,k} \rho _{i-1}\mu _{i-1}}. \end{aligned}$$
(59)

Then for any sequence \((x_k)\) generated by (RIPA), there exists \(x_\infty \in {\mathrm{zer}}A\) such that

$$\begin{aligned}\widehat{x}_k=\sum _{i=1}^k\tau _{i,k} x_i\rightharpoonup x_\infty \quad \text{ weakly } \text{ in }~ {\mathcal {H}}~ \text{ as }~ k\rightarrow +\infty .\end{aligned}$$

Proof

The proof relies on Proposition 4.1 applied with \(S={\mathrm{zer}}A\). Let us first check that conditions (55) and (56) are satisfied for the sequence \((\tau _{i,k})\) given by (59). Property (55) follows immediately from the definition of \((\tau _{i,k})\) (recall that \(t_{i,k}=0\) for \(i>k\)). On the other hand, observe that for every i, \(k\ge 1\),

$$\begin{aligned} \tau _{i,k}\le \frac{t_{i} \rho _{i-1}\mu _{i-1}}{\sum \nolimits _{i=1}^k t_{i,k} \rho _{i-1}\mu _{i-1}}. \end{aligned}$$
(60)

The quantity \(t_{i} \rho _{i-1}\mu _{i-1}\) is finite and independent of k. Since \(t_{i,k}\) tends increasingly toward \(t_i\) as \(k\rightarrow +\infty \), the monotone convergence theorem implies that

$$\begin{aligned} \lim _{k\rightarrow +\infty }\sum _{i=1}^k t_{i,k} \rho _{i-1}\mu _{i-1}= \lim _{k\rightarrow +\infty }\sum _{i=1}^{+\infty } t_{i,k} \rho _{i-1}\mu _{i-1}=\sum _{i=1}^{+\infty } t_{i} \rho _{i-1}\mu _{i-1}=+\infty , \end{aligned}$$
(61)

where we have used the assumption (58). We then deduce from the inequality (60) that \(\lim _{k\rightarrow +\infty }\tau _{i,k}=~0\), which establishes (56).

We now have to prove that the conditions \(\mathrm{{(i)}}\) and \(\mathrm{{(ii)}}\) of Proposition 4.1 are fulfilled. Condition \(\mathrm{{(i)}}\) is realized in view of Theorem 2.6\(\mathrm{{(iii)}}\). Let us now assume that there exist \(x_\infty \in {\mathcal {H}}\) and a sequence \((k_n)\) such that \(k_n\rightarrow +\infty \) and \(\widehat{x}_{k_n}\rightharpoonup x_\infty \) weakly in \({\mathcal {H}}\) as \(n\rightarrow +\infty \). Let us fix \((z,q)\in {\mathrm{gph}}A\) and define the sequence \((h_k)\) by \(h_k=\frac{1}{2}\Vert x_k-z\Vert ^2\). From inequality (10) of Lemma 2.2, we have

$$\begin{aligned}&h_{k+1}-h_k-\alpha _k(h_k-h_{k-1})+\rho _{k}\mu _k\left\langle x_{k+1}+\left( \frac{1}{\rho _k}-1\right) (x_{k+1}-y_k)-z,q\right\rangle \\&\quad \le \alpha _k\Vert x_k-x_{k-1}\Vert ^2, \end{aligned}$$

because the assumptions \(\alpha _k\in [0,1]\) and \(\rho _k\in ]0,2]\) imply respectively \(\frac{1}{2}(\alpha _k+\alpha _k^2)\le \alpha _k\) and \(\frac{2-\rho _k}{2\rho _k}\ge 0\). Since \(x_{k+1}=y_k-\rho _k\mu _k A_{\mu _k}(y_k)\), the above inequality can be rewritten as

$$\begin{aligned}&h_{k+1}-h_k-\alpha _k(h_k-h_{k-1})+\rho _{k}\mu _k\left\langle x_{k+1}-z-(1-\rho _k)\mu _k A_{\mu _k}(y_k),q\right\rangle \nonumber \\&\quad \le \alpha _k\Vert x_k-x_{k-1}\Vert ^2. \end{aligned}$$
(62)

Setting \(a_k=h_k-h_{k-1}\) and

$$\begin{aligned}w_k=\alpha _k\Vert x_k-x_{k-1}\Vert ^2-\rho _{k}\mu _k\left\langle x_{k+1}-z-(1-\rho _k)\mu _k A_{\mu _k}(y_k),q\right\rangle ,\end{aligned}$$

inequality (62) amounts to \(a_{k+1}\le \alpha _k a_k+ w_k.\) By applying Lemma B.1\(\mathrm{{(i)}}\), we obtain for every \(k\ge 1\),

$$\begin{aligned} h_k-h_0= & {} \sum _{i=1}^k a_i\le t_{1,k}(h_1-h_0)+\sum _{i=1}^{k-1}t_{i+1,k}w_i\\= & {} t_{1,k}(h_1-h_0)+\sum _{i=1}^{k-1}t_{i+1,k}\left[ \alpha _i\Vert x_i-x_{i-1}\Vert ^2 \right. \\&\left. -\rho _{i}\mu _i\left\langle x_{i+1}-z-(1-\rho _i)\mu _i A_{\mu _i}(y_i),q\right\rangle \right] . \end{aligned}$$

Since \(h_k\ge 0\) and \(t_{i+1,k}\le t_{i+1}\), we deduce that

$$\begin{aligned} \sum _{i=1}^{k-1}t_{i+1,k}\rho _{i}\mu _i\langle x_{i+1}-z,q\rangle\le & {} h_0+t_{1,k}(h_1-h_0)+\sum _{i=1}^{k-1}t_{i+1}\alpha _i\Vert x_i-x_{i-1}\Vert ^2\\&+\,\sum _{i=1}^{k-1}t_{i+1,k}\rho _{i}\mu _i\left\langle (1-\rho _i)\mu _i A_{\mu _i}(y_i),q\right\rangle . \end{aligned}$$

Recalling from Theorem 2.6\(\mathrm{{(i)}}\) that \(\sum _{i=1}^{+\infty }t_{i+1}\alpha _i\Vert x_i-x_{i-1}\Vert ^2<+\infty \), we infer that for every \(k\ge 1\),

$$\begin{aligned}\sum _{i=1}^{k-1}t_{i+1,k}\rho _{i}\mu _i\langle x_{i+1}-z,q\rangle \le C+\sum _{i=1}^{k-1}t_{i+1,k}\rho _{i}\mu _i\left\langle (1-\rho _i)\mu _i A_{\mu _i}(y_i),q\right\rangle ,\end{aligned}$$

where we have set   \(C:=h_0+t_1|h_1-h_0|+\sum _{i=1}^{+\infty }t_{i+1}\alpha _i\Vert x_i-x_{i-1}\Vert ^2<+\infty .\) Since \(\rho _k\in ]0,2]\), according to the Cauchy–Schwarz inequality we have that

$$\begin{aligned}|\left\langle (1-\rho _i)\mu _i A_{\mu _i}(y_i),q\right\rangle |\le \Vert \mu _i A_{\mu _i}(y_i)\Vert \Vert q\Vert . \end{aligned}$$

It ensues that

$$\begin{aligned}\sum _{i=1}^{k-1}t_{i+1,k}\rho _{i}\mu _i\langle x_{i+1}-z,q\rangle \le C+\Vert q\Vert \sum _{i=1}^{k-1}t_{i+1,k}\rho _{i}\mu _i\Vert \mu _i A_{\mu _i}(y_i)\Vert .\end{aligned}$$

By shifting the index of summation, we deduce from the above inequality that

$$\begin{aligned} \sum _{i=1}^{k}t_{i,k}\rho _{i-1}\mu _{i-1}\langle x_{i}-z,q\rangle\le & {} C+\Vert q\Vert \sum _{i=1}^{k}t_{i,k}\rho _{i-1}\mu _{i-1}\Vert \mu _{i-1} A_{\mu _{i-1}}(y_{i-1})\Vert \\&+ t_{1,k}\rho _{0}\mu _{0} \langle x_{1}-z,q\rangle -\Vert q\Vert \,t_{1,k}\rho _{0}\mu _{0}\Vert \mu _{0} A_{\mu _{0}}(y_{0})\Vert \\\le & {} C'+\Vert q\Vert \sum _{i=1}^{k}t_{i,k}\rho _{i-1}\mu _{i-1}\Vert \mu _{i-1} A_{\mu _{i-1}}(y_{i-1})\Vert , \end{aligned}$$

where we have set \(C':=C+t_{1}\rho _{0}\mu _{0} |\langle x_{1}-z,q\rangle |.\) This can be rewritten as

$$\begin{aligned}\left\langle \sum _{i=1}^{k}t_{i,k}\rho _{i-1}\mu _{i-1} (x_{i}-z),q\right\rangle \le C'+\Vert q\Vert \sum _{i=1}^{k}t_{i,k}\rho _{i-1}\mu _{i-1}\Vert \mu _{i-1} A_{\mu _{i-1}}(y_{i-1})\Vert .\end{aligned}$$

Dividing by \(\sum _{i=1}^{k}t_{i,k}\rho _{i-1}\mu _{i-1}\), we find

$$\begin{aligned} \langle \widehat{x}_k-z,q\rangle\le & {} \frac{C'}{\sum \nolimits _{i=1}^{k}t_{i,k}\rho _{i-1}\mu _{i-1}} \nonumber \\&+\,\frac{\Vert q\Vert }{\sum \nolimits _{i=1}^{k}t_{i,k}\rho _{i-1}\mu _{i-1}}\sum _{i=1}^{k}t_{i,k}\rho _{i-1}\mu _{i-1}\Vert \mu _{i-1} A_{\mu _{i-1}}(y_{i-1})\Vert . \end{aligned}$$
(63)

By Theorem 2.6\(\mathrm{(iv)}\) we have \(\lim _{k\rightarrow +\infty }\Vert \mu _{k} A_{\mu _{k}}(y_{k})\Vert =0\). From the Cesaro property, we infer that

$$\begin{aligned} \frac{1}{\sum \nolimits _{i=1}^{k}t_{i,k}\rho _{i-1}\mu _{i-1}}\sum _{i=1}^{k}t_{i,k}\rho _{i-1}\mu _{i-1}\Vert \mu _{i-1} A_{\mu _{i-1}}(y_{i-1})\Vert \longrightarrow 0 \quad \text{ as } k\rightarrow +\infty ,\end{aligned}$$

see Lemma B.2. Using (61) and taking the upper limit as \(k\rightarrow +\infty \) in inequality (63), we then obtain

$$\begin{aligned}\limsup _{k\rightarrow +\infty }\langle \widehat{x}_k-z,q\rangle \le 0.\end{aligned}$$

Since \(\widehat{x}_{k_n}\rightharpoonup x_\infty \) weakly in \({\mathcal {H}}\) as \(n\rightarrow +\infty \), we have \(\langle \widehat{x}_{k_n}-z,q\rangle \rightarrow \langle x_\infty -z,q\rangle \) as \(n\rightarrow +\infty \). From what precedes, we deduce that \(\langle x_\infty -z,q\rangle \le 0\). Since this is true for every \((z,q)\in {\mathrm{gph}}A\), and since the operator A is maximally monotone, we infer that \(0\in A(x_\infty )\). We have proved that \(x_\infty \in {\mathrm{zer}}A\), which shows that condition \(\mathrm{{(ii)}}\) of Proposition 4.1 is satisfied. The proof is complete. \(\square \)

Let us now apply Theorem 4.3 to the case \(\alpha _k=0\) for every \(k\ge 1\). In this case, assumptions \((K_0)\) and \((K_1)\) are trivially satisfied, and moreover \(t_i=t_{i,k}=1\) for every \(i\ge 1\) and \(k\ge i\). We then obtain the following corollary of Theorem 4.3.

Corollary 4.4

Under (H), assume that \({\mathrm{zer}}A\ne \emptyset \). Suppose moreover that \(\limsup _{k\rightarrow +\infty } \rho _k<2\) and \(\liminf _{k\rightarrow +\infty } \rho _k>0\), together with \(\sum _{i=0}^{+\infty }\rho _i\mu _i=+\infty \). Then for any sequence \((x_k)\) generated by (RPA)

figure k

there exists \(x_\infty \in {\mathrm{zer}}A\) such that

$$\begin{aligned} \frac{1}{\sum \nolimits _{i=0}^k \rho _i\mu _i}\sum _{i=0}^k \rho _i\mu _i x_i\rightharpoonup x_\infty \quad \text{ weakly } \text{ in }~ {\mathcal {H}}~ \text{ as }~ k\rightarrow +\infty . \end{aligned}$$
(64)

Proof

From Theorem 4.3, we obtain that

$$\begin{aligned}\widehat{x}_k=\frac{1}{\sum \nolimits _{i=1}^k \rho _{i-1}\mu _{i-1}}\sum _{i=1}^k \rho _{i-1}\mu _{i-1} x_i\rightharpoonup x_\infty \quad \text{ weakly } \text{ in }~ {\mathcal {H}}~ \text{ as }~ k\rightarrow +\infty .\end{aligned}$$

We deduce immediately that

$$\begin{aligned} \frac{1}{\sum \nolimits _{i=0}^k \rho _i\mu _i}\sum _{i=0}^k \rho _i\mu _i x_{i+1}\rightharpoonup x_\infty \quad \text{ weakly } \text{ in }~ {\mathcal {H}}~ \text{ as }~ k\rightarrow +\infty . \end{aligned}$$
(65)

Recall from Corollary 2.8\(\mathrm{{(i)}}\) that \(\sum _{i=1}^{+\infty }\frac{2-\rho _i}{\rho _i}\Vert x_{i+1}-x_i\Vert ^2<+\infty \). Since \(\limsup _{k\rightarrow +\infty } \rho _k<2\), this implies that \(\sum _{i=1}^{+\infty }\Vert x_{i+1}-x_i\Vert ^2<+\infty \), which entails in turn that \(\lim _{k\rightarrow +\infty }\Vert x_{k+1}-x_k\Vert =0\). From the Cesaro property, we infer that

$$\begin{aligned} \frac{1}{\sum \nolimits _{i=0}^k \rho _i\mu _i}\sum _{i=0}^k \rho _i\mu _i (x_{i+1}-x_i)\longrightarrow 0 \quad \text{ strongly } \text{ in }~ {\mathcal {H}}~ \text{ as }~ k\rightarrow +\infty . \end{aligned}$$
(66)

By putting together (65) and (66), we immediately obtain (64). \(\square \)
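Here is a minimal numerical sketch of the weighted ergodic mean (64), assuming constant parameters \(\rho _k\equiv 1.5\) and \(\mu _k\equiv 1\) and the toy skew-symmetric operator \(A(x_1,x_2)=(x_2,-x_1)\) already used after Corollary 3.7; both the iterates and their weighted averages tend to the unique zero of \(A\).

```python
import numpy as np

# Weighted ergodic mean (64) for (RPA) with constant rho_k = 1.5, mu_k = 1, applied
# to the toy skew-symmetric operator A(x1, x2) = (x2, -x1) (zer A = {0}).
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
I2 = np.eye(2)
rho, mu = 1.5, 1.0

x = np.array([1.0, 1.0])                 # x_0
num, den = np.zeros(2), 0.0              # running sums for the weighted average (64)
for k in range(2000):
    num += rho * mu * x                  # accumulate rho_i * mu_i * x_i
    den += rho * mu
    x = (1 - rho) * x + rho * np.linalg.solve(I2 + mu * A, x)   # (RPA) step

print(np.linalg.norm(x), np.linalg.norm(num / den))             # both tend to 0
```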

If we assume moreover that \(\rho _k=1\) for every \(k\ge 1\), we recover a classical result of ergodic convergence for the proximal point algorithm, see the seminal paper of Brézis and Lions [15, Remarque 10].

Corollary 4.5

Under (H), assume that \({\mathrm{zer}}A\ne \emptyset \) and that \(\sum _{i=0}^{+\infty }\mu _i=+\infty \). Then for any sequence \((x_k)\) generated by the algorithm

figure l

there exists \(x_\infty \in {\mathrm{zer}}A\) such that \(\displaystyle {\frac{1}{\sum \nolimits _{i=0}^k \mu _i}\sum \limits _{i=0}^k \mu _i x_i\rightharpoonup x_\infty } \quad \text{ weakly } \text{ in }~ {\mathcal {H}}~ \text{ as }~ k\rightarrow +\infty .\)

5 Conclusion and perspectives

The introduction of inertial features into proximal-based algorithms for solving general monotone inclusions is a long-standing and difficult problem. The (RIPA) algorithm, which addresses these issues, involves three basic parameters, \(\alpha _k\), \(\mu _k\), \(\rho _k\), depending on the iteration index k, which account respectively for the inertia, the proximal step size, and the relaxation. (RIPA) provides a general framework for understanding the subtle joint tuning of these parameters that achieves the weak convergence of the iterates. In particular, we obtained convergence results in the spirit of the Nesterov acceleration method, in the context of maximally monotone operators, which extend the recent result of Attouch–Peypouquet [7]. Several basic splitting algorithms in optimization naturally rely on the maximally monotone framework, such as ADMM, primal–dual methods, and Douglas–Rachford. Our results provide a general way to understand the acceleration of these algorithms via inertia. Several important questions remain to be studied, such as the design of splitting methods in this context and the analysis of their convergence rates. In this respect, it would be important to examine the case \(A= \partial \Psi \), where \(\Psi \) is a closed convex function, so as to recover the convergence rate of the function values known for Nesterov-type methods.