1 Introduction

Many nonlinear problems can be reduced to the problem of finding a zero of a maximal monotone operator in a Hilbert space. A typical example is the problem of finding a minimizer of a proper, lower-semicontinuous, and convex function. It has therefore been an interesting topic of seeking iterative methods that find zeros of maximal monotone operators. In his seminal paper [1], Rockafellar [1] initiated the study of this topic in a general Hilbert space. His algorithm, referred to as the proximal point algorithm (PPA, for short), always converges weakly. Rockafellar [1] then raised the question of whether PPA can have strong convergence in an infinite-dimensional Hilbert space. This question was eventually answered in the negative by Güler [2] in 1991. This however opens another interesting topic that is how to modify PPA so as to guarantee strong convergence in a general Hilbert space.

Solodov and Svaiter [3] were the first to search for strongly convergent modifications of PPA. Their adaptive modification consists of two steps. The first step uses the nth iterate to construct two half-spaces, and the second step orthogonally projects the initial guess onto the intersection of these half-spaces to define the \((n+1)\)th iterate. These additional projections force the sequence thus constructed to converge strongly. Kamimura and Takahashi [4] extended Solodov and Svaiter’s modified PPA [3] to the setting of uniformly convex and uniformly smooth Banach spaces.

Another modification of PPA, referred to as the contraction-proximal point algorithm (CPPA, for short), was introduced by Marino and Xu [5]. Their idea is simple. Instead of using additional projections as done in [3], they formed, at each iteration, a convex combination of the resolvent of the nth iterate with a fixed vector \(u\in H\), called the anchor. These convex combinations are remarkable because they again guarantee strong convergence (see [5] for details).

Whether a modification of PPA converges strongly also depends on the magnitudes of the errors. If the errors are summable, then they are small enough to have no negative impact on the strong convergence of the modified PPA. If the errors fail to be summable, however, the strong convergence of a modified PPA becomes much more subtle: certain criteria must still be imposed to secure strong convergence.

In this paper, we will introduce two modifications of PPA. Our modifications are quite general and include several existing modifications of PPA as special cases. Moreover, we consider both summable and nonsummable errors. Our objective is to prove strong convergence of our modifications of PPA under appropriate assumptions on the parameter sequences that define the algorithms and on the errors, which are not necessarily summable. Since our assumptions are weaker than those in the existing literature (see Sect. 3), we employ a new method of argument to overcome the difficulties that arise from these weaker assumptions on the parameters and the errors.

The rest of our paper is organized as follows. In the next section, we introduce some basic concepts such as monotone operators, the subdifferential of a convex function, projections and nonexpansive mappings. We review some existing modifications of PPA. We also include some fundamental tools in Hilbert spaces, which play an important role in verifying strong convergence of bounded sequences in a Hilbert space. In Sect. 3, we present the main results of this paper; that is, we prove the strong convergence of our two modifications of PPA. A conclusion is included in Sect. 4.

2 Preliminaries

Let H be a real Hilbert space with inner product \(\langle \cdot ,\cdot \rangle \) and norm \(\Vert \cdot \Vert \). If C is a nonempty, closed, and convex subset of H, then we use \(P_{C}\) to denote the (nearest point) projection from H onto C. That is, for \(x\in H\), \(P_{C}x\) is the unique point in C with the property \(\Vert x-P_{C}x\Vert =\min \limits _{y\in C}\Vert x-y\Vert \). It is well known that \(P_{C}x\) is characterized by:

$$\begin{aligned} P_{C}x\in C,\quad \langle x-P_{C}x,z-P_{C}x\rangle \le 0,\quad z\in C. \end{aligned}$$
(1)
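To illustrate the projection and its characterization (1), here is a minimal numerical sketch (our own illustration, not part of the original development) using a closed ball in \(\mathbb {R}^{3}\) as C; the center, radius, and test points are arbitrary choices.

```python
import numpy as np

def project_ball(x, center, radius):
    """Nearest-point projection P_C onto the closed ball C = B(center, radius)."""
    d = x - center
    nd = np.linalg.norm(d)
    return x.copy() if nd <= radius else center + radius * d / nd

rng = np.random.default_rng(0)
center, radius = np.zeros(3), 1.0
x = np.array([2.0, -1.0, 0.5])          # a point outside C
p = project_ball(x, center, radius)

# Characterization (1): <x - P_C x, z - P_C x> <= 0 for every z in C.
for _ in range(5):
    z = project_ball(rng.normal(size=3), center, radius)   # some point of C
    assert np.dot(x - p, z - p) <= 1e-12
```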

Recall that an operator A with domain D(A) and range R(A) in H is said to be monotone iff

$$\begin{aligned} \langle u-v,x-y\rangle \ge 0,\quad x,~y\in D(A),~~u\in Ax,~~v\in Ay. \end{aligned}$$

A monotone operator A is said to be maximal monotone iff its graph

$$\begin{aligned} G(A)=\{(x,y): x\in D(A), y\in Ax\} \end{aligned}$$

is not properly contained in the graph of any other monotone operator. The resolvent of a maximal monotone operator A is defined as

$$\begin{aligned} J_cx:=(I+cA)^{-1}x,\quad x\in H, \end{aligned}$$

where \(c>0\). It is known that \(J_c\) is everywhere defined and single-valued.
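For concreteness, the resolvent can often be evaluated in closed form. The sketch below (our illustration; the operators chosen are assumptions, not taken from the paper) computes \(J_c\) for \(A=\partial |\cdot |\) on the real line, where it reduces to soft-thresholding, and for a linear monotone operator \(A(x)=Mx\) with M symmetric positive semidefinite.

```python
import numpy as np

def resolvent_abs(x, c):
    """J_c for A = subdifferential of |.| on R: soft-thresholding."""
    return np.sign(x) * max(abs(x) - c, 0.0)

def resolvent_linear(x, M, c):
    """J_c for A(x) = M x, M symmetric positive semidefinite: (I + cM)^{-1} x."""
    return np.linalg.solve(np.eye(len(x)) + c * M, x)

print(resolvent_abs(3.0, 1.0))                         # 2.0
M = np.array([[2.0, 0.0], [0.0, 1.0]])
print(resolvent_linear(np.array([3.0, 2.0]), M, 1.0))  # [1. 1.]
```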

Recall also that a mapping \(T: H\rightarrow H\) is said to be

  • Nonexpansive iff \(\Vert Tx-Ty\Vert \le \Vert x-y\Vert \) for \(x, y\in H\);

  • Firmly nonexpansive iff \(\Vert Tx-Ty\Vert ^{2}\le \Vert x-y\Vert ^{2}-\Vert (I-T)x-(I-T)y\Vert ^{2}\) for \(x, y\in H\).

The set of fixed points of T is denoted by Fix(T); that is, \(Fix(T):=\{x\in H: Tx=x\}\).

Finding zeros of maximal monotone operators is an important part of the theory of monotone operators. More precisely, the problem consists in finding \(x\in D(A)\) such that

$$\begin{aligned} 0\in Ax, \end{aligned}$$
(2)

where A is a maximal monotone operator in a Hilbert space H. Various problems, including convex programming and variational inequalities, can be formulated as (2). As a matter of fact, the minimization problem

$$\begin{aligned} \min _{x\in C}\varphi (x), \end{aligned}$$

where C is a closed and convex subset of H and \(\varphi : H\rightarrow ]-\infty ,\infty ]\) is a proper, lower-semicontinuous, and convex function, is equivalent to (2) with \(A=\partial \varphi +N_C\). Here, \(\partial \varphi \) is the subdifferential operator of \(\varphi \) defined by

$$\begin{aligned} \partial \varphi (x):=\{\xi \in H: \varphi (y)\ge \varphi (x)+\langle \xi ,y-x\rangle ,\ y\in H\},\quad x\in H, \end{aligned}$$

and \(N_C\) is the normal cone to C; i.e., \(N_C(x)=\emptyset \) if \(x\not \in C\) and \(N_C(x)=\{z\in H: \langle z,w-x\rangle \le 0,\ w\in C\}\) if \(x\in C\).
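For instance (a standard one-dimensional illustration), taking \(H=\mathbb {R}\), \(\varphi (x)=|x|\), and \(C=[0,\infty [\) gives

$$\begin{aligned} \partial \varphi (0)=[-1,1],\qquad \partial \varphi (x)=\{\mathrm {sign}(x)\}\ (x\ne 0),\qquad N_{C}(0)=\,]-\infty ,0],\qquad N_{C}(x)=\{0\}\ (x>0), \end{aligned}$$

so that \(0\in \partial \varphi (0)+N_{C}(0)\), in accordance with the fact that 0 minimizes \(\varphi \) over C.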

The proximal point algorithm (PPA) for the inclusion (2) was introduced by Rockafellar in his seminal paper [1]. It generates a sequence \(\{x_n\}\) recursively as follows: the initial guess \(x_{0}\) is chosen in H arbitrarily, and the \((n+1)\)th iterate \(x_{n+1}\) is generated via the inclusion:

$$\begin{aligned} x_{n}+e_{n}\in x_{n+1}+c_{n}A(x_{n+1}), \end{aligned}$$

where \(e_{n}\) stands for an error, and \(c_{n}\) is a positive number. Alternatively, \(x_{n+1}\) can be expressed via the resolvent of A as follows:

$$\begin{aligned} x_{n+1}=J_{c_{n}}(x_{n}+e_{n}). \end{aligned}$$
(3)
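The following minimal sketch (an illustration of ours, with a toy linear operator and synthetic summable errors; none of these choices come from the paper) shows how the inexact recursion (3) can be run in practice.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy maximal monotone operator A(x) = M x; its zero set is the null space of M.
M = np.array([[2.0, 0.0], [0.0, 0.0]])

def J(x, c):
    """Resolvent (I + cA)^{-1} applied to x for the linear operator above."""
    return np.linalg.solve(np.eye(2) + c * M, x)

x = np.array([5.0, -3.0])
for n in range(200):
    c_n = 1.0                                   # any choice with c_n >= c > 0
    e_n = rng.normal(size=2) / (n + 1) ** 2     # summable errors, cf. criterion (4) below
    x = J(x + e_n, c_n)                         # inexact PPA step (3)

print(x)   # the iterates approach a zero of A (first coordinate tends to 0)
```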

Rockafellar [1] proved the weak convergence of PPA (3) to a solution (if any) of (2). He also raised the question of whether PPA (3) can always converge strongly in a general Hilbert space. This question was eventually answered by Güler [2] in the negative. Therefore, recent attention has turned to strongly convergent modifications of PPA, which is also the theme of our paper.

In PPA and its modifications, the errors \(\{e_{n}\}\) play a crucial role in the weak or strong convergence. Several criteria for the errors \(\{e_{n}\}\) have therefore been introduced in the literature. Perhaps the most often used criterion is that the errors \(\{e_n\}\) be summable; that is, \(\{e_n\}\) satisfy the condition:

$$\begin{aligned} \Vert e_{n}\Vert \le \varepsilon _{n}\quad \mathrm{with}\quad \sum \limits _{n=0}^{\infty }\varepsilon _{n}<\infty . \end{aligned}$$
(4)

Rockafellar [1] also considered another accuracy criterion for the errors \(\{e_n\}\) to fulfill:

$$\begin{aligned} \Vert e_{n}\Vert \le \eta _{n}\Vert J_{c_{n}}(x_{n}+e_{n})-x_{n}\Vert \quad \mathrm{and}\quad \sum \limits _{n=0}^{\infty }\eta _{n}<\infty . \end{aligned}$$
(5)

Eckstein [6] proposed the alternative criterion:

$$\begin{aligned} \sum \limits _{n=1}^{\infty }\Vert e_{n}\Vert < +\infty , \quad \sum \limits _{n=1}^{\infty } \langle e_{n}, x_{n} \rangle \quad \text {converges}. \end{aligned}$$
(6)

[Note: The second part of (6) is implied by the first part if \(\{x_n\}\) is bounded.]

Han and He [7] weakened Rockafellar’s criterion (5) by considering the criterion:

$$\begin{aligned} \Vert e_{n}\Vert \le \eta _{n}\Vert J_{c_{n}}(x_{n}+e_{n})-x_{n}\Vert \quad \mathrm{and}\quad \sum \limits _{n=0}^{\infty }\eta ^{2}_{n}<\infty . \end{aligned}$$
(7)

The study of strongly convergent modifications of PPA was initiated by Solodov and Svaiter [3]. They use the iterates generated by PPA to construct two half-spaces and require, at each iteration, additional projections onto the intersection of the two half-spaces. [Their modification is however not elaborated on here as it is not the main concern in this paper.]

Xu [8] introduced the following modification of PPA:

$$\begin{aligned} x_{n+1}=\alpha _{n}u+(1-\alpha _{n})J_{c_{n}}x_{n}+e_{n}, \end{aligned}$$
(8)

where \(u\in H\) and \(\{\alpha _{n}\}\subset ]0,1[\). Marino and Xu [5] call it the contraction-proximal point algorithm (CPPA). Various conditions [9–12] were proposed to guarantee the strong convergence of (8) under criterion (4). For instance, Marino and Xu [5] proved the strong convergence of (8) under the conditions:

$$\begin{aligned}&\bullet \ \lim \limits _{n\rightarrow \infty }\alpha _{n}=0,\ \sum _{n=0}^\infty \alpha _{n}=\infty ,\ \sum _{n=0}^\infty \Vert e_{n}\Vert <\infty ,\\&\bullet \ \mathrm{either}\ \sum _{n=0}^\infty |\alpha _{n+1}-\alpha _{n}|<\infty ~~ \mathrm{or}\ \lim \limits _{n\rightarrow \infty }\frac{\alpha _{n}}{\alpha _{n+1}}=1,\\&\bullet \ 0<\underline{c}\le c_{n}\le \overline{c}<\infty ,~~ \sum _{n=0}^\infty |c_{n+1}-c_{n}|<\infty . \end{aligned}$$

Yao and Noor [13] generalized CPPA (8) by including one more term (i.e., the \(x_n\) term) as follows:

$$\begin{aligned} x_{n+1}=\alpha _{n}u+\beta _{n}x_{n}+\lambda _{n}J_{c_{n}}x_{n}+e_{n}, \end{aligned}$$
(9)

where \(\{\alpha _{n}\},~~ \{\beta _{n}\},~~\{\lambda _{n}\}\subset ]0,1[\) are such that \(\alpha _{n}+\beta _{n}+\lambda _{n}=1\) for all n. They proved the strong convergence of (9) when the parameters satisfy certain appropriate conditions (which are not stated explicitly here).

Wang and Cui [14] proved the strong convergence of (9) under these conditions: \(\lim _{n\rightarrow \infty }\alpha _{n}=0\), \(\sum _{n=0}^\infty \alpha _{n}=\infty \), \(\liminf _{n\rightarrow \infty }c_{n}>0\), \(\liminf _{n\rightarrow \infty }\lambda _{n}>0\), and either \(\sum _{n=0}^\infty \Vert e_{n}\Vert <\infty \) or \(\lim \limits _{n\rightarrow \infty }\Vert e_{n}\Vert /\alpha _{n}=0\).

On the other hand, some authors prefer to include the errors \(e_n\) inside the resolvent; that is, the modification of PPA takes the form:

$$\begin{aligned} x_{n+1}=\lambda _{n}u+(1-\lambda _{n})J_{c_{n}}(x_{n}+e_{n}), \end{aligned}$$
(10)

where \(u\in H\) and \(\{\lambda _{n}\}\) is a sequence of positive real numbers. This algorithm, introduced independently by Kamimura–Takahashi [15] and Xu [8], is also known as the contraction-proximal point algorithm (CPPA) [5], since it is essentially the same as (8).

Regarding Algorithm (10), Ceng et al. [16] proved the strong convergence under the assumptions: \(\lim _{n\rightarrow \infty } c_{n}=\infty \), \(\lim _{n\rightarrow \infty }\lambda _{n}=0\), \(\sum _{n=0}^{\infty }\lambda _{n}=\infty \), and (7).

Tian and Wang [17] considered possibly bounded choices of \(\{c_n\}\) and proved the strong convergence of CPPA (10) under the conditions: \(c_{n}\ge c>0\) for all \(n\ge 0\); \(\lim _{n\rightarrow \infty }\lambda _{n}=0\) and \(\sum _{n=0}^{\infty }\lambda _{n}=\infty \); and \(\Vert e_{n}\Vert \le \eta _{n}\Vert J_{c_{n}}(x_{n}+e_{n})-x_{n}\Vert \), where the error sequence \(\{e_n\}\) is such that either \(\sum _{n=0}^{\infty }\eta _{n}^{2}< \infty \) or \(\lim _{n\rightarrow \infty } \eta _{n}^{2}/\lambda _{n}=0\).

In the present paper, we will consider extensions of Algorithms (9) and (10). More precisely, we will focus on the following two modifications of PPA:

$$\begin{aligned} x_{n+1}=\alpha _{n}h(x_{n})+\beta _{n}x_{n}+\lambda _{n}J_{c_{n}}x_{n}+e_{n} \end{aligned}$$
(11)

and

$$\begin{aligned} x_{n+1}=\alpha _{n}u+\beta _{n}x_{n}+\lambda _{n}J_{c_{n}}(x_{n}+e_{n}), \end{aligned}$$
(12)

where \(\{\alpha _{n}\},~~\{\beta _{n}\},~~\{\lambda _{n}\}\subset ]0,1[\) are such that \(\alpha _{n}+\beta _{n}+\lambda _{n}=1\) for all n, and \(h: H\rightarrow H\) is a \(\rho \)-contraction with \(\rho \in [0,1[\).
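Before turning to the analysis, the following sketch (ours; the operator A, the contraction h, the anchor u, and the parameter choices are illustrative assumptions only) shows how sweeps of Algorithms (11) and (12) can be organized.

```python
import numpy as np

M = np.array([[2.0, 0.0], [0.0, 1.0]])             # A(x) = M x, so S = {0}

def J(x, c):                                        # resolvent (I + cA)^{-1}
    return np.linalg.solve(np.eye(2) + c * M, x)

def h(x):                                           # a rho-contraction with rho = 1/2
    return 0.5 * x + np.array([1.0, 0.0])

u = np.array([1.0, 0.0])
x11 = x12 = np.array([4.0, -2.0])
for n in range(2000):
    alpha = (n + 10.0) ** -0.9                      # alpha_n -> 0, sum alpha_n = infinity
    lam = (n + 10.0) ** -0.5                        # alpha_n / lambda_n -> 0
    beta = 1.0 - alpha - lam
    c_n, e_n = 1.0, np.zeros(2)                     # exact steps, to keep the sketch short
    # Algorithm (11): viscosity-type step driven by the contraction h
    x11 = alpha * h(x11) + beta * x11 + lam * J(x11, c_n) + e_n
    # Algorithm (12): fixed anchor u, error placed inside the resolvent
    x12 = alpha * u + beta * x12 + lam * J(x12 + e_n, c_n)

print(x11, x12)    # both drift toward the unique zero (0, 0) of A
```

Here the errors are set to zero only for brevity; the theorems below allow nonzero errors subject to the respective Conditions (iv).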

The purpose of this paper is to prove the strong convergence of Algorithms (11) and (12) under different criteria on the errors \(\{e_n\}\). Since we use a new technique of argument for establishing strong convergence, our sufficient conditions for the strong convergence of Algorithms (11) and (12) are weaker than those of [13, 14]. Moreover, our results also cover those of [17].

We conclude this section with some tools that will be employed in the proofs of the main results of this paper.

Lemma 2.1

Let A be a maximal monotone operator and \(J_c\) be its resolvent with \(c>0\). Then, we have

(i) \(J_{c}: H\rightarrow H\) is single-valued and firmly nonexpansive;

(ii) \(Fix(J_{c})=A^{-1}(0)=\{x\in D(A): 0\in Ax\}\);

(iii) \(\Vert x-J_{c}x\Vert \le 2\Vert x-J_{c'}x\Vert \) for all \(0< c \le c'\) and all \(x\in H\) [5].

Lemma 2.2

([18]) (Demiclosedness principle). Let D be a nonempty, weakly sequentially closed subset of H, let \(T: D\rightarrow H\) be a nonexpansive operator, let \(\{x_{n}\}\) be a sequence in D, and let x and u be points in H. Suppose that \(x_{n}\rightharpoonup x\) and that \(x_{n}-Tx_{n}\rightarrow u\). Then, \(x-Tx=u\). In particular, if \(u=0\), then \(x\in Fix(T)\).

Lemma 2.3

([8]) Let \(\{s_{n}\}, \{c_{n}\}\subset \mathbb {R}^{+}\), \(\{\alpha _{n}\}\subset ]0,1[\) and \(\{b_{n}\}\subset \mathbb {R}\) be sequences such that

$$\begin{aligned} s_{n+1}\le (1-\alpha _{n})s_{n}+b_{n}+c_{n}, \ \text {for all} \ n\ge 0. \end{aligned}$$

If, in addition, \(\sum \limits _{n=0}^{\infty }\alpha _{n}=\infty \), \(\sum \limits _{n=0}^{\infty }c_{n}< \infty \) and \(\limsup \limits _{n\rightarrow \infty } (b_{n}/\alpha _{n})\le 0\), then \(\lim \limits _{n\rightarrow \infty }s_{n}=0\).

Lemma 2.4

Let \(x,y\in H\) and \(t,s \in \mathbb {R}\). Then,

  • \(\Vert x+y\Vert ^{2}\le \Vert x\Vert ^{2}+2\langle y,x+y\rangle \);

  • \(\Vert tx+sy\Vert ^{2}=t(t+s)\Vert x\Vert ^{2}+s(t+s)\Vert y\Vert ^{2}-st\Vert x-y\Vert ^{2}\).
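Both identities follow from expanding inner products; for instance, the second identity can be verified directly:

$$\begin{aligned} \Vert tx+sy\Vert ^{2}&=t^{2}\Vert x\Vert ^{2}+s^{2}\Vert y\Vert ^{2}+2st\langle x,y\rangle \\ &=t(t+s)\Vert x\Vert ^{2}+s(t+s)\Vert y\Vert ^{2}-st\bigl (\Vert x\Vert ^{2}+\Vert y\Vert ^{2}-2\langle x,y\rangle \bigr )\\ &=t(t+s)\Vert x\Vert ^{2}+s(t+s)\Vert y\Vert ^{2}-st\Vert x-y\Vert ^{2}. \end{aligned}$$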

3 Algorithms and Convergence Results

In this section, we propose two modifications of the proximal point algorithm (PPA) for finding zeros of a maximal monotone operator A. To this end, we denote by S the set of zeros of A; namely, \(S=A^{-1}(0)\), and we always assume that S is nonempty. We wish to prove the strong convergence of both algorithms under appropriate conditions on the parameters that define them.

We briefly describe the standard key steps for proving the strong convergence of the sequence \(\{x_n\}\) generated by Algorithms (8) and (9). After the boundedness of \(\{x_n\}\) is proven, the two steps below are key to the proof of the strong convergence of \(\{x_n\}\):

  • Step 1: \(\lim _{n\rightarrow \infty }\Vert J_{c_n}x_n-x_n\Vert =0\);

  • Step 2: \(\limsup _{n\rightarrow \infty }\langle u-z,x_n-z \rangle \le 0\), with \(z=P_Su\).

Step 1 guarantees that all the weak cluster points of \(\{x_n\}\) lie in S, and Step 2 requires finding a weakly convergent subsequence \(\{x_{n_j}\}\) of \(\{x_n\}\) that attains the limit superior; in other words,

$$\begin{aligned} \limsup _{n\rightarrow \infty }\langle u-z,x_n-z \rangle =\lim _{j\rightarrow \infty }\langle u-z,x_{n_j}-z \rangle =\langle u-z,x^*-z \rangle , \end{aligned}$$
(13)

where \(x^*\) is the weak limit of \(\{x_{n_j}\}\).

For our more complicated algorithms, to be discussed in the following two subsections, both Steps 1 and 2 above unfortunately seem out of reach. We can obtain Step 1 only for a subsequence \(\{x_{n(k)}\}\) of \(\{x_n\}\); i.e., \(\lim _{k\rightarrow \infty }\Vert J_{c_{n(k)}}x_{n(k)}-x_{n(k)}\Vert =0\). Although every weak cluster point of the subsequence \(\{x_{n(k)}\}\) surely lies in S, we do not know whether any of these weak cluster points makes (13) valid. The standard argument therefore gets stuck, and an alternative argument must be found. This is a feature of our methodology in this paper.

We will adopt a new technique of argument to overcome the difficulties described above. Our idea is to blend Steps 1 and 2 (rather than carrying them out one after another in the traditional way), because we can only work with subsequences of \(\{x_n\}\), not the full sequence. We then appropriately select a subsequence \(\{x_{n_k}\}\) of \(\{x_n\}\) that converges weakly to some point \(x^*\). Although we do not know whether this point satisfies (13), we can still manage to prove strong convergence of \(\{x_n\}\). In summary, our new technique avoids Step 2 and establishes Step 1 only for a subsequence of \(\{x_n\}\).

3.1 First Algorithm

Our first algorithm is of viscosity type because we introduce a contraction into the algorithm. More precisely, it starts with an arbitrary initial guess \(x_{0}\in H\) and generates \(x_{n+1}\) according to the recursion:

$$\begin{aligned} x_{n+1}=\alpha _{n}h(x_{n})+\beta _{n}x_{n}+\lambda _{n}J_{c_{n}}x_{n}+e_{n}, \end{aligned}$$
(14)

where \(\{\alpha _{n}\},~~\{\beta _{n}\},~~\{\lambda _{n}\}\subset ]0,1[\), \(\alpha _{n}+\beta _{n}+\lambda _{n}=1\) for all n, and \(h: H\rightarrow H\) is a \(\rho \)-contraction for some \(\rho \in [0,1[\); i.e., \(\Vert h(x)-h(y)\Vert \le \rho \Vert x-y\Vert \) for all \(x,y\in H\). Note that h has a unique fixed point.

Theorem 3.1

Suppose the following conditions hold:

(i) \(c_{n}\ge c>0\) for all n and some constant c;

(ii) \(\lim _{n\rightarrow \infty }\alpha _{n}=0\) and \(\sum _{n=0}^\infty \alpha _{n}=\infty \);

(iii) \(\lim _{n\rightarrow \infty }\alpha _{n}/\lambda _{n}=0\);

(iv) either \(\sum _{n=0}^\infty \Vert e_{n}\Vert <\infty \) or \(\lim _{n\rightarrow \infty }\Vert e_{n}\Vert /\alpha _{n}=0\).

Then, the sequence \(\{x_{n}\}\) generated by Algorithm (14) converges strongly to a solution z of (2). This solution is also the unique solution to the variational inequality \(\mathrm {(VI)}\):

$$\begin{aligned} z\in S,~~~\langle (I-h)z, x-z \rangle \ge 0,\quad x\in S. \end{aligned}$$
(15)

Alternatively, z is the unique fixed point of the contraction \(P_{S}h\), that is, \(z=(P_{S}h)z\).

Proof

Consider the error-free counterpart of Algorithm (14):

$$\begin{aligned} v_{n+1}=\alpha _{n}h(v_{n})+\beta _{n}v_{n}+\lambda _{n}J_{c_{n}}v_{n}. \end{aligned}$$

We first prove \(\Vert x_{n}-v_{n}\Vert \rightarrow 0\) as \(n\rightarrow \infty \). Since \(J_{c_{n}}\) is nonexpansive, we have

$$\begin{aligned} \Vert x_{n+1}-v_{n+1}\Vert&\le \alpha _{n}\Vert h(x_{n})-h(v_{n})\Vert +\beta _{n}\Vert x_{n}-v_{n}\Vert +\lambda _{n}\Vert J_{c_{n}}x_{n}-J_{c_{n}}v_{n}\Vert +\Vert e_{n}\Vert \\&\le \alpha _{n}\rho \Vert x_{n}-v_{n}\Vert + (\beta _{n}+\lambda _{n})\Vert x_{n}-v_{n}\Vert +\Vert e_{n}\Vert \\&=(1-\alpha _{n}(1-\rho ))\Vert x_{n}-v_{n}\Vert +\Vert e_{n}\Vert . \end{aligned}$$

By Condition \(\mathrm {(iv)}\), we can apply Lemma 2.3 to the last inequality and obtain \(\Vert x_{n}-v_{n}\Vert \rightarrow 0\).

It remains to prove \(v_{n}\rightarrow z\) in norm, where z is the unique fixed point of the contraction \(P_{S}h\). To see this, we proceed as follows. Using again the fact that the resolvent \(J_c\) is nonexpansive, we get

$$\begin{aligned} \Vert v_{n+1}-z\Vert&\le \alpha _{n}\Vert h(v_{n})-z\Vert +\beta _{n}\Vert v_{n}-z\Vert +\lambda _{n}\Vert J_{c_{n}}v_{n}-z\Vert \\&\le \alpha _{n}\rho \Vert v_{n}-z\Vert +\alpha _{n}\Vert h(z)-z\Vert +(1-\alpha _{n})\Vert v_{n}-z\Vert \\&=(1-\alpha _{n}(1-\rho ))\Vert v_{n}-z\Vert +\alpha _{n}(1-\rho )(\Vert h(z)-z\Vert /(1-\rho ))\\&\le \max \{\Vert v_{n}-z\Vert ,\Vert h(z)-z\Vert /(1-\rho )\}. \end{aligned}$$

By induction, we get \(\Vert v_{n}-z\Vert \le \max \{\Vert v_{0}-z\Vert ,\Vert h(z)-z\Vert /(1-\rho )\}\) for all \(n\ge 0\). Hence, \(\{v_{n}\}\) is bounded.

By the fact that the resolvent \(J_c\) is firmly nonexpansive and Lemma 2.4, we derive

$$\begin{aligned}&\Vert \beta _{n}(v_{n}-z)+\lambda _{n}(J_{c_{n}}v_{n}-z)\Vert ^{2}\nonumber \\&\quad =\beta _{n}(\beta _{n}+\lambda _{n})\Vert v_{n}-z\Vert ^{2}+\lambda _{n}(\beta _{n}+\lambda _{n})\Vert J_{c_{n}}v_{n}-z\Vert ^{2} -\beta _{n}\lambda _{n}\Vert J_{c_{n}}v_{n}-v_{n}\Vert ^{2}\nonumber \\&\quad \le \beta _{n}(\beta _{n}+\lambda _{n})\Vert v_{n}-z\Vert ^{2}+\lambda _{n}(\beta _{n} +\lambda _{n})\Vert v_{n}-z\Vert ^{2} -\lambda _{n}(\beta _{n}+\lambda _{n})\Vert J_{c_{n}}v_{n}-v_{n}\Vert ^{2}\nonumber \\&\quad \quad -\beta _{n}\lambda _{n}\Vert J_{c_{n}}v_{n}-v_{n}\Vert ^{2}\nonumber \\&\quad =(\beta _{n}+\lambda _{n})^{2}\Vert v_{n}-z\Vert ^{2}-(\lambda _{n}^{2}+2\beta _{n}\lambda _{n}) \Vert v_{n}-J_{c_{n}}v_{n}\Vert ^{2}. \end{aligned}$$
(16)

We next further derive [using (16)]

$$\begin{aligned}&\Vert v_{n+1}-z\Vert ^{2} =\Vert \alpha _{n}(h(v_{n})-z)+\beta _{n}(v_{n}-z)+\lambda _{n}(J_{c_{n}}v_{n}-z)\Vert ^{2} \nonumber \\&\quad =\Vert \alpha _{n}(h(v_{n})-h(z)+h(z)-z)+\beta _{n}(v_{n}-z)+\lambda _{n}(J_{c_{n}}v_{n}-z)\Vert ^{2}\nonumber \\&\quad \le \Vert \alpha _{n}(h(v_{n})-h(z))+\beta _{n}(v_{n}-z)+\lambda _{n}(J_{c_{n}}v_{n}-z)\Vert ^{2}\nonumber \\&\quad \quad +2\alpha _{n} \langle h(z)-z,v_{n+1}-z \rangle \nonumber \\&\quad \le \alpha _{n}\Vert h(v_{n})-h(z)\Vert ^{2}+(1-\alpha _{n})\Vert \frac{\beta _{n}}{\beta _{n}+\lambda _{n}}(v_{n}-z) +\frac{\lambda _{n}}{\beta _{n}+\lambda _{n}}(J_{c_{n}}v_{n}-z)\Vert ^{2}\nonumber \\&\quad \quad +2\alpha _{n} \langle h(z)-z,v_{n+1}-z \rangle \nonumber \\&\quad \le \alpha _{n}\rho ^{2}\Vert v_{n}-z\Vert ^{2}+\frac{1}{1-\alpha _{n}}\Vert \beta _{n}(v_{n}-z)+\lambda _{n}(J_{c_{n}}v_{n}-z)\Vert ^{2}\nonumber \\&\quad \quad +2\alpha _{n} \langle h(z)-z,v_{n+1}-z \rangle \nonumber \\&\quad \le (\alpha _{n}\rho ^{2}+\beta _{n}+\lambda _{n})\Vert v_{n}-z\Vert ^{2}-\frac{\lambda _{n}^{2}+2\beta _{n}\lambda _{n}}{1-\alpha _{n}}\Vert J_{c_{n}}v_{n}-v_{n}\Vert ^{2} \nonumber \\&\quad \quad +2\alpha _{n} \langle h(z)-z,v_{n+1}-z \rangle \nonumber \\&\quad =(1-\alpha _{n}(1-\rho ^{2}))\Vert v_{n}-z\Vert ^{2}-\frac{\lambda _{n}(2-\lambda _{n}-2\alpha _{n})}{1-\alpha _{n}}\Vert J_{c_{n}}v_{n}-v_{n}\Vert ^{2} \nonumber \\&\quad \quad +2\alpha _{n} \langle h(z)-z,v_{n+1}-z \rangle \nonumber \\&\quad =(1-\tilde{\alpha }_n)\Vert v_{n}-z\Vert ^{2}+\tilde{\alpha }_n\gamma _{n}, \end{aligned}$$
(17)

where \(\tilde{\alpha }_n=\alpha _{n}(1-\rho ^{2})\) and

$$\begin{aligned} \gamma _{n}:=\frac{2}{1-\rho ^{2}}\langle h(z)-z,v_{n+1}-z\rangle - \frac{\lambda _{n}(2-\lambda _{n}-2\alpha _{n})}{\alpha _{n}(1-\alpha _{n})(1-\rho ^{2})} \Vert J_{c_{n}}v_{n}-v_{n}\Vert ^{2}. \end{aligned}$$

Since \(\lambda _n\in ]0,1[\) and \(\alpha _n\rightarrow 0\), we may assume \(2-\lambda _{n}-2\alpha _{n}>0\) for all n.

Now since \(\{v_{n}\}\) is bounded, \(\{\gamma _{n}\}\) is bounded from above. In fact, we have

$$\begin{aligned} \sup _{n\ge 0}\, \gamma _n\, \le \frac{2}{1-\rho ^{2}}\Vert h(z)-z\Vert \left( \sup _{n\ge 0}\Vert v_{n}\Vert +\Vert z\Vert \right) <\infty . \end{aligned}$$

Furthermore, we can take a subsequence \(\{n_k\}\) such that

$$\begin{aligned} \limsup \limits _{n\rightarrow \infty }\gamma _{n} =\lim \limits _{k\rightarrow \infty }\gamma _{n_k}&=\lim \limits _{k\rightarrow \infty }\bigg (\frac{2}{1-\rho ^{2}}\langle h(z)-z, v_{n_{k}+1}-z\rangle \nonumber \\&\quad -\frac{\lambda _{n_{k}}(2-\lambda _{n_{k}}-2\alpha _{n_{k}})}{\alpha _{n_{k}}(1-\alpha _{n_{k}})(1-\rho ^{2})} \Vert J_{c_{n_{k}}}v_{n_{k}}-v_{n_{k}}\Vert ^{2}\bigg ). \end{aligned}$$
(18)

As \(\{v_n\}\) is bounded, \(\{\langle h(z)-z, v_{n_{k}+1}-z\rangle \}\) is a bounded sequence of real numbers. Thus, without any loss of generality, we may assume (by selecting a further subsequence if necessary) that the following limit exists:

$$\begin{aligned} \lim _{k\rightarrow \infty }\langle h(z)-z, v_{n_{k}+1}-z\rangle . \end{aligned}$$
(19)

Consequently, it turns out from (18) that the following limit exists as well:

$$\begin{aligned} \lim _{k\rightarrow \infty } \frac{\lambda _{n_{k}}(2-\lambda _{n_{k}}-2\alpha _{n_{k}})}{\alpha _{n_{k}}(1-\alpha _{n_{k}})(1-\rho ^{2})} \Vert J_{c_{n_{k}}}v_{n_{k}}-v_{n_{k}}\Vert ^{2}. \end{aligned}$$
(20)

Since \(\alpha _n\rightarrow 0\), without any loss of generality, we may assume that \((1-2\alpha _n)/(1-\alpha _n)>1/2\) for all n. Thus, noting \(1-\lambda _n>0\), we obtain

$$\begin{aligned} \frac{\lambda _{n}(2-\lambda _{n}-2\alpha _{n})}{\alpha _{n}(1-\alpha _{n})(1-\rho ^{2})} \ge \frac{\lambda _n}{\alpha _n}\cdot \frac{1}{(1-\rho ^{2})}\frac{1-2\alpha _{n}}{1-\alpha _{n}} \ge \frac{1}{2(1-\rho ^{2})}\cdot \frac{\lambda _n}{\alpha _n} \end{aligned}$$

for all n. This together with (20) implies that the subsequence

$$\begin{aligned} \left\{ \frac{\lambda _{n_{k}}}{\alpha _{n_{k}}}\cdot \Vert J_{c_{n_{k}}}v_{n_{k}}-v_{n_{k}}\Vert ^{2}\right\} _{k=1}^\infty \end{aligned}$$

is a bounded sequence of real numbers. Consequently, by Condition (iii), we get

$$\begin{aligned} \lim _{k\rightarrow \infty }\Vert J_{c_{n_k}}v_{n_{k}}-v_{n_{k}}\Vert ^2= \lim _{k\rightarrow \infty }\frac{\alpha _{n_{k}}}{\lambda _{n_{k}}}\cdot \left( \frac{\lambda _{n_{k}}}{\alpha _{n_{k}}}\cdot \Vert J_{c_{n_{k}}}v_{n_{k}}-v_{n_{k}}\Vert ^{2}\right) =0. \end{aligned}$$
(21)

This together with Lemma 2.1(\(\mathrm {iii}\)) immediately implies that

$$\begin{aligned} \lim _{k\rightarrow \infty }\Vert J_{c}v_{n_{k}}-v_{n_{k}}\Vert =0. \end{aligned}$$
(22)

[Here \(c>0\) is a constant satisfying Condition (i).]

Lemma 2.2 then guarantees that all weak cluster points of \(\{v_{n_{k}}\}\) belong to S. By the definition of \(v_{n_{k}+1}\), we deduce that

$$\begin{aligned} \Vert v_{n_{k}+1}-v_{n_{k}}\Vert&=\Vert \alpha _{n_{k}}h(v_{n_{k}}) +\lambda _{n_{k}}J_{c_{n_{k}}}v_{n_{k}}-(\alpha _{n_{k}}+\lambda _{n_{k}})v_{n_{k}}\Vert \nonumber \\&\le \alpha _{n_{k}}\Vert h(v_{n_{k}})-v_{n_{k}}\Vert +\lambda _{n_{k}}\Vert J_{c_{n_{k}}}v_{n_{k}}-v_{n_{k}}\Vert \rightarrow 0. \end{aligned}$$
(23)

This implies that all weak cluster points of \(\{v_{n_{k}+1}\}\) also belong to S. Without any loss of generality, we assume that \(\{v_{n_{k}+1}\}\) converges weakly to \(\bar{v}\); hence, \(\bar{v}\in S\). Now by (18) and (19), we infer that

$$\begin{aligned} \limsup \limits _{n\rightarrow \infty }\gamma _{n}&\le \lim \limits _{k\rightarrow \infty } \frac{2}{1-\rho ^{2}}\langle h(z)-z, v_{n_{k}+1}-z\rangle =\frac{2}{1-\rho ^{2}}\langle h(z)-z, \bar{v}-z\rangle \le 0 \end{aligned}$$
(24)

due to the fact that \(z=P_S(h(z))\) and the characterization (1) of projections.

Finally, from (24), we can apply Lemma 2.3 to (17) to conclude that \(\Vert v_{n}-z\Vert \rightarrow 0\) as \(n\rightarrow \infty \). This ends the proof. \(\square \)

Consider next a special case of Algorithm (14): Given \(u\in H\) and an initial guess \(x_{0}\in H\),

$$\begin{aligned} x_{n+1}=\alpha _{n}u+\beta _{n}x_{n}+\lambda _{n}J_{c_{n}}x_{n}+e_{n}, \end{aligned}$$
(25)

where \(\{\alpha _{n}\},~~\{\beta _{n}\},~~\{\lambda _{n}\}\subset ]0,1[\) are such that \(\alpha _{n}+\beta _{n}+\lambda _{n}=1\) for all n.

Corollary 3.1

Suppose the following conditions hold:

(i) \(c_{n}\ge c>0\) for all n;

(ii) \(\lim \limits _{n\rightarrow \infty }\alpha _{n}=0\) and \(\sum _{n=0}^\infty \alpha _{n}=\infty \);

(iii) \(\lim \limits _{n\rightarrow \infty }\frac{\alpha _{n}}{\lambda _{n}}=0\);

(iv) either \(\sum _{n=0}^\infty \Vert e_{n}\Vert <\infty \) or \(\lim _{n\rightarrow \infty }\Vert e_{n}\Vert /\alpha _{n}=0\).

Then, the sequence \(\{x_{n}\}\) generated by Algorithm (25) converges strongly to the solution of (2) that is closest to u, namely \(P_{S}u\).

Remark 3.1

An example of \(\{\alpha _{n}\}\) and \(\{\lambda _{n}\}\) satisfying the conditions of Theorem 3.1 and Corollary 3.1 is given by \(\alpha _{n}=(n+1)^{-q_{1}}\) and \(\lambda _{n}=(n+1)^{-q_{2}},\) where \(q_{1}\in ]0,1],~~q_{2}\in ]0,q_{1}[\). Since Corollary 3.1 covers the case \(\liminf _{n\rightarrow \infty }\lambda _{n}=0\), it improves the results of [13, 14].
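Indeed, with these choices,

$$\begin{aligned} \frac{\alpha _{n}}{\lambda _{n}}=(n+1)^{q_{2}-q_{1}}\rightarrow 0 \quad \text {(since }q_{2}<q_{1}\text {)},\qquad \sum _{n=0}^{\infty }\alpha _{n}=\sum _{n=0}^{\infty }(n+1)^{-q_{1}}=\infty \quad \text {(since }q_{1}\le 1\text {)}. \end{aligned}$$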

3.2 Second Algorithm

Next we consider the following algorithm:

$$\begin{aligned} x_{n+1}=\alpha _{n}u+\beta _{n}x_{n}+\lambda _{n}J_{c_{n}}(x_{n}+e_{n}), \end{aligned}$$
(26)

where \(\{\alpha _{n}\},~~ \{\beta _{n}\},~~\{\lambda _{n}\}\subset ]0,1[\) are such that \(\alpha _{n}+\beta _{n}+\lambda _{n}=1\) for all n.

The following two lemmas, which are easy to prove, will be useful in our subsequent argument.

Lemma 3.1

Let \(\mu >0\) and let \(\{s_{n}\}\) be a sequence of nonnegative real numbers satisfying the condition:

$$\begin{aligned} s_{n+1}\le \alpha _{n}\mu +[\beta _{n}+\lambda _{n}(1+\varepsilon _{n})]s_{n}, \end{aligned}$$
(27)

where \(\{\alpha _{n}\}, \{\beta _{n}\}, \{\lambda _{n}\}\subset ]0,1[\) are such that \(\alpha _{n}+\beta _{n}+\lambda _{n}=1\) for all n, and \(\{\varepsilon _{n}\}\in \ell _1\) is a sequence of nonnegative numbers. Then \(\{s_{n}\}\) is bounded. More precisely, we have, for all n,

$$\begin{aligned} s_{n}\le \max \{\mu ,s_{0}\}\exp \left( \sum \limits _{k=0}^{\infty }\varepsilon _{k}\right) <\infty . \end{aligned}$$
(28)
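One way to see (28) is to note that \(\lambda _{n}\le \beta _{n}+\lambda _{n}=1-\alpha _{n}\), so that

$$\begin{aligned} s_{n+1}\le \alpha _{n}\mu +(1-\alpha _{n})(1+\varepsilon _{n})s_{n}\le (1+\varepsilon _{n})\max \{\mu ,s_{n}\}, \end{aligned}$$

and an induction yields \(s_{n}\le \max \{\mu ,s_{0}\}\prod _{k=0}^{n-1}(1+\varepsilon _{k})\le \max \{\mu ,s_{0}\}\exp \left( \sum \limits _{k=0}^{\infty }\varepsilon _{k}\right) \).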

The assumption that \(\{\varepsilon _{n}\}\in \ell _1\) in Lemma 3.1 may be relaxed as follows.

Lemma 3.2

Let \(\mu >0\) and let \(\{s_{n}\}\) be a sequence of nonnegative numbers satisfying (27), where \(\{\alpha _{n}\}, \{\beta _{n}\}, \{\lambda _{n}\}\subset ]0,1[\) are such that \(\alpha _{n}+\beta _{n}+\lambda _{n}=1\) for all n, and \(\{\varepsilon _{n}\}\subset [0,\infty [\). If \(2\varepsilon _{n}(1-\alpha _{n})\le \alpha _{n}\), then \(\{s_{n}\}\) is bounded. In fact, we have \(s_{n}\le \max \{2\mu , s_{0}\}<\infty \) for all \(n\ge 0\).

Lemma 3.3

([7, 17]) Let \(\eta \in ]0,1/2[\), \(x,e\in H\) and \(\tilde{x}:=J_{c}(x+e)\) with \(c>0\), and assume that \(S=A^{-1}(0)=Fix(J_c)\). If \(\Vert e\Vert \le \eta \Vert x-\tilde{x}\Vert \), then

$$\begin{aligned} \Vert \tilde{x}-z\Vert ^{2}\le (1+(2\eta )^{2})\Vert x-z\Vert ^{2}-\frac{1}{2}\Vert \tilde{x}-x\Vert ^{2},\quad z\in S. \end{aligned}$$
(29)

Theorem 3.2

Assume the conditions:

(i) \(c_{n}\ge c>0\) for all n;

(ii) \(\lim _{n\rightarrow \infty }\alpha _{n}=0\) and \(\sum _{n=0}^{\infty }\alpha _{n}=\infty \);

(iii) \(\lim _{n\rightarrow \infty }\alpha _{n}/\lambda _{n}=0\);

(iv) \(\Vert e_{n}\Vert \le \eta _{n}\Vert J_{c_{n}}(x_{n}+e_{n})-x_{n}\Vert \) and \(\sum _{n=0}^{\infty }\eta ^{2}_{n}<\infty \).

Then, the sequence \(\{x_n\}\) generated by Algorithm (26) converges strongly to \(P_{S}(u)\).

Proof

Let \(z:=P_{S}(u)\). Without any loss of generality, we assume that \(\eta _{n}\in \,]0,1/2[\). Then, by Lemma 3.3, we have

$$\begin{aligned} \Vert J_{c_{n}}(x_{n}+e_{n})-z\Vert ^{2}\le (1+\varepsilon _{n})\Vert x_{n}-z\Vert ^{2}-\frac{1}{2}\Vert J_{c_{n}}(x_{n}+e_{n})-x_{n}\Vert ^{2}, \end{aligned}$$
(30)

where \(\varepsilon _{n}:=(2\eta _{n})^{2}\) satisfying \(\sum _{n=0}^{\infty }\varepsilon _{n}<\infty \). By (30), we deduce

$$\begin{aligned} \Vert x_{n+1}-z\Vert ^{2}&=\Vert \alpha _{n}u+\beta _{n}x_{n}+\lambda _{n}J_{c_{n}}(x_{n}+e_{n})-z\Vert ^{2}\nonumber \\&\le \alpha _{n}\Vert u-z\Vert ^{2}+\beta _{n}\Vert x_{n}-z\Vert ^{2}+\lambda _{n}\Vert J_{c_{n}}(x_{n}+e_{n})-z\Vert ^{2}\nonumber \\&\le \alpha _{n}\Vert u-z\Vert ^{2}+\beta _{n}\Vert x_{n}-z\Vert ^{2} +\lambda _{n} [(1+\varepsilon _{n})\Vert x_{n}-z\Vert ^{2}\nonumber \\&\quad -\frac{1}{2}\Vert J_{c_{n}}(x_{n}+e_{n})-x_{n}\Vert ^{2}]\nonumber \\&\le \alpha _{n}\Vert u-z\Vert ^{2}+[\beta _{n}+\lambda _{n}(1+\varepsilon _{n})]\Vert x_{n}-z\Vert ^{2}. \end{aligned}$$
(31)

Applying Lemma 3.1 to the last inequality, we conclude that \(\{x_{n}\}\) is bounded. It follows from Lemma 2.4 that

$$\begin{aligned} \Vert x_{n+1}-z\Vert ^{2}&=\Vert \alpha _{n}(u-z)+\beta _{n}(x_{n}-z)+\lambda _{n}(J_{c_{n}}(x_{n}+e_{n})-z)\Vert ^{2}\nonumber \\&\le \Vert \beta _{n}(x_{n}-z)+\lambda _{n}(J_{c_{n}}(x_{n}+e_{n})-z)\Vert ^{2} +2\alpha _{n}\langle u-z,x_{n+1}-z\rangle \nonumber \\&=\beta _{n}(\beta _{n}+\lambda _{n})\Vert x_{n}-z\Vert ^{2}+\lambda _{n}(\beta _{n}+\lambda _{n}) \Vert J_{c_{n}}(x_{n}+e_{n})-z\Vert ^{2}\nonumber \\&\quad -\beta _{n}\lambda _{n}\Vert J_{c_{n}}(x_{n}+e_{n})-x_{n}\Vert ^{2}+2\alpha _{n} \langle u-z,x_{n+1}-z\rangle \nonumber \\&\le \beta _{n}(\beta _{n}+\lambda _{n})\Vert x_{n}-z\Vert ^{2} +\lambda _{n}(\beta _{n}+\lambda _{n}) [(1+\varepsilon _{n})\Vert x_{n}-z\Vert ^{2}\nonumber \\&\quad -\frac{1}{2}\Vert J_{c_{n}}(x_{n}+e_{n})-x_{n}\Vert ^{2}] -\beta _{n}\lambda _{n}\Vert J_{c_{n}}(x_{n}+e_{n})-x_{n}\Vert ^{2}\nonumber \\&\quad +2\alpha _{n}\langle u-z,x_{n+1}-z\rangle \nonumber \\&=(1-\alpha _{n})^{2}\Vert x_{n}-z\Vert ^{2}-\frac{1}{2}(3\beta _{n}\lambda _{n}+\lambda _{n}^{2}) \Vert J_{c_{n}}(x_{n}+e_{n})-x_{n}\Vert ^{2} \nonumber \\&\quad +2\alpha _{n} \langle u-z,x_{n+1}-z\rangle +\varepsilon _{n}\lambda _{n}(\beta _{n} +\lambda _{n})\Vert x_{n}-z\Vert ^{2}\nonumber \\&\le (1-\alpha _{n})\Vert x_{n}-z\Vert ^{2}-\frac{1}{2}(3\beta _{n}\lambda _{n}+\lambda _{n}^{2}) \Vert J_{c_{n}}(x_{n}+e_{n})-x_{n}\Vert ^{2} \nonumber \\&\quad +2\alpha _{n} \langle u-z,x_{n+1}-z\rangle +\varepsilon _{n}M, \end{aligned}$$
(32)

where \(M>0\) is such that \(M\ge \sup \{\Vert x_{n}-z\Vert ^{2}: n\ge 0\}\). Put

$$\begin{aligned} t_{n}:=\sum \limits _{k=n}^{\infty }\varepsilon _{k}\quad \mathrm{and}\quad p_{n}:=\Vert x_{n}-z\Vert ^{2}+Mt_{n}. \end{aligned}$$

It is obvious that \(t_{n}\rightarrow 0\) since \(\sum _{n=0}^{\infty }\varepsilon _{n}<\infty \). Then, it is not hard to see that (32) reduces to the inequality:

$$\begin{aligned} p_{n+1}&\le (1-\alpha _{n})p_{n}-\frac{1}{2}(3\beta _{n}\lambda _{n}+\lambda _{n}^{2}) \Vert J_{c_{n}}(x_{n}+e_{n})-x_{n}\Vert ^{2}\nonumber \\&\quad +2\alpha _{n} \langle u-z,x_{n+1}-z\rangle +\alpha _{n}t_{n}M. \end{aligned}$$
(33)

If we set

$$\begin{aligned} \gamma _{n}:=2\langle u-z,x_{n+1}-z\rangle +Mt_{n}-\frac{1}{2\alpha _{n}}(3\beta _{n}\lambda _{n}+\lambda _{n}^{2}) \Vert J_{c_{n}}(x_{n}+e_{n})-x_{n}\Vert ^{2}, \end{aligned}$$
(34)

then (33) can further be rewritten as

$$\begin{aligned} p_{n+1}&\le (1-\alpha _{n})p_{n}+\alpha _{n}\gamma _{n}. \end{aligned}$$
(35)

As \(\{x_n\}\) is bounded and \(\{t_n\}\) is null, it is easy to see that \(\{\gamma _n\}\) is bounded above; thus, \(\limsup _{n\rightarrow \infty }\gamma _n\) exists. Take a subsequence \(\{n_k\}\) such that (using \(t_n\rightarrow 0\) and \(\Vert e_n\Vert \rightarrow 0\))

$$\begin{aligned} \limsup _{n\rightarrow \infty }\gamma _n=\lim _{k\rightarrow \infty }\gamma _{n_k}&=\lim _{k\rightarrow \infty }\bigg (2\langle u-z,x_{n_k+1}-z\rangle \nonumber \\&\quad - \frac{\lambda _{n_k}(3\beta _{n_k}+\lambda _{n_k})}{2\alpha _{n_k}} \Vert J_{c_{n_k}}(x_{n_k})-x_{n_k}\Vert ^{2}\bigg ). \end{aligned}$$
(36)

Since \(\{x_n\}\) is bounded, we may assume, without any loss of generality, that the following limit exists:

$$\begin{aligned} \lim _{k\rightarrow \infty }\langle u-z,x_{n_k+1}-z\rangle . \end{aligned}$$
(37)

It then turns out from (36) that the following limit exists:

$$\begin{aligned} \lim _{k\rightarrow \infty }\frac{1}{2}(3\beta _{n_k}+\lambda _{n_k}) \cdot \frac{\lambda _{n_k}}{\alpha _{n_k}}\Vert J_{c_{n_k}}(x_{n_k})-x_{n_k}\Vert ^{2}. \end{aligned}$$
(38)

Now observing \(3\beta _{n}+\lambda _{n}=3(1-\alpha _n-\lambda _n)+\lambda _n =2(1-\lambda _n)+1-3\alpha _n\ge 1-3\alpha _n\rightarrow 1\) as \(n\rightarrow \infty \), we may assume that \(3\beta _{n}+\lambda _{n}>\frac{1}{2}\) for all n. We then deduce from (38) that the sequence

$$\begin{aligned} \left\{ \frac{\lambda _{n_k}}{\alpha _{n_k}}\Vert J_{c_{n_k}}(x_{n_k})-x_{n_k}\Vert ^{2}\right\} _{k=1}^\infty \end{aligned}$$
(39)

is bounded. The assumption (iii) then ensures that

$$\begin{aligned} \lim _{k\rightarrow \infty }\Vert J_{c_{n_k}}(x_{n_k})-x_{n_k}\Vert ^{2} =\lim _{k\rightarrow \infty } \frac{\alpha _{n_k}}{\lambda _{n_k}}\cdot \left( \frac{\lambda _{n_k}}{\alpha _{n_k}}\Vert J_{c_{n_k}}(x_{n_k})-x_{n_k}\Vert ^{2}\right) =0. \end{aligned}$$
(40)

Apply Lemma 2.1 \((\mathrm {iii})\) to get

$$\begin{aligned} \lim _{k\rightarrow \infty }\Vert J_{c}(x_{n_k})-x_{n_k}\Vert ^{2}=0. \end{aligned}$$
(41)

We also observe that

$$\begin{aligned} \Vert x_{n_{k}+1}-x_{n_{k}}\Vert&=\Vert \alpha _{n_{k}}u+\beta _{n_{k}}x_{n_{k}}+\lambda _{n_{k}}J_{c_{n_{k}}}(x_{n_{k}}+e_{n_{k}})-x_{n_{k}}\Vert \\&\le \alpha _{n_{k}}\Vert u-x_{n_{k}}\Vert +\lambda _{n_{k}}\Vert J_{c_{n_{k}}}(x_{n_{k}}+e_{n_{k}})-x_{n_{k}}\Vert \\&\le \alpha _{n_{k}}\Vert u-x_{n_{k}}\Vert +\lambda _{n_{k}}(\Vert J_{c_{n_{k}}}(x_{n_{k}}+e_{n_{k}})-J_{c_{n_{k}}}(x_{n_{k}})\Vert \\&\quad + \Vert J_{c_{n_{k}}}(x_{n_{k}})-x_{n_{k}}\Vert )\\&\le \alpha _{n_{k}}\Vert u-x_{n_{k}}\Vert +\lambda _{n_{k}}\Vert J_{c_{n_{k}}}(x_{n_{k}})-x_{n_{k}}\Vert +\Vert e_{n_{k}}\Vert \rightarrow 0. \end{aligned}$$

This implies that \(\{x_{n_{k}+1}\}\) and \(\{x_{n_{k}}\}\) have the same weak limit points. Now take a subsequence of \(\{x_{n_k}\}\), which is still denoted by \(\{x_{n_k}\}\), weakly converging to a point \(x^*\). By Lemma 2.2 and (41), we have \(x^*\in Fix(J_c)=S\). It then follows from (36) and (37) that

$$\begin{aligned} \limsup _{n\rightarrow \infty }\gamma _n \le \lim _{k\rightarrow \infty }2\langle u-z,x_{n_k+1}-z\rangle =2\langle u-z,x^*-z\rangle \le 0 \end{aligned}$$
(42)

due to the fact that \(z=P_Su\) and (1).

We are now guaranteed by (42) that Lemma 2.3 is applicable to (35) and conclude that \(\lim \limits _{n\rightarrow \infty }p_{n}=0\). Therefore, \(\Vert x_{n}-z\Vert \rightarrow 0\) as required. \(\square \)

Theorem 3.3

Assume that Conditions (i)–(iv) of Theorem 3.2 hold with the condition \(\sum _{n=0}^{\infty }\eta ^{2}_{n}<\infty \) in (iv) replaced by the condition \(\lim _{n\rightarrow \infty }\eta _{n}^{2}/\alpha _{n}=0\). Then, the sequence \(\{x_n\}\) generated by Algorithm (26) converges strongly to \(P_{S}(u)\).

Proof

We shall follow the main lines of the proof of Theorem 3.2 and hence omit repeated details, focusing only on the differences.

Let again \(z:=P_{S}(u)\) and, due to the condition \(\eta _{n}^{2}/\alpha _{n}\rightarrow 0\), we may assume that \(\eta _{n}\in ]0,1/2[\) and \(2\varepsilon _{n}(1-\alpha _{n})\le \alpha _{n}\), where \(\varepsilon _{n}:=(2\eta _{n})^{2}\) satisfies \(\varepsilon _{n}/\alpha _{n}\rightarrow 0\).

Recall that we still have (31); that is,

$$\begin{aligned} \Vert x_{n+1}-z\Vert ^{2}&\le \alpha _{n}\Vert u-z\Vert ^{2}+[\beta _{n}+\lambda _{n}(1+\varepsilon _{n})]\Vert x_{n}-z\Vert ^{2}. \end{aligned}$$

The boundedness of \(\{x_{n}\}\) is now guaranteed by Lemma 3.2. We can rewrite (32) as

$$\begin{aligned} \Vert x_{n+1}-z\Vert ^{2}\le (1-\alpha _{n})\Vert x_{n}-z\Vert ^{2}+\alpha _{n}\tilde{\gamma }_{n}, \end{aligned}$$
(43)

where

$$\begin{aligned} \tilde{\gamma }_{n}:=2\langle u-z,x_{n+1}-z\rangle +\frac{\varepsilon _{n}}{\alpha _{n}}M -\frac{\lambda _{n}(3-3\alpha _{n}-2\lambda _{n})}{2\alpha _{n}} \Vert J_{c_{n}}(x_{n}+e_{n})-x_{n}\Vert ^{2}.\quad \end{aligned}$$
(44)

Now, the coefficient \(\varepsilon _{n}/\alpha _{n}\) in (44), which tends to zero as \(n\rightarrow \infty \), plays the same role as \(t_n\) in (34). We can therefore repeat the argument for proving (42) to get \(\limsup _{n\rightarrow \infty }\tilde{\gamma }_n\le 0.\) It turns out that we can apply Lemma 2.3 to (43) to get \(\Vert x_n-z\Vert \rightarrow 0\). This completes the proof. \(\square \)

A special case of Algorithm (26) is the algorithm below [17]:

$$\begin{aligned} x_{n+1}=\alpha _{n}u+(1-\alpha _{n})J_{c_{n}}(x_{n}+e_{n}), \end{aligned}$$
(45)

where \(\{\alpha _{n}\}\subset ]0,1[\). Therefore, the results of [17] are direct consequences of Theorems 3.2 and 3.3.

Corollary 3.2

Assume the following conditions hold:

  • \(c_{n}\ge c>0\) for all n, \(\lim \limits _{n\rightarrow \infty }\alpha _{n}=0\), and \(\sum _{n=0}^{\infty }\alpha _{n}=\infty \);

  • \(\Vert e_{n}\Vert \le \eta _{n}\Vert J_{c_{n}}(x_{n}+e_{n})-x_{n}\Vert \), where \(\{\eta _n\}\) is such that either \(\sum _{n=0}^{\infty }\eta ^{2}_{n}<\infty \) or \(\lim _{n\rightarrow \infty }\frac{\eta _{n}^{2}}{\alpha _{n}}=0\).

Then, the sequence \(\{x_n\}\) generated by Algorithm (45) converges strongly to \(P_{S}(u)\).

4 Conclusions

We have obtained two modifications of the proximal point algorithm (PPA) of Rockafellar [1] and proved their strong convergence in a general (possibly infinite-dimensional) Hilbert space under weaker conditions on the errors. The key ingredient is a new technique of argument that carefully chooses subsequences to establish the relation (24) or (42), which plays a central role in the strong convergence of our modifications of PPA.

Our contribution in this paper is theoretical. Numerical simulations of the performance of Algorithms (14) and (26) will be reported elsewhere.