1 Introduction

In a fundamental work, Rockafellar [1] proved that the subdifferential of a proper, convex and lower semicontinuous function is a maximal monotone operator. Since the zeroes of the subdifferential correspond to the minimizers of the function, finding and approximating those zeroes is a problem of fundamental importance in optimization. One of the most effective iterative methods for solving this problem is the proximal point algorithm (abbreviated PPA), which was studied by Rockafellar [2]. The first mean ergodic theorem for nonexpansive self-mappings of a nonempty closed and convex subset of a Hilbert space was proved by Baillon [3]. For more details on maximal monotone operators and nonexpansive mappings, we refer the reader to [4,5,6,7]. In this paper, motivated by our approach in [8,9,10,11,12], we consider the regularization of two PPAs with errors for a maximal monotone operator in a real Hilbert space, previously studied, respectively, by Xu, and by Boikanyo and Morosanu, who assumed the zero set of the operator to be nonempty. First, by providing a counterexample, we point out an error in the proof of Xu’s theorem concerning the uniformity of the strong convergence with respect to a parameter. Subsequently, we prove a correct extended version of that theorem under appropriate conditions, giving also a necessary and sufficient condition for the zero set of the operator to be nonempty and showing the strong convergence of the regularized scheme to a zero of the operator. We note in passing that Song and Yang [13] had already noticed and corrected an error in Xu’s paper, and Wang [14] had extended a theorem in Xu’s paper. However, neither of these authors had noticed the error that we point out and correct in our paper. This gives a first affirmative answer to the open question raised by Boikanyo and Morosanu concerning the design of a PPA where the error sequence tends to zero and a parameter sequence remains bounded.
Then we investigate the second PPA under various new conditions on the parameter sequences and prove theorems similar to those above, providing also a second affirmative answer to the open question of Boikanyo and Morosanu. We note also that Yao and Shahzad [15] had already provided an affirmative answer to this open question; however, they used a PPA different from the one considered by Boikanyo and Morosanu and imposed conditions on the parameter sequences stronger than ours, whereas our paper answers the open question affirmatively for the same PPA under mild conditions on the parameter sequences. Finally, we present some applications of our new convergence results to optimization and variational inequalities.

2 Preliminaries

Let H be a real Hilbert space with inner product \(\langle \cdot ,\cdot \rangle \) and norm \(\Vert \cdot \Vert \). An operator \(T:D(T) \subseteq H \rightrightarrows H\) is said to be monotone if its graph G(T) is a monotone subset of \(H \times H\), that is,

$$\begin{aligned} \langle y_2-y_1,x_2-x_1 \rangle \ge 0, \end{aligned}$$

for all \(x_1,x_2 \in D(T)\) and all \(y_1 \in T(x_1)\) and \(y_2 \in T(x_2)\). Clearly, if T is monotone, then its inverse defined by \(T^{-1}:= \{(y,x): (x,y) \in G(T)\}\) is also a monotone operator. We say that T is maximal monotone if T is monotone and the graph of T is not properly contained in the graph of any other monotone operator. It is known that in a Hilbert space, this is equivalent to the range of the operator \(I+T\) being all of H, where I is the identity operator on H, i.e., \(R(I+T)=H\). Also it is clear that T is maximal monotone if and only if \(T^{-1}\) is maximal monotone. For a maximal monotone operator T, and for every \(t > 0\), the operator \(J_{t}^{\mathrm{T}}: H \longrightarrow H\) defined by \(J_{t}^{\mathrm{T}}(x):=(I+tT)^{-1}(x)\) is well defined, single-valued and nonexpansive on H. It is called the resolvent of T. Consider the following set-valued problem: find \(x \in D(T)\) such that \(0 \in T(x)\). One of the most effective iterative methods for solving this problem is the PPA, which generates an iterative sequence \((x_n)\) as follows: \(x_{n+1}=J_{\gamma _n}^{\mathrm{T}}(x_n+e_n),\) for all \(n \ge 0\), where \(x_0 \in H\) is a given starting point, \((\gamma _n) \subset ]0,+ \infty [\) and \((e_n)\) is a sequence of computational errors. In 2006, Xu [16] proposed the following regularization of the proximal point algorithm:

$$\begin{aligned} x_{n+1}=J_{c_n}^{\mathrm{T}}((1-t_n)x_n+ t_n u+e_n), \end{aligned}$$
(1)

where \(x_0,u \in H\), \(t_n \in ]0,1[\), \(c_n \in ]0,+ \infty [\), for all \(n \ge 0\). He showed that if \(T^{-1}(0) \ne \emptyset \) and \(t_n \rightarrow 0\), \(\sum _{n=0}^{\infty } t_n = \infty \), \(\sum _{n=0}^{\infty } \Vert e_n\Vert < \infty \), \(t_{n+1} \le \frac{c_{n+1}}{c_n}\) and either \(\lim _{n \rightarrow \infty }\frac{1}{t_n}|\frac{c_{n+1}t_n}{c_nt_{n+1}}-1|=0\) or \(\sum _{n=0}^{\infty }|\frac{c_{n+1}t_n}{c_nt_{n+1}}-1| < \infty \), then \((x_n)\) converges strongly to an element of \(T^{-1}(0)\) which is nearest to u. This algorithm essentially includes the algorithm that was introduced by Lehdili and Moudafi [17].
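To make the scheme concrete, consider the toy operator \(Tx=x\) on \(\mathbb {R}\), whose resolvent is \(J_c^{\mathrm{T}}(x)=x/(1+c)\) and whose zero set is \(\{0\}\). The following sketch is our own illustration, not from [16]; the parameter choices \(t_n=1/\sqrt{n+2}\), \(c_n=1\), \(e_n=0\) are assumptions that satisfy \(t_n \rightarrow 0\), \(\sum t_n = \infty \), \(t_{n+1} \le c_{n+1}/c_n\) and \(\lim _{n \rightarrow \infty }\frac{1}{t_n}|\frac{c_{n+1}t_n}{c_nt_{n+1}}-1|=0\).

```python
# Numerical sketch of Xu's regularized PPA (1) for the toy operator
# T x = x on the real line, whose resolvent is J_c^T(x) = x / (1 + c).
# The parameter choices t_n = 1/sqrt(n+2), c_n = 1, e_n = 0 are
# illustrative assumptions satisfying Xu's hypotheses.

def resolvent(x, c):
    """Resolvent of T x = x: solve y + c*y = x for y."""
    return x / (1.0 + c)

def xu_ppa(x0, u, n_steps):
    x = x0
    for n in range(n_steps):
        t_n = 1.0 / (n + 2) ** 0.5
        c_n = 1.0
        e_n = 0.0
        x = resolvent((1.0 - t_n) * x + t_n * u + e_n, c_n)
    return x

# Since T^{-1}(0) = {0}, the iterates should approach P_F(u) = 0.
x_final = xu_ppa(x0=1.0, u=1.0, n_steps=200000)
```

For this operator the iterates track the level \(t_nu/(1+t_n)\), which tends to zero at the (slow) rate of \(t_n\); this is why many iterations are used in the sketch.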

Recently, Boikanyo and Morosanu [18] considered the sequence generated by the following algorithm:

$$\begin{aligned} x_{n+1}= \alpha _n u+(1-\alpha _n)J_{\beta _n}^{\mathrm{T}}(x_n)+e_n, \, \, n \ge 0 \end{aligned}$$
(2)

where \(x_0,u \in H\), \(\alpha _n \in ]0,1[\), \(\beta _n \in ]0,+ \infty [\), for all \(n \ge 0\). They showed that if \(T^{-1}(0) \ne \emptyset \) and \(\alpha _n \rightarrow 0\), \(\beta _n \rightarrow +\infty \), \(\sum _{n=0}^{\infty } \alpha _n = \infty \) and either \(\frac{\Vert e_n \Vert }{\alpha _n} \rightarrow 0\) or \(\sum _{n=0}^{\infty } \Vert e_n \Vert < \infty \), then \((x_n)\) converges strongly to an element of \(T^{-1}(0)\) which is nearest to u. It is easy to see that algorithms (1) and (2) are in fact equivalent (see, e.g., [18]). For more information on PPA, we refer the reader to [2, 8, 16, 17, 19, 20], and the references therein.
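Algorithm (2) can be sketched in the same toy setting \(Ty=y\) on \(\mathbb {R}\), where \(J_{\beta }^{\mathrm{T}}(y)=y/(1+\beta )\) and \(T^{-1}(0)=\{0\}\). The choices \(\alpha _n=1/(n+1)\), \(\beta _n=n+1\), \(e_n=0\) below are illustrative assumptions satisfying the hypotheses of [18]:

```python
# Sketch of the Boikanyo-Morosanu scheme (2) for T y = y on the real line.
# Resolvent: J_beta(y) = y / (1 + beta).  With alpha_n -> 0, beta_n -> infinity,
# sum alpha_n = infinity and e_n = 0, (y_n) should converge to P_F(u) = 0.

def bm_ppa(y0, u, n_steps):
    y = y0
    for n in range(n_steps):
        alpha_n = 1.0 / (n + 1)
        beta_n = float(n + 1)
        y = alpha_n * u + (1.0 - alpha_n) * y / (1.0 + beta_n)
    return y

y_final = bm_ppa(y0=3.0, u=1.0, n_steps=10000)
```

Because \(\beta _n \rightarrow \infty \) here, the resolvent term is strongly damped and the iterates decay roughly like \(u/n\), noticeably faster than in the previous sketch.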

3 Main Results

In this section, we use contractions as the Tikhonov regularization of the resolvent \(J^{\mathrm{T}}_{c}\) by considering the mapping \(V_t\) defined by

$$\begin{aligned} V_{t}x:=J^{\mathrm{T}}_{c}((1-t)x+tu+e), \,\,\,\, x \in H, \end{aligned}$$
(3)

where \(t \in ]0,1[\), \(c > 0\) and \(u,e \in H\) are fixed. Since \(J^{\mathrm{T}}_{c}\) is nonexpansive, \(V_t\) is a contraction with constant \(1-t\). By the Banach contraction principle, \(V_t\) has a unique fixed point, which we denote by \(v_t\). Hence,

$$\begin{aligned} v_t:=J^{\mathrm{T}}_{c}((1-t)v_t+tu+e). \end{aligned}$$
(4)

In the sequel, we denote \(F:=T^{-1}(0)\) and \(P_F\) denotes the metric projection map of H onto F.
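For the simple operator \(Tx=x\) on \(\mathbb {R}\), Eq. (4) can be solved explicitly: \(v_t=(tu+e)/(t+c)\). The sketch below, our own illustration, runs the Banach iteration \(x \mapsto V_tx\) (whose contraction ratio for this operator is \((1-t)/(1+c)\)) and compares it with this closed form; the sample values of \(t\), \(c\), \(u\), \(e\) are arbitrary.

```python
# Banach iteration for the contraction V_t x = J_c^T((1-t)x + t*u + e),
# specialized to T x = x on the real line, where J_c^T(x) = x / (1 + c).
# For this operator the fixed point has the closed form (t*u + e) / (t + c).

def regularized_fixed_point(t, c, u, e, n_iter=200):
    x = 0.0  # arbitrary starting point; the contraction has a unique fixed point
    for _ in range(n_iter):
        x = ((1.0 - t) * x + t * u + e) / (1.0 + c)  # x <- V_t(x)
    return x

t, c, u, e = 0.3, 2.0, 1.0, 0.05
v_iter = regularized_fixed_point(t, c, u, e)
v_closed = (t * u + e) / (t + c)
```

The iteration converges geometrically, so a few hundred steps already match the closed form to machine precision.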

3.1 The Regularization Method

In the following theorem, we consider algorithm (5) below and show that the sequence \((v_n)\) converges strongly to \(P_Fu\). We also provide a necessary and sufficient condition for the zero set of T to be nonempty.

Theorem 3.1

Let \(T:D(T) \subseteq H \rightrightarrows H\) be a maximal monotone operator. Let \(u \in H\), \((t_n)_{n=1}^{\infty } \subset ]0,1[\), \((c_n)_{n=1}^{\infty } \subset ]0,\infty [\) and \((e_n)_{n=1}^{\infty } \subset H\). Then the following statements hold:

  1. (i)

    For every \(n \in \mathbb {N}\), there exists a unique \(v_n \in H\) such that

    $$\begin{aligned} v_n:=J^{\mathrm{T}}_{c_n}((1-t_n)v_n+t_n u+e_n). \end{aligned}$$
    (5)
  2. (ii)

    If either one of the following two conditions holds, then \(F:=T^{-1}(0) \ne \emptyset \) if and only if \(\underset{n \rightarrow +\infty }{\liminf } \Vert v_n \Vert < + \infty \) if and only if \((v_n )\) is bounded.

    1. (a)

      \((\frac{e_n}{t_n})\) is bounded and \(\lim _{n \rightarrow \infty }c_n= \infty \).

    2. (b)

      \(\lim _{n \rightarrow \infty }t_n=0\), \((\frac{e_n}{t_n})\) is bounded and \(\liminf _{n \rightarrow \infty } c_n \ge \beta \) for some \(\beta >0 \).

  3. (iii)

    If either one of the following two conditions holds, and if \(F=T^{-1}(0) \ne \emptyset \), then \(s-\lim _{n \rightarrow +\infty }v_n = P_{F}u\).

    1. (a)

      \(\lim _{n \rightarrow \infty }\frac{\Vert e_n\Vert }{t_n}=0\) and \(\lim _{n \rightarrow \infty }c_n= \infty \),

    2. (b)

      \(\lim _{n \rightarrow \infty }t_n=0\), \(\lim _{n \rightarrow \infty }\frac{\Vert e_n\Vert }{t_n}=0\) and \(\liminf _{n \rightarrow \infty } c_n \ge \beta \) for some \(\beta >0 \).

Proof

  1. (i)

    From (4), it follows that for every \(n \in \mathbb {N}\) there exists a unique \(v_n \in H\) such that (5) holds.

  2. (ii)

    Assume that \(F:=T^{-1}(0) \ne \emptyset \) and let \(p \in T^{-1}(0)\). Since \(J_{c_n}^{\mathrm{T}}(p)=p\), from (5) and the nonexpansiveness of the resolvent operator, we have:

    $$\begin{aligned} \Vert v_{n}-p\Vert \le&(1-t_n) \Vert v_n -p\Vert + t_n \Vert u-p \Vert + \Vert e_n \Vert . \end{aligned}$$
    (6)

    Hence, \(\Vert v_n -p \Vert \le \Vert u-p \Vert + \frac{\Vert e_n\Vert }{t_n}\), and since \((\frac{e_n}{t_n})\) is bounded, this shows that \((v_n)\) is bounded; hence \(\liminf _{n \rightarrow +\infty } \Vert v_n \Vert < + \infty \).

Conversely, assume that \(\liminf _{n \rightarrow +\infty } \Vert v_n \Vert < + \infty \). Then, there exists a subsequence (n(k)) of \(\mathbb {N}\) and some \(v_{\infty } \in H\) such that \(v_{n(k)} \rightharpoonup v_{\infty }\). Now for every \(x \in D(T)\) and \(y \in T(x)\), by the monotonicity of T, we get:

$$\begin{aligned} \left\langle \frac{t_{n(k)}(u-v_{n(k)})+e_{n(k)}}{c_{n(k)}}-y, v_{n(k)}-x \right\rangle \ge 0. \end{aligned}$$
(7)

By letting \(k \rightarrow \infty \) in (7), we have \(\langle 0-y,v_{\infty }-x \rangle \ge 0\), and therefore, by the maximality of T, we conclude that \(v_{\infty } \in D(T)\) and \(0 \in T(v_{\infty })\). Hence, \(T^{-1}(0) \ne \emptyset \). The above proof shows that \(T^{-1}(0) \ne \emptyset \) if and only if \((v_n )\) is bounded, and this completes the proof of (ii).

(iii) It follows from (ii) that \(F=T^{-1}(0) \ne \emptyset \) and \((v_n)\) is bounded. Let \(p \in T^{-1}(0)\). By the monotonicity of T, we have: \(\langle \frac{t_{n}(u-v_{n})+e_{n}}{c_{n}}-0, v_{n}-p \rangle \ge 0\) and hence \( \langle v_n-p,v_n - u \rangle \le \langle \frac{e_n}{t_n},v_n-p \rangle .\) Therefore,

$$\begin{aligned} \Vert v_n -p \Vert ^{2}= & {} \langle v_n-p,v_n -u \rangle + \langle v_n-p,u-p \rangle \nonumber \\\le & {} \left\langle \frac{e_n}{t_n}, v_n-p \right\rangle + \langle v_n-p,u-p \rangle \nonumber \\\le & {} \frac{\Vert e_n \Vert \cdot \Vert v_n-p \Vert }{t_n} + \langle v_n-p,u-p \rangle . \end{aligned}$$
(8)

Since \((v_n)\) is bounded, there exists a subsequence (n(k)) of \(\mathbb {N}\) and \({\overline{v}} \in H\) such that \(v_{n(k)} \rightharpoonup {\overline{v}}\), and

$$\begin{aligned} \limsup _{n \rightarrow \infty } \langle v_n-p,u -p \rangle = \lim _{k \rightarrow \infty } \langle v_{n(k)}-p,u -p \rangle = \langle {\overline{v}}-p,u -p \rangle . \end{aligned}$$

Now for every \((x,y) \in G(T)\):

$$\begin{aligned} \left\langle \frac{t_{n(k)}(u-v_{n(k)})+e_{n(k)}}{c_{n(k)}}-y, v_{n(k)}-x \right\rangle \ge 0. \end{aligned}$$
(9)

Letting \(k \longrightarrow \infty \) in the above inequality, we get \(\langle 0-y,{\overline{v}}-x \rangle \ge 0\), and hence, by the maximality of T, we conclude that \({\overline{v}} \in D(T)\) and \(0 \in T({\overline{v}})\). Also from (8) we get:

$$\begin{aligned} \limsup _{n \rightarrow \infty } \Vert v_n -p \Vert ^2 \le 0+ \langle {\overline{v}}-p,u-p \rangle . \end{aligned}$$
(10)

For \(q=P_{F}u\), since \({\overline{v}} \in T^{-1}(0)\), we have: \(\langle {\overline{v}}-q,u-q \rangle \le 0.\) Now replacing p with q in (10), and using the above inequality, we conclude that \( \limsup _{n \rightarrow \infty } \Vert v_n -q \Vert ^2 =0\) and hence \(s-\lim _{n \rightarrow +\infty }v_n =q= P_{F}u\). This completes the proof. \(\square \)

Remark 3.1

Xu [16] claims in his Theorem 3.1 that the strong convergence in that theorem is uniform with respect to \(c > 0\). However, his proof is not correct, because the choice of \(v_t\) there depends on the choice of c, and in fact the result is not true, as the following example shows. It becomes true if the condition \(c> 0\) is replaced by \(c > \alpha \) for some \(\alpha >0\). Therefore, our Theorem 3.1 is a true extension of Xu [16, Theorem 3.1].

Example 3.1

Let \(T: \mathbb {R} \longrightarrow \mathbb {R}\) be defined by \(Tx=x\). Obviously \(F=T^{-1}(0)= \{ 0 \}\). Let \(u \in \mathbb {R}\). For every \(t \in ]0,1[\) and \(c >0\) there exists a unique \(v_{t}^{c} \in \mathbb {R}\) such that \(v_{t}^{c}=J_{c}^{\mathrm{T}}((1-t)v_{t}^{c}+tu)\) and hence \(v_{t}^{c}=\frac{t}{t+c}u\). Suppose \(u=1\). If \(t_n \longrightarrow 0\) and \(c_n= t_n\), then we have \(v_{t_n}^{c_n}=\frac{1}{2} \nrightarrow P_F(u)=0\).
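The example can be verified numerically. For \(Tx=x\) with \(c=t\), the map \(V_t\) has contraction ratio \((1-t)/(1+t)\), and the sketch below (our own check) confirms that the fixed points equal \(\frac{1}{2}\) for every \(t\), so they do not converge to \(P_F(u)=0\) as \(t \rightarrow 0\).

```python
# Counterexample check: for T x = x, u = 1, e = 0 and c = t, the unique
# fixed point of V_t is v = t/(t+c) = 1/2, independently of t.

def counterexample_fixed_point(t, c, u=1.0, n_iter=20000):
    x = 0.0
    for _ in range(n_iter):
        x = ((1.0 - t) * x + t * u) / (1.0 + c)  # x <- V_t(x), resolvent x/(1+c)
    return x

# Fixed points for c = t as t shrinks: all equal 1/2, never approaching 0.
fixed_points = [counterexample_fixed_point(t, c=t) for t in (0.1, 0.01, 0.001)]
```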

Remark 3.2

Our regularization method introduced in Theorem 3.1 provides an affirmative answer to an open question raised by Boikanyo and Morosanu [18, p. 640], concerning the design of a PPA where \(\lim _{n \rightarrow \infty } \Vert e_n \Vert =0\) and the sequence \((c_n )\) is bounded.

3.2 Proximal Point Algorithm

In the following theorem, we give a necessary and sufficient condition for the zero set of T to be nonempty, in which case we show the strong convergence of the sequence generated by (1).

Theorem 3.2

Let \(T:D(T) \subseteq H \rightrightarrows H\) be a maximal monotone operator. For any fixed \(x_0,u \in H\), let the sequence \((x_n)\) be generated by

$$\begin{aligned} x_{n+1}=J_{c_n}^{\mathrm{T}}((1-t_n)x_n+t_nu+e_n), \end{aligned}$$
(11)

for all \(n \ge 0\), where \(t_n \in ]0,1[\), \(c_n \in ]0,+\infty [\) and \(e_n \in H\) for all \(n \ge 0\). Then the following statements hold:

  1. (i)

    If \(\lim _{n \rightarrow +\infty } c_n= +\infty \), \((e_n) \subset H\) is bounded and

    $$\begin{aligned} \underset{{m \rightarrow \infty }}{\limsup } \sum _{k=1}^{m}(1-t_k)(1-t_{k+1})\cdots (1-t_m) < \infty , \end{aligned}$$
    (12)

    then \(T^{-1}(0) \ne \emptyset \) if and only if \(\liminf _{n \rightarrow \infty }(\Vert x_{n+1} \Vert +\Vert x_n \Vert ) < \infty \) if and only if \((x_n)\) is bounded.

  2. (ii)

    If \(F:=T^{-1}(0) \ne \emptyset \), then for all sequences \((e_n) \subset H\), \((t_n)_{n=1}^{\infty } \subset ]0,1[\) and \((c_n)_{n=1}^{\infty } \subset ]0,\infty [\) such that (12) holds, \(\lim _{n \rightarrow +\infty } c_n= +\infty \) and \(\lim _{n \rightarrow +\infty }\frac{\Vert e_n \Vert }{t_n}=0\), the sequence \((x_n)\) generated by (11) converges strongly to \(P_{F}u\).

Proof

(i) Assume that \(T^{-1}(0)\ne \emptyset \), and let \(p \in T^{-1}(0)\). Since \(J_{c_m}^{\mathrm{T}}(p)=p\), from (11) and the nonexpansiveness of the resolvent operator, we have:

$$\begin{aligned} \Vert x_{m+1}-p \Vert \le (1-t_m)\Vert x_m-p\Vert +t_m \Vert u-p \Vert +\Vert e_m\Vert . \end{aligned}$$
(13)

Let \(M >0\) be such that \(\Vert e_n \Vert < M\) for all \(n \in \mathbb {N}\). Then from (13) we get: \(\Vert x_{m+1}-p\Vert \le (1-t_m)\Vert x_m-p\Vert + \Vert u-p \Vert +M\) for all \(m \ge 0\). Hence, by induction, for all \(n \ge 0\), we get:

$$\begin{aligned} \Vert x_{n+1}-p\Vert \le&(1-t_n) (1-t_{n-1})\cdots (1-t_0) \Vert x_{0}-p \Vert \\&+\left( 1+\sum _{k=1}^{n}(1-t_k)(1-t_{k+1})\cdots (1-t_n) \right) (\Vert u-p \Vert +M ). \end{aligned}$$

This shows that \((x_n)\) is bounded and so \(\liminf _{n \rightarrow \infty }(\Vert x_{n+1} \Vert +\Vert x_n \Vert ) < \infty \).

Conversely, assume that \(\liminf _{n \rightarrow \infty }(\Vert x_{n+1} \Vert +\Vert x_n \Vert ) < \infty \). Then there exists a subsequence (n(k)) of \(\mathbb {N}\) such that \(\Vert x_{n(k)+1} \Vert +\Vert x_{n(k)} \Vert \) is bounded. Therefore, there exists a subsequence (m(l)) of (n(k)) such that \(x_{m(l)+1} \rightharpoonup p\) as \(l \rightarrow +\infty \) for some \(p \in H\). Also \(( x_{m(l)} )\) is a bounded sequence. Now for every \(x \in D(T)\) and \(y \in T(x)\), by the monotonicity of T, we get:

$$\begin{aligned} \left\langle y- \frac{(1-t_{m(l)})x_{m(l)}+t_{m(l)}u+e_{m(l)}-x_{m(l)+1}}{c_{m(l)}},x-x_{m(l)+1} \right\rangle \ge 0. \end{aligned}$$
(14)

Now letting \(l \rightarrow \infty \) in (14), we get \(\langle y-0,x-p \rangle \ge 0\), and hence, by the maximality of T, we conclude that \(p \in D(T)\) and \(0 \in T(p)\). Moreover, the above proof shows that \(T^{-1}(0) \ne \emptyset \) if and only if \((x_n)\) is bounded, and this completes the proof of (i).

(ii) Since (12) holds, there exists some \(M < \infty \) such that for all \(n \in \mathbb {N}\), \( \sum _{k=1}^{n}(1-t_k)(1-t_{k+1})\cdots (1-t_n) < M\).

We know from Theorems 3.1 and 3.2 (i) that the two sequences \((v_n)\) and \((x_n)\) are bounded. Also for all \(m \in \mathbb {N}\)

$$\begin{aligned} \Vert x_{m+1}-v_{m+1} \Vert&\le \Vert x_{m+1}-v_m\Vert + \Vert v_{m+1}-v_m\Vert \nonumber \\&= \Vert J_{c_m}^{\mathrm{T}}((1-t_m)x_m+t_mu+e_m)-J_{c_m}^{\mathrm{T}}((1-t_m)v_m+t_mu+e_m) \Vert \nonumber \\&\quad +\, \Vert v_{m+1}-v_m\Vert \le (1-t_m)\Vert x_m-v_m \Vert + \Vert v_{m+1}-v_m\Vert . \end{aligned}$$
(15)

Let \(\varepsilon >0\) be arbitrary. Since by Theorem 3.1 we have \(s-\lim _{n \longrightarrow \infty } v_n=P_{F}u\), there exists some \(p \in \mathbb {N}\) such that for every \(i \ge p\) we have \(\Vert v_i- v_{i-1} \Vert < \varepsilon \). Hence, from (15) and by using induction, for every \(n \ge p\), we have:

$$\begin{aligned} \Vert x_{n+1}-v_{n+1} \Vert \le&(1-t_n)\Vert x_n-v_n \Vert + \varepsilon \le \cdots \\ \le&(1-t_n)(1-t_{n-1})\cdots (1-t_p)\Vert x_{p-1}-v_{p-1} \Vert \\&+ \left( 1+ \sum _{k=p}^{n}(1-t_k)(1-t_{k+1})\cdots (1-t_n) \right) \varepsilon \\ \le&(1-t_n)(1-t_{n-1})\cdots (1-t_p)\Vert x_{p-1}-v_{p-1} \Vert \\&+ \left( 1+ \sum _{k=1}^{n}(1-t_k)(1-t_{k+1})\cdots (1-t_n) \right) \varepsilon \\ \le&(1-t_n)(1-t_{n-1})\cdots (1-t_p)\Vert x_{p-1}-v_{p-1} \Vert + (1+ M ) \varepsilon . \end{aligned}$$

By the above inequality, we have: \(\underset{n \longrightarrow \infty }{\limsup } \Vert x_{n+1}-v_{n+1} \Vert \le 0+ (1+M)\varepsilon .\) Since \(\varepsilon >0\) is arbitrary, from the above inequality we get \(\lim _{n \rightarrow \infty }\Vert x_n-v_n \Vert =0\), and hence, since \(s-\lim _{n \rightarrow \infty }v_n =P_{F}u\) (Theorem 3.1), we have \(s-\lim _{n \rightarrow \infty }x_n =P_{F}u\). This completes the proof. \(\square \)

Remark 3.3

If \((t_n)_{n=1}^{\infty } \subset ]0,1[\) and \(\liminf _{n\rightarrow +\infty }t_n \ge \alpha \) for some \(\alpha \in ]0,1[\), then clearly (12) holds. However, from (12) we cannot conclude that \(\liminf _{n\rightarrow +\infty }t_n \ge \alpha \) for some \(\alpha \in ]0,1[\). For example, if \(t_n= \frac{1}{\ln (n+2)}\), then the condition (12) holds, but \(\lim _{n \rightarrow \infty } t_n=0\). In fact, condition (12) holds for any sequence \((t_n)_{n=1}^{\infty } \subset ]0,1[\) that converges to zero as \(n\rightarrow \infty \) and satisfies \(t_n > \frac{1}{\ln (n+2)}\), e.g., \(t_n = \frac{1}{\ln (\ln (n+2))}\). Therefore, the following corollary is a direct consequence of Theorem 3.2.

Corollary 3.1

Let \(T:D(T) \subseteq H \rightrightarrows H\) be a maximal monotone operator. For any fixed \(x_0,u \in H\), let the sequence \((x_n)\) be generated by (11), for all \(n \ge 0\), where \(t_n \in ]0,1[\), \(c_n \in ]0,+\infty [\) and \(e_n \in H\) for all \(n \ge 0\). Then the following statements hold.

  1. (i)

    If \(\liminf _{n\rightarrow +\infty }t_n \ge \alpha \) for some \(\alpha \in ]0,1[\), \(\lim _{n \rightarrow +\infty } c_n= +\infty \) and \((e_n) \subset H\) is bounded, then \(T^{-1}(0) \ne \emptyset \) if and only if \(\underset{n\rightarrow +\infty }{\liminf }(\Vert x_{n+1} \Vert +\Vert x_n \Vert ) < \infty \) if and only if \((x_n)\) is bounded.

  2. (ii)

    If \(F:=T^{-1}(0) \ne \emptyset \), \(\underset{n\rightarrow +\infty }{\liminf }t_n \ge \alpha \) for some \(\alpha >0\), \(\lim _{n \rightarrow +\infty } c_n= +\infty \), and \(\lim _{n \rightarrow +\infty } \Vert e_n \Vert =0\), then the sequence \((x_n)\) generated by (11) converges strongly to \(P_{F}u\).

In the following corollary, we prove a similar theorem for the scheme (2) considered by Boikanyo and Morosanu [18].

Corollary 3.2

Let \(T:D(T) \subseteq H \rightrightarrows H\) be a maximal monotone operator. For any fixed \(y_0,u \in H\), let the sequence \((y_n)\) be generated by

$$\begin{aligned} y_{n+1}=t_n u+(1-t_n)J_{c_n}^{\mathrm{T}}(y_n)+e_n, \end{aligned}$$
(16)

for all \(n \ge 0\), where \(t_n \in ]0,1[\), \(c_n \in ]0,+\infty [\) and \(e_n \in H\) for all \(n \ge 0\). Then the following statements hold:

  1. (i)

    If the condition (12) holds, \(\limsup _{n\rightarrow +\infty }t_n < 1\), \(\lim _{n \rightarrow +\infty } c_n= +\infty \) and \((e_n) \subset H\) is bounded, then \(T^{-1}(0) \ne \emptyset \) if and only if \(\liminf _{n\rightarrow +\infty }(\Vert y_{n+1} \Vert +\Vert y_n \Vert ) < \infty \) if and only if \((y_n)\) is bounded.

  2. (ii)

    If \(F:=T^{-1}(0) \ne \emptyset \), \(\lim _{n \rightarrow +\infty } \Vert e_n \Vert =0\), \(\lim _{n\rightarrow +\infty }t_n = t\) for some \(t \in ]0,1[\), and \(\lim \nolimits _{n\rightarrow +\infty }c_n = +\infty \), then the sequence \((y_n)\) generated by (16) converges strongly to \(tu+(1-t)P_{F}u\).

  3. (iii)

    If \(F:=T^{-1}(0) \ne \emptyset \), (12) holds, \(\lim _{n\rightarrow +\infty }t_n = 0\), \(\lim _{n \rightarrow +\infty }\frac{\Vert e_n \Vert }{t_n}=0\) and \(\lim _{n\rightarrow +\infty }c_n = +\infty \), then the sequence \((y_n)\) generated by (16) converges strongly to \(P_{F}u\).

Proof

(i) For every \(n \ge 0\), define

$$\begin{aligned} x_n:= \frac{y_{n+1}-t_nu-e_n}{1-t_n}, \end{aligned}$$
(17)

then we have \( x_n = J_{c_n}^{\mathrm{T}}(y_n)=J_{c_n}^{\mathrm{T}}(t_{n-1} u+(1-t_{n-1})x_{n-1}+e_{n-1})\).

Also \(\liminf _{n\rightarrow +\infty }(\Vert y_{n+1} \Vert +\Vert y_n \Vert ) < \infty \) if and only if \(\liminf _{n\rightarrow +\infty }(\Vert x_{n+1} \Vert +\Vert x_n \Vert ) < \infty \). Hence, the conclusion follows from a proof similar to that of Theorem 3.2 (i).

(ii) and (iii) Let \((x_n)\) be generated by (17). Then, by a similar proof as in Theorem 3.2 (ii), \(s-\lim _{n \rightarrow \infty }x_n =P_{F}u\). Hence, by

$$\begin{aligned} y_n:= t_{n-1}u+(1-t_{n-1})x_{n-1}+e_{n-1} \end{aligned}$$

we conclude that \(s-\lim _{n \rightarrow \infty }y_n =tu+(1-t)P_{F}u\), where in case (iii) \(t=0\), and this completes the proof. \(\square \)

The following examples show that, without additional assumptions, the condition \(\lim _{n\rightarrow +\infty }c_n = +\infty \) cannot be replaced by the boundedness of \((c_n)\). In the first example \(T^{-1}(0)=\emptyset \), and in the second one \(T^{-1}(0) \ne \emptyset \).

Example 3.2

Let \(T: \mathbb {R} \longrightarrow \mathbb {R}\) be defined by \(Tx=1\). Obviously T is a maximal monotone operator. By taking \(c_n=1\), \( e_n=0\), \(t_n=\frac{1}{2}\) for all \(n \ge 0\), \(x_0=0\) and \(u=0\), we have \(x_{n+1}=\frac{1}{2} x_n-1\). (The sequence \((x_n)\) is generated by (11).) Then \((x_n)\) is a decreasing sequence, and obviously \(\lim _{n \rightarrow +\infty } x_n=-2\), but \(T^{-1}(0)=\emptyset \).

Example 3.3

Let \(T: \mathbb {R} \longrightarrow \mathbb {R}\) be defined by \(Tx=x+1\). Obviously T is a maximal monotone operator. By taking \(c_n=1\), \(e_n=0\), \(t_n=\frac{1}{2}\) for all \(n \ge 0\), \(x_0=1\) and \(u=0\) we have \(x_{n+1}=\frac{1}{4} x_n- \frac{1}{2}\) (the sequence \((x_n)\) is generated by (11)). Then \((x_n)\) is a decreasing sequence, and obviously \(\lim _{n \rightarrow +\infty } x_n=\frac{-2}{3}\), but \(T^{-1}(0)=\{ -1 \}\).
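Both examples can be checked numerically. For \(Tx=1\) the resolvent is \(J_c^{\mathrm{T}}(x)=x-c\), and for \(Tx=x+1\) it is \(J_c^{\mathrm{T}}(x)=\frac{x-c}{1+c}\); with the parameters above, iteration (11) reduces to the stated affine recursions. A short sketch of both checks:

```python
# Numerical check of Examples 3.2 and 3.3 (c_n = 1, t_n = 1/2, e_n = 0, u = 0).

def iterate(f, x0, n_steps=200):
    x = x0
    for _ in range(n_steps):
        x = f(x)
    return x

# Example 3.2: T x = 1, J_1(x) = x - 1, so x_{n+1} = x_n/2 - 1 -> -2,
# although T^{-1}(0) is empty.
limit_32 = iterate(lambda x: 0.5 * x - 1.0, x0=0.0)

# Example 3.3: T x = x + 1, J_1(x) = (x - 1)/2, so x_{n+1} = x_n/4 - 1/2 -> -2/3,
# although the unique zero of T is -1.
limit_33 = iterate(lambda x: 0.25 * x - 0.5, x0=1.0)
```

Both recursions are affine contractions, so their limits are the unique fixed points \(-2\) and \(-2/3\), which differ from a zero of the respective operator exactly as the examples assert.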

In the following theorem, we give another necessary and sufficient condition for the zero set of T to be nonempty, and show the strong convergence of the corresponding PPA.

Theorem 3.3

Let \(T:D(T) \subseteq H \rightrightarrows H\) be a maximal monotone operator. For any fixed \(x_0,y_0,z_0,u \in H\), let the sequences \((x_n)\), \((y_n)\) and \((z_n)\) be generated by

$$\begin{aligned} x_{n+1}&=J_{c_n}^{\mathrm{T}}((1-t_n)x_n+t_nu+e_n) \\ y_{n+1}&=J_{c_n}^{\mathrm{T}}((1-t_n)y_n)\\ z_{n+1}&=J_{\gamma }^{\mathrm{T}}((1-t_n)z_n), \end{aligned}$$

for all \(n \ge 0\), where \(t_n \in ]0,1[\), \(c_n \in [\gamma ,+\infty [\) for some \(\gamma \in ]0,+\infty [\), and \(e_n \in H\) for all \(n \ge 0\). Suppose that (12) holds. Then the following statements hold:

  1. (i)

    If \((e_n)\) is bounded, then \((x_n)\) is bounded if and only if \((y_n)\) is bounded. Also if \((y_n)\) is bounded, then \((z_n)\) is bounded too, but the converse is not true, as Remark 3.6 shows.

  2. (ii)

    If \((e_n)\) is bounded, \(\lim _{n \longrightarrow \infty }t_n=0\) and \(\sum _{n=1}^{\infty } |t_n-t_{n-1}| < +\infty \) then \(F=T^{-1}(0)\ne \emptyset \) if and only if \((x_n)\) is bounded.

  3. (iii)

    If \(\lim _{n \rightarrow \infty }t_n=0\), \(\lim _{n \rightarrow \infty }\frac{\Vert e_n\Vert }{t_n}=0\), \(\sum _{n=1}^{\infty } |t_n-t_{n-1}| < +\infty \) and \(F=T^{-1}(0)\ne \emptyset \), then \(s-\lim _{n \longrightarrow \infty }x_n=s-\lim _{n \longrightarrow \infty }y_n=s-\lim _{n \longrightarrow \infty }z_n=P_{F}(0)\).

Proof

(i) For all \(n \ge 0\), by using the fact that the resolvent operator is nonexpansive and by induction, we get:

$$\begin{aligned} \Vert x_{n+1}-y_{n+1} \Vert \le&(1-t_n)\Vert x_n -y_n \Vert +t_n \Vert u\Vert +\Vert e_n \Vert \nonumber \\ \le&\cdots \le (1-t_n)(1-t_{n-1})\cdots (1-t_0) \Vert x_{0}-y_{0} \Vert \nonumber \\&+(t_n+t_{n-1}(1-t_n)+\cdots +t_0(1-t_1)\cdots (1-t_n) )\Vert u \Vert \nonumber \\&+ ( \Vert e_n \Vert +\Vert e_{n-1}\Vert (1-t_n)+\cdots +\Vert e_{0} \Vert (1-t_1) \cdots (1-t_n) ). \end{aligned}$$
(18)

Since \((e_n)\) is bounded, there exists some \(M_1 >0\) such that \(\Vert e_n \Vert \le M_1\), for all \(n \ge 0\) and \(\Vert u \Vert \le M_1\). Hence, from (18) we conclude that

$$\begin{aligned} \Vert x_{n+1}-y_{n+1} \Vert\le & {} (1-t_n)(1-t_{n-1})\cdots (1-t_0) \Vert x_{0}-y_{0}\Vert \\&+(t_n+t_{n-1}(1-t_n)+\cdots +t_0(1-t_1)\cdots (1-t_n) ) M_1\\&+ ( 1+ (1-t_n)+\cdots +(1-t_1) \cdots (1-t_n) )M_1 \\\le & {} \Vert x_{0}-y_{0} \Vert + 2M_1 \left( 1+ \sum _{k=1}^{n}(1-t_k)(1-t_{k+1}) \cdots (1-t_n) \right) . \end{aligned}$$

Since (12) holds, the above inequality shows that \((x_n)\) is bounded if and only if \((y_n)\) is bounded.

Now for all \(m \ge 0\), by using the resolvent identity, and the fact that the resolvent operator is nonexpansive, we have:

$$\begin{aligned} \Vert y_{m+1} -z_{m+1} \Vert= & {} \Vert J_{c_m}^{\mathrm{T}}((1-t_m)y_m)-J_{\gamma }^{\mathrm{T}}((1-t_m)z_m) \Vert \nonumber \\= & {} \Vert J_{\gamma }^{\mathrm{T}}\left( \frac{\gamma }{c_m}(1-t_m)y_m+\left( 1-\frac{\gamma }{c_m}\right) y_{m+1} \right) -J_{\gamma }^{\mathrm{T}}((1-t_m)z_m)\Vert \nonumber \\\le & {} \Vert \frac{\gamma }{c_m}(1-t_m)y_m+\left( 1-\frac{\gamma }{c_m}\right) y_{m+1} -(1-t_m)z_m\Vert \nonumber \\= & {} \Vert (1-t_m)(y_m-z_m) + \left( 1-\frac{\gamma }{c_m}\right) (y_{m+1}-(1-t_m)y_m)\Vert \nonumber \\\le & {} (1-t_m) \Vert y_m-z_m \Vert + \left( 1-\frac{\gamma }{c_m}\right) \Vert y_{m+1}-(1-t_m)y_m\Vert . \end{aligned}$$
(19)

If \((y_n)\) is bounded, then there exists some \(M_2 >0\) such that for all \(m \ge 0\):

$$\begin{aligned} \left( 1-\frac{\gamma }{c_m}\right) \Vert y_{m+1}-(1-t_m)y_m\Vert \le M_2. \end{aligned}$$
(20)

From the inequalities (19) and (20), and by induction, for all \(n \ge 0\), we get:

$$\begin{aligned} \Vert y_{n+1}-z_{n+1} \Vert \le&(1-t_n)\Vert y_n -z_n \Vert +M_2 \\ \le&\cdots \le (1-t_n)(1-t_{n-1}) \cdots (1-t_0)\Vert y_{0}-z_{0}\Vert \\&+ M_2 ( 1+(1-t_n)+(1-t_n)(1-t_{n-1})+ \cdots \\&+(1-t_n) \cdots (1-t_0)). \end{aligned}$$

Since (12) holds, the above inequality shows that \((z_n)\) is bounded.

(ii) Assume that \(T^{-1}(0) \ne \emptyset \). By a proof similar to that of Theorem 3.2 (i), \((x_n)\) is bounded. Conversely, assume that \((x_n)\) is bounded. Then from (i), \((z_n)\) is bounded too. First, let us show that \((z_n)\) converges. Let \(M_3 > 0\) be such that \(\Vert z_n \Vert \le M_3\) for all \(n \ge 0\). Since the resolvent operator is nonexpansive, by induction, we have for all \(n \ge 1\):

$$\begin{aligned} \Vert z_{n+1}-z_n \Vert \le&\Vert (1-t_n)z_n - (1-t_{n-1})z_{n-1}\Vert \nonumber \\ =&\Vert (1-t_n)(z_n-z_{n-1}) - (t_n-t_{n-1})z_{n-1}\Vert \nonumber \\ \le&(1-t_n)\Vert z_n-z_{n-1} \Vert + |t_n-t_{n-1}| \Vert z_{n-1}\Vert \nonumber \\ \le&(1-t_n)\Vert z_n-z_{n-1} \Vert + M_3 |t_n-t_{n-1}| \nonumber \\ \le&\cdots \le (1-t_n)(1-t_{n-1}) \cdots (1-t_1)\Vert z_1-z_0 \Vert \nonumber \\&+ M_3 \left( |t_n-t_{n-1}|+\sum _{k=1}^{n-1} |t_{k}-t_{k-1}|(1-t_{k+1})\cdots (1-t_n) \right) . \end{aligned}$$
(21)

From the above inequality, we have

$$\begin{aligned} \sum _{n=2}^{\infty }\Vert z_{n+1}-z_n \Vert\le & {} \left( \sum _{n=2}^{\infty }(1-t_1)(1-t_2) \cdots (1-t_n) \right) \Vert z_1-z_0 \Vert \nonumber \\&+ M_3 \sum _{n=2}^{\infty } \left( |t_n-t_{n-1}|+\sum _{k=1}^{n-1} |t_{k}-t_{k-1}|(1-t_{k+1})\cdots (1-t_n) \right) \nonumber \\\le & {} \left( \sum _{n=2}^{\infty }(1-t_1)(1-t_2) \cdots (1-t_n) \right) \Vert z_1-z_0 \Vert \nonumber \\&+ M_3 \sum _{k=1}^{\infty }|t_k-t_{k-1}|\left( 1+ \sum _{n=k+1}^{\infty } (1-t_{k+1})(1-t_{k+2})\cdots (1-t_n) \right) .\nonumber \\ \end{aligned}$$
(22)

We are going to show that \(\sum _{n=2}^{\infty }(1-t_1)(1-t_2) \cdots (1-t_n) < +\infty \) and that there exists some \(M_4 \in ]0,+\infty [\) such that

$$\begin{aligned} \sum _{n=k+1}^{\infty } (1-t_{k+1})(1-t_{k+2})\cdots (1-t_n) \le M_4, \end{aligned}$$

for all \(k \ge 1\). Then, we can conclude from (22) that \((z_n)\) is convergent. First, let us show that \(\sum _{n=2}^{\infty }(1-t_1)(1-t_2) \cdots (1-t_n) < +\infty \). For every \(l \in \mathbb {N}\), since \(\lim _{n\longrightarrow \infty } t_n=0\), there exists an integer \(n \in \mathbb {N}\) such that \(n > 2l\) and \(1-t_i < 1-t_{n-i+1}\) for all \(1 \le i \le l\). Then,

$$\begin{aligned}&\sum _{m=1}^{l}(1-t_1)(1-t_2)\cdots (1-t_m) \\&\quad = (1-t_1)+(1-t_1)(1-t_2)+ \cdots + (1-t_1)(1-t_2)\cdots (1-t_l) \\&\quad \le (1-t_n)+(1-t_n)(1-t_{n-1})+\cdots +(1-t_n)(1-t_{n-1})\cdots (1-t_{n-l+1}) \\&\quad \le \sum _{k=1}^{n}(1-t_k)(1-t_{k+1}) \cdots (1-t_n). \end{aligned}$$

Therefore, by (12) and the above inequality, we get: \(\sum _{n=2}^{\infty }(1-t_1)(1-t_2) \cdots (1-t_n) < +\infty \).

Also, for all \(k \ge 1\) and all \(m > k\), there exists an integer \(N \in \mathbb {N}\) (large enough) such that \(1-t_{k+i}<1-t_{N-i+1}\) for all \(1 \le i \le m-k\) and hence

$$\begin{aligned}&\sum _{n=k+1}^{m}(1-t_{k+1})(1-t_{k+2})\cdots (1-t_n) \\&\quad =(1-t_{k+1})+(1-t_{k+1})(1-t_{k+2})+ \cdots + (1-t_{k+1})(1-t_{k+2})\cdots (1-t_m) \\&\quad \le (1-t_N)+(1-t_N)(1-t_{N-1})+\cdots +(1-t_N)(1-t_{N-1})\cdots (1-t_{N-m+1}) \\&\quad \le \sum _{l=1}^{N}(1-t_l)(1-t_{l+1}) \cdots (1-t_N). \end{aligned}$$

Therefore, by (12) and the above inequality, we conclude that there exists some \(M_4 \in ]0,+\infty [\) such that \(\sum _{n=k+1}^{\infty } (1-t_{k+1})(1-t_{k+2})\cdots (1-t_n) \le M_4\), for all \(k \ge 1\). Then (22) shows that \((z_n)\) is convergent. Let \(s-\lim _{n\longrightarrow \infty }z_n=p\). For every \((x,y) \in G(T)\), by the monotonicity of T, we have for all \(n \in \mathbb {N}\):

$$\begin{aligned} \left\langle y- \frac{(1-t_n)z_n-z_{n+1}}{\gamma }, x-z_{n+1}\right\rangle \ge 0. \end{aligned}$$

By letting \(n \longrightarrow \infty \) in the above inequality, we get \(\langle y- 0, x-p\rangle \ge 0\), and hence, by the maximality of T, \(p \in D(T)\) and \(0 \in T(p)\). So \(T^{-1}(0) \ne \emptyset \).

(iii) It follows from Theorem 3.1 (iii) that \(s-\lim _{n\longrightarrow \infty }v_n=P_{F}(0)\), where

\(v_n=J_{c_n}^{\mathrm{T}}((1-t_n)v_n)\), i.e., (5) with \(u=0\) and \(e_n=0\), for all \(n \ge 0\).

For every \(\varepsilon >0\), there exists an integer \(n_0 \in \mathbb {N}\) such that for all \(m \ge n_0\): \(\Vert v_{m+1}-v_m \Vert \le \varepsilon \). First we show that \(s-\lim _{n \longrightarrow \infty }y_n=P_{F}(0)\).

By induction and the fact that the resolvent operator is nonexpansive, for all \(m \ge n_0\), we have:

$$\begin{aligned} \Vert y_{m+1}-v_{m+1} \Vert\le & {} \Vert y_{m+1}-v_m\Vert +\Vert v_m-v_{m+1}\Vert \le (1-t_m)\Vert y_m- v_m\Vert + \varepsilon \nonumber \\\le & {} \cdots \le (1-t_m)(1-t_{m-1}) \cdots (1-t_{n_0})\Vert y_{n_0}-v_{n_0}\Vert \nonumber \\&+\,\varepsilon ( 1+(1-t_m)+\cdots +(1-t_m)\cdots (1-t_{n_0+1}) ) \nonumber \\\le & {} (1-t_m)(1-t_{m-1}) \cdots (1-t_{n_0})\Vert y_{n_0}-v_{n_0}\Vert \nonumber \\&+\,\varepsilon \left( 1+\sum _{k=1}^{m}(1-t_k)(1-t_{k+1})\cdots (1-t_{m}) \right) . \end{aligned}$$
(23)

Since (12) holds, there exists some \(M_5 \in ]0,+\infty [\) such that, for all \(n \ge 1\):

$$\begin{aligned} \sum _{k=1}^{n}(1-t_k)(1-t_{k+1})\cdots (1-t_n) \le M_5. \end{aligned}$$
(24)

Hence, from (23) and (24), for all \(n \ge n_0\) we have \(\Vert y_{n+1}-v_{n+1} \Vert \le (1-t_n)(1-t_{n-1})\cdots (1-t_{n_0}) \Vert y_{n_0}-v_{n_0}\Vert +\varepsilon (M_5+1)\), and therefore, \(\limsup _{n\longrightarrow \infty }\Vert y_{n+1}-v_{n+1} \Vert \le \varepsilon (M_5+1)\), for every \(\varepsilon >0\). Hence, \(\lim _{n \longrightarrow \infty }\Vert y_n-v_n\Vert =0\). Since \(s-\lim _{n \longrightarrow \infty }v_n=P_{F}(0)\), we conclude that \(s-\lim _{n \longrightarrow \infty }y_n=P_{F}(0)\). Also, \(s-\lim _{n \longrightarrow \infty }z_n=P_{F}(0)\), since \((z_n)\) is a special case of \((y_n)\), with constant sequence \((c_n)\). Finally, we prove that \(s-\lim _{n \longrightarrow \infty }x_n=P_{F}(0)\).

Let \(\varepsilon >0\) be arbitrary. Since \(\lim _{n \longrightarrow \infty }\Vert e_n\Vert =\lim _{n \longrightarrow \infty }t_n=0\), there exists an integer \(N_0 \in \mathbb {N}\) such that for all \(k \ge N_0\), \(\Vert e_k \Vert < \varepsilon \) and \(t_k < \varepsilon \).

For all \(n \ge N_0\), from the inequality (18), we deduce that

$$\begin{aligned} \Vert x_{n+1}-y_{n+1}\Vert\le & {} (1-t_n)(1-t_{n-1})\cdots (1-t_0) \Vert x_0-y_0\Vert \\&+\, ( \varepsilon +\varepsilon (1-t_n)+ \cdots +\varepsilon (1-t_{N_0+1})(1-t_{N_0+2})\cdots (1-t_n) \\&+\,t_{N_0-1}(1-t_{N_0})\cdots (1-t_n)+\cdots +t_0(1-t_1)(1-t_2)\cdots (1-t_n) ) \Vert u \Vert \\&+\, ( \varepsilon +\varepsilon (1-t_n)+ \cdots +\varepsilon (1-t_{N_0+1})(1-t_{N_0+2})\cdots (1-t_n) \\&+\,\Vert e_{N_0-1}\Vert (1-t_{N_0})\cdots (1-t_n)+\cdots +\Vert e_0\Vert (1-t_1)(1-t_2)\cdots (1-t_n) ) \\= & {} (1-t_n)(1-t_{n-1})\cdots (1-t_0) \Vert x_0-y_0\Vert \\&+\, (1+\Vert u\Vert )\varepsilon ( 1+(1-t_n)+ \cdots +(1-t_{N_0+1})(1-t_{N_0+2})\cdots (1-t_n) ) \\&+\,(t_{N_0-1}\Vert u\Vert +\Vert e_{N_0-1}\Vert )(1-t_{N_0})\cdots (1-t_n) \\&+\, (t_{N_0-2}\Vert u\Vert +\Vert e_{N_0-2}\Vert )(1-t_{N_0-1})\cdots (1-t_n) \\&+\cdots +(t_0\Vert u\Vert +\Vert e_0\Vert )(1-t_1)(1-t_2)\cdots (1-t_n) \\\le & {} (1-t_n)(1-t_{n-1})\cdots (1-t_0) \Vert x_0-y_0\Vert +(1+\Vert u\Vert )\varepsilon (1+M_5) \\&+\,(t_{N_0-1}\Vert u\Vert +\Vert e_{N_0-1}\Vert )(1-t_{N_0})\cdots (1-t_n) \\&+\, (t_{N_0-2}\Vert u\Vert +\Vert e_{N_0-2}\Vert )(1-t_{N_0-1})\cdots (1-t_n) \\&+\, \cdots +(t_0\Vert u\Vert +\Vert e_0\Vert )(1-t_1)(1-t_2)\cdots (1-t_n). \end{aligned}$$

It follows from the above inequality that:

$$\begin{aligned} \limsup _{n\longrightarrow \infty }\Vert x_{n+1}-y_{n+1}\Vert \le (1+\Vert u\Vert )\varepsilon (1+M_5) \end{aligned}$$

and since \(\varepsilon >0\) is arbitrary, we conclude that \(\lim _{n\longrightarrow \infty }\Vert x_n-y_n\Vert =0\). Since \(s-\lim _{n\longrightarrow \infty }y_n=P_F(0)\), it follows that \(s-\lim _{n\longrightarrow \infty }x_n=P_F(0)\), and this completes the proof. \(\square \)

Remark 3.4

The above theorem provides another affirmative answer to the open question raised by Boikanyo and Morosanu [18, p. 640].

Remark 3.5

Theorem 3.3 also shows that the convergence of the iterative sequences \((x_n)\) and \((y_n)\) is equivalent. Therefore, the choice of u in the convex combination, as well as that of the error sequence \((e_n)\), is irrelevant for the convergence, as long as the assumptions of the theorem are satisfied. Of course, they can still be used as control parameters to speed up the convergence in each case.

Remark 3.6

The following example shows that the boundedness of the sequence \((z_n)\) in Theorem 3.3(i) does not imply that of the sequence \((y_n)\).

Example 3.4

Let \(H=\mathbb {R}\) and \(T:\mathbb {R} \longrightarrow \mathbb {R}\) be defined by \(Tx=1\). Suppose that \(\gamma =1\), \(c_n=n+1\) and \(t_n=\dfrac{1}{2}\) for all \(n\ge 1\). By choosing \(z_0=y_0=0\), we have

$$\begin{aligned} z_{n+1}=\dfrac{1}{2}z_n-1 \end{aligned}$$

and so

$$\begin{aligned} \lim _{n\longrightarrow \infty }z_n=-2. \end{aligned}$$

Therefore, \((z_n)\) is bounded. Also, for all \(m \ge 1\):

$$\begin{aligned} \sum _{k=1}^{m}(1-t_k)(1-t_{k+1})\cdots (1-t_m) = \sum _{k=1}^{m}\dfrac{1}{2^{m-k+1}}\le \sum _{n=1}^{\infty }\dfrac{1}{2^n}=1. \end{aligned}$$

Therefore,

$$\begin{aligned} \underset{{m \rightarrow \infty }}{\limsup } \sum _{k=1}^{m}(1-t_k)(1-t_{k+1})\cdots (1-t_m) < \infty , \end{aligned}$$

and so the condition (12) holds. All the other assumptions of Theorem 3.2 hold as well. Since \(T^{-1}(0)=\emptyset \), the contrapositive of Theorem 3.2(i) shows that \((y_n)\) is not bounded.

On the other hand, all the assumptions of Theorem 3.3 are satisfied too. Thus, \((z_n)\) is bounded, while \((y_n)\) is not bounded.
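The behaviour described in Example 3.4 can be verified numerically. Since \(Tx=1\), the resolvent is \(J_c(x)=x-c\) (solve \(y+cT(y)=x\)), so with \(u=0\) and \(e_n=0\) the two recursions read \(z_{n+1}=(1-t_n)z_n-\gamma \) and \(y_{n+1}=(1-t_n)y_n-c_n\). A minimal sketch, where indexing \(c_n=n+1\) from \(n=0\) is an assumption made only for this illustration:

```python
# Numerical check of Example 3.4 on H = R with T(x) = 1, whose resolvent is
# J_c(x) = x - c. With u = 0, e_n = 0, t_n = 1/2, gamma = 1 and c_n = n + 1
# (indexing from n = 0 is an assumption made for this sketch):
#   z_{n+1} = (1 - t_n) * z_n - gamma   # constant stepsize: converges to -2
#   y_{n+1} = (1 - t_n) * y_n - c_n     # growing stepsize: drifts like -2n
z = y = 0.0
for n in range(60):
    z = 0.5 * z - 1.0
    y = 0.5 * y - (n + 1)
print(z)  # approximately -2: (z_n) is bounded and convergent
print(y)  # approximately -118: (y_n) is unbounded
```

Both recursions satisfy the hypotheses of Theorem 3.3, yet only \((z_n)\) stays bounded, in accordance with \(T^{-1}(0)=\emptyset \).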

4 Applications

4.1 Applications to Optimization

We can apply Theorems 3.1–3.3 and Corollary 3.1 to find a minimizer of a function f. Let H be a real Hilbert space and \(f:H \longrightarrow ]-\infty ,+\infty ]\) be a proper, convex and lower semicontinuous function. Then the subdifferential \(\partial f\) of f is the multivalued operator \(\partial f:H \rightrightarrows H\) defined for \(z \in H\) as follows:

$$\begin{aligned} \partial f(z):= \{ \zeta \in H: f(y)-f(z) \ge \langle \zeta ,y-z \rangle , \forall y \in H \}. \end{aligned}$$
(25)

We know from [1] that the subdifferential of a proper, convex and lower semicontinuous function is maximal monotone and the zeroes of \(\partial f\) correspond to the minimizers of f. Therefore, the proximal point algorithm for \(\partial f\) provides a scheme for approximating a minimizer of f.
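As a concrete illustration, take \(f(x)=|x|\) on \(H=\mathbb {R}\): the resolvent of \(\partial f\) is the well-known soft-thresholding map, and the plain (unregularized, error-free) proximal iteration reaches the minimizer \(0\) in finitely many steps. The following is a minimal sketch; the choice of f, the stepsize \(c=1\) and the starting point are assumptions made only for illustration:

```python
def prox_abs(x: float, c: float) -> float:
    """Resolvent J_c of the subdifferential of f(x) = |x| on R:
    the soft-thresholding map sign(x) * max(|x| - c, 0), obtained
    by solving y + c*g = x for some subgradient g of |.| at y."""
    if x > c:
        return x - c
    if x < -c:
        return x + c
    return 0.0

# Plain proximal point iteration x_{n+1} = J_c(x_n) with c = 1.
x = 5.0
for _ in range(10):
    x = prox_abs(x, 1.0)
print(x)  # 0.0, the unique minimizer of f(x) = |x|
```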

In this section, suppose H is a real Hilbert space and D is a nonempty, closed and convex subset of H.

Theorem 4.1

Let \(f:H \longrightarrow ]-\infty ,+\infty ]\) be a proper, convex and lower semicontinuous function. For any \(x_0,u \in H\), let the sequence \((x_n)\) be generated by (11) for \(T= \partial f\), where \((t_n)_{n=1}^{\infty } \subset ]0,1[\) and \((c_n)_{n=1}^{\infty } \subset ]0,\infty [\) are such that (12) holds and \(c_n \rightarrow +\infty \) as \(n\rightarrow +\infty \). Suppose that \((e_n)_{n=1}^{\infty } \subset H\) is a sequence with \(\lim _{n \rightarrow +\infty }\frac{\Vert e_n \Vert }{t_n}=0\). If \((x_n)\) is bounded, then \({\mathrm{argmin}} f \ne \emptyset \) and \((x_n)\) converges strongly to \(P_{F}u\), the metric projection of u onto \(F:=(\partial f)^{-1}(0)= {\mathrm{argmin}} f\).

Proof

This follows from Theorem 3.2. \(\square \)

Theorem 4.2

Let \(f:H \longrightarrow ]-\infty ,+\infty ]\) be a proper, convex and lower semicontinuous function. For any \(x_0,u \in H\), let the sequence \((x_n)\) be generated by (11) for \(T= \partial f\), where \((t_n)_{n=1}^{\infty } \subset ]0,1[\) with \(\liminf _{n\rightarrow +\infty }t_n \ge \alpha \) for some \(\alpha >0\), and \((c_n)_{n=1}^{\infty } \subset ]0,\infty [\) with \(c_n \rightarrow +\infty \) as \(n\rightarrow +\infty \). Suppose that \((e_n)_{n=1}^{\infty } \subset H\) is a sequence with \(\lim _{n \rightarrow +\infty } \Vert e_n \Vert =0\). If \((x_n)\) is bounded, then \({\mathrm{argmin}} f \ne \emptyset \) and \((x_n)\) converges strongly to \(P_{F}u\), the metric projection of u onto \(F:=(\partial f)^{-1}(0)= {\mathrm{argmin}} f\).

Proof

This follows from Corollary 3.1. \(\square \)

Theorem 4.3

Let \(f:H \longrightarrow ]-\infty ,+\infty ]\) be a proper, convex and lower semicontinuous function. For any \(x_0,u \in H\), let the sequence \((x_n)\) be generated by (1) for \(T= \partial f\), where \((t_n)_{n=1}^{\infty } \subset ]0,1[\) and \((c_n)_{n=1}^{\infty } \subset ]0,\infty [\) are such that (12) holds, \(\lim _{n \rightarrow \infty }t_n=0\), \(\lim _{n \rightarrow \infty }\frac{\Vert e_n\Vert }{t_n}=0\) and \(\sum _{n=1}^{\infty } |t_n-t_{n-1}| < +\infty \). If \((x_n)\) is bounded, then \(s-\lim _{n \longrightarrow \infty }x_n=P_{F}(0)\), where \(F:=(\partial f)^{-1}(0)= {\mathrm{argmin}} f\) is nonempty.

Proof

This follows from Theorem 3.3. \(\square \)

Subgradient Proximal Algorithm

Initialization: select \(x_0 \in H\)

Iterative step: the iterates \(w_n \in H\) are calculated by

$$\begin{aligned} w_0= & {} x_0 \\ w_{n}= & {} (1-t_n)x_n+ t_nu+e_n-c_nz_n, \end{aligned}$$

where \(z_n\) is such that

$$\begin{aligned} c_nz_n \in c_n \partial f(x_{n+1}) \cap B\left( \alpha u-\alpha P_Fu,\dfrac{\varepsilon }{4}\right) \end{aligned}$$

for all \(n \ge 0\), where \(t_n \rightarrow \alpha \in ]0,1[\), and \(B(x,r)\) denotes the open ball in H centered at x with radius r. Now we can prove the following result, which provides an approximation to \(P_Fu\), by using Theorem 3.2.

Theorem 4.4

Let H be a real Hilbert space and \(f:H \longrightarrow ]-\infty ,+\infty ]\) be a proper, convex and lower semicontinuous function. For any \(x_0,u \in H\), let \(\varepsilon >0\) and the sequence \((x_n)\) be generated by (11) with \(T= \partial f\), where \(t_n \in ]0,1[\) and \(c_n \in ]0,+\infty [\) for all \(n \ge 0\). Also let the sequence \(w_n\) be generated by the iterative procedure

$$\begin{aligned} w_0= & {} x_0 \\ w_{n}= & {} (1-t_n)x_n+ t_nu+e_n-c_nz_n, \end{aligned}$$

where \(z_n\) is such that

$$\begin{aligned} c_nz_n \in c_n \partial f(x_{n+1}) \cap B\left( \alpha u-\alpha P_Fu,\dfrac{\varepsilon }{4}\right) \end{aligned}$$

for all \(n \ge 0\). If \(F:=\partial f^{-1}(0)= {\mathrm{argmin}} f \ne \emptyset \), \(t_n \rightarrow \alpha \in ]0,1[\), \(c_n \rightarrow +\infty \), and \(e_n \rightarrow 0\) as \(n \rightarrow +\infty \), then there exists some \(N_0 \in \mathbb {N}\) such that

$$\begin{aligned} \Vert w_n-P_Fu \Vert \le \varepsilon \end{aligned}$$

for all \(n \ge N_0\).

Proof

For all \(n \in \mathbb {N}\), from

$$\begin{aligned} (1-t_n)x_n+t_nu+e_n \in x_{n+1}+c_n \partial f(x_{n+1}) \end{aligned}$$

we deduce that there exists \(\xi _{n} \in \partial f(x_{n+1})\) such that

$$\begin{aligned} (1-t_n)x_n+t_nu+e_n = x_{n+1}+c_n \xi _n. \end{aligned}$$

Letting \(n \longrightarrow \infty \) in the above equality, by Theorem 3.2 we get:

$$\begin{aligned} \lim _{n \rightarrow \infty }c_n \xi _n=\alpha u- \alpha P_Fu. \end{aligned}$$
(26)

Therefore, there exists some \(N_1 \in \mathbb {N}\) such that for all \(n \ge N_1\)

$$\begin{aligned} c_n \xi _n \in c_n \partial f(x_{n+1}) \cap B\left( \alpha u-\alpha P_Fu,\dfrac{\varepsilon }{4}\right) . \end{aligned}$$

On the other hand, since by the hypothesis we have

$$\begin{aligned} c_n z_n \in c_n \partial f(x_{n+1}) \cap B\left( \alpha u- \alpha P_Fu,\dfrac{\varepsilon }{4}\right) , \end{aligned}$$

it follows (using the triangle inequality, since both \(c_nz_n\) and \(c_n\xi _n\) lie in a ball of radius \(\frac{\varepsilon }{4}\), together with the strong convergence \(s-\lim _{n \longrightarrow \infty }x_n=P_Fu\) given by Theorem 3.2) that there exists some \(N_0 \ge N_1\) such that for all \(n\ge N_0\)

$$\begin{aligned} \Vert x_{n+1}-P_Fu \Vert < \dfrac{\varepsilon }{2} \end{aligned}$$

and

$$\begin{aligned} \Vert c_n z_n-c_n \xi _n \Vert < \dfrac{\varepsilon }{2}. \end{aligned}$$

Then for all \(n \ge N_0\)

$$\begin{aligned} \Vert w_n-P_Fu \Vert\le & {} \Vert w_n-x_{n+1}\Vert +\Vert x_{n+1}-P_Fu \Vert \\= & {} \Vert c_n z_n-c_n \xi _n \Vert + \Vert x_{n+1}-P_Fu \Vert \\< & {} \dfrac{\varepsilon }{2}+ \dfrac{\varepsilon }{2}=\varepsilon , \end{aligned}$$

and this completes the proof. \(\square \)
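The limit (26) can also be observed numerically in a simple case. The sketch below takes \(f(x)=x^2/2\) on \(H=\mathbb {R}\), so that \(\partial f(x)=\{x\}\), \(J_c(x)=x/(1+c)\), \(F=\{0\}\) and \(P_Fu=0\), with \(t_n \equiv \alpha =1/2\), \(c_n=n+1\) and \(e_n=0\); all of these concrete choices are assumptions made only for this check. The quantity \(c_n\xi _n=c_nx_{n+1}\) approaches \(\alpha u-\alpha P_Fu=\alpha u\):

```python
# Sanity check of the limit (26) for f(x) = x**2 / 2 on H = R, where the
# subdifferential is single-valued, df(x) = x, and J_c(x) = x / (1 + c).
# Here F = {0} and P_F(u) = 0; t_n = alpha = 0.5, c_n = n + 1 and e_n = 0
# are illustrative assumptions.
alpha, u = 0.5, 4.0
x = 1.0  # x_0
for n in range(200):
    c = n + 1.0
    x_next = ((1 - alpha) * x + alpha * u) / (1 + c)  # resolvent step
    c_xi = c * x_next  # c_n * xi_n, with xi_n = x_{n+1} in df(x_{n+1})
    x = x_next
print(x)     # close to P_F(u) = 0
print(c_xi)  # close to alpha*u - alpha*P_F(u) = 2.0
```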

4.2 Applications to Variational Inequalities

Let \(T_0 :D \longrightarrow H\) be a single-valued, monotone and hemicontinuous (i.e., continuous along each line segment in H with respect to the weak topology) operator. Suppose that \(N_{D}(z)\) is the normal cone to D at z:

$$\begin{aligned} N_{D}(z):= \{ w \in H: \langle w,z-u \rangle \ge 0, \forall u \in D \}, \end{aligned}$$
(27)

and \(T: H \rightrightarrows H\) be defined by:

$$\begin{aligned} T(z):= \left\{ \begin{array}{l l} T_0(z)+N_D(z),&{}\quad z \in D, \\ \emptyset , &{}\quad z \notin D. \end{array} \right. \end{aligned}$$

The maximal monotonicity of such a multifunction T was proved by Rockafellar [1]. The relation \(0 \in T(z)\) reduces to \(-T_0(z) \in N_D(z)\), or the so-called variational inequality:

$$\begin{aligned} \text {find } z\in D \text { such that } \langle z-u,T_0(z) \rangle \le 0 \text { for all } u \in D. \end{aligned}$$
(28)

Let \(VI(T_0,D)\) denote the solution set of the above variational inequality. If D is a cone, then \(VI(T_0,D)\) is the set of all \(z \in D\) such that \(-T_0(z) \in D^{\circ }\) (the polar of D) and \(\langle z,T_0(z) \rangle =0\), and the problem of finding such a z is an important instance of the well-known complementarity problem of mathematical programming. We can then apply Theorems 3.2, 3.3 and Corollary 3.1 to find a solution of the variational inequality for a single-valued, monotone and hemicontinuous operator \(T_0\).
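As a one-dimensional illustration (all the concrete choices below are assumptions made only for this sketch), take \(D=[0,1]\) and \(T_0(z)=z-2\). The resolvent of \(T=T_0+N_D\) then reduces to projecting the unconstrained solution onto D, and the proximal iterates converge to \(z=1\), which solves the variational inequality since \(-T_0(1)=1 \in N_D(1)\):

```python
def clip(x: float, lo: float, hi: float) -> float:
    return max(lo, min(hi, x))

def resolvent(x: float, c: float) -> float:
    """J_c of T = T_0 + N_D for T_0(z) = z - 2 and D = [0, 1]:
    solve z + c*(z - 2) + c*N_D(z) containing x; the unconstrained
    solution (x + 2c) / (1 + c) is then clamped back onto D."""
    return clip((x + 2.0 * c) / (1.0 + c), 0.0, 1.0)

# Plain proximal point iteration for the variational inequality.
z = -3.0  # arbitrary starting point (an illustrative assumption)
for _ in range(50):
    z = resolvent(z, 1.0)
print(z)  # 1.0, the solution of VI(T_0, D): -T_0(1) = 1 lies in N_D(1)
```

One can check the clamping directly: the iterate leaves D only transiently, since the normal-cone term absorbs the excess exactly when the unconstrained solution falls outside \([0,1]\).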

Theorem 4.5

Let \(T_0 :D \longrightarrow H\) be a single-valued, monotone and hemicontinuous operator, and let \(N_D(z)\) be the normal cone defined above. For any \(x_0,u \in H\), let the sequence \((x_n)\) be generated by (11) for T defined as above, where \((t_n)_{n=1}^{\infty } \subset ]0,1[\), \((c_n)_{n=1}^{\infty } \subset ]0,\infty [\), (12) holds and \(c_n \rightarrow +\infty \) as \(n\rightarrow +\infty \). Suppose that \((e_n)_{n=1}^{\infty } \subset H\) is a sequence with \(\lim _{n \rightarrow +\infty }\frac{\Vert e_n \Vert }{t_n}=0\). If \((x_n)\) is bounded, then \(VI(T_0,D) \ne \emptyset \) and \((x_n)\) converges strongly to an element of \(VI(T_0,D)\).

Proof

Since T is a maximal monotone operator and \(VI(T_0,D)=T^{-1}(0)\), then the conclusion follows from Theorem 3.2. \(\square \)

Theorem 4.6

Let \(T_0 :D \longrightarrow H\) be a single-valued, monotone and hemicontinuous operator, and let \(N_D(z)\) be the normal cone defined above. For any \(x_0,u \in H\), let the sequence \((x_n)\) be generated by (11) for T defined as above, where \((t_n)_{n=1}^{\infty } \subset ]0,1[\) with \(\liminf _{n\rightarrow +\infty }t_n \ge \alpha \) for some \(\alpha >0\), and \((c_n)_{n=1}^{\infty } \subset ]0,\infty [\) with \(c_n \rightarrow +\infty \) as \(n\rightarrow +\infty \). Suppose that \((e_n)_{n=1}^{\infty } \subset H\) is a sequence with \(\lim _{n \rightarrow +\infty } \Vert e_n \Vert =0\). If \((x_n)\) is bounded, then \(VI(T_0,D) \ne \emptyset \) and \((x_n)\) converges strongly to an element of \(VI(T_0,D)\).

Proof

The conclusion follows from Corollary 3.1. \(\square \)

Theorem 4.7

Let \(T_0 :D \longrightarrow H\) be a single-valued, monotone and hemicontinuous operator, and let \(N_D(z)\) be the normal cone defined above. For any \(x_0,u \in H\), let the sequence \((x_n)\) be generated by (1) for T defined as above, where \((t_n)_{n=1}^{\infty } \subset ]0,1[\) and \((c_n)_{n=1}^{\infty } \subset ]0,\infty [\). Suppose that (12) holds, \(\lim _{n \rightarrow \infty }t_n=0\), \(\lim _{n \rightarrow \infty }\frac{\Vert e_n\Vert }{t_n}=0\) and \(\sum _{n=1}^{\infty } |t_n-t_{n-1}| < +\infty \). If \((x_n)\) is bounded, then \(VI(T_0,D) \ne \emptyset \) and \((x_n)\) converges strongly to an element of \(VI(T_0,D)\).

Proof

The conclusion follows from Theorem 3.3. \(\square \)

5 Conclusions

In this paper, we considered the regularized proximal point algorithms with errors (5) and (11), with various conditions on the parameters and the error sequence, more general than those considered by previous authors. We provided necessary and sufficient conditions for the zero set of the operator to be nonempty and showed the strong convergence of the scheme to a zero of the operator in this case. We also presented some applications of our results to optimization and variational inequalities. In particular, we provided two affirmative answers to an open question raised by Boikanyo and Morosanu [18, p. 640], concerning the design of a PPA where the error sequence tends to zero and a parameter sequence remains bounded. Moreover, since verifying that the zero set of the operator is nonempty might be a difficult task, our criterion, which relies only on the boundedness of the iterates, may provide a very useful and convenient tool for this task, especially from a computational point of view. As a future direction for research, since numerous other PPAs have been developed and their convergence studied by many authors, it might be interesting to investigate the possibility of applying the ideas and methods developed in this paper to these other PPAs, as well as extending these methods to more general spaces, such as Banach and metric spaces. In particular, in this connection, we can mention the recent work of Wang et al. [21], as well as Cui and Ceng [22], dealing with the so-called contraction proximal point algorithm.