1 Introduction

In a fundamental work in 1970, Rockafellar [1] proved that the subdifferential of a proper, convex and lower semicontinuous function is a maximal monotone operator. One of the most effective iterative methods for approximating the zeroes of a maximal monotone operator is the proximal point algorithm (PPA), which was introduced by Martinet [2] for minimizing a convex function on \(\mathbb {R}^n\) and subsequently improved by Rockafellar [3]. For more information on the PPA, we refer the reader to [4–8] and the references therein.

The first mean ergodic theorem for nonexpansive self-mappings of a nonempty, closed and convex subset of a Hilbert space H was proved by Baillon [9]. For more details on maximal monotone operators and nonexpansive mappings, we refer the reader to [10–13]. The notion of nonexpansive sequences arose in connection with the study of iterates of nonexpansive mappings in situations where the domain of such mappings may not be convex; see [14–16].

Recently, Boikanyo and Morosanu [17] studied the convergence of a PPA with errors for a maximal monotone operator A. They assumed that the zero set of A is nonempty and that the error sequence is bounded. In this paper, motivated by our approach in [5, 14–16], we significantly improve their results by giving a necessary and sufficient condition for the zero set of A to be nonempty, and by showing that in this case, this PPA converges strongly to the metric projection of u onto the zero set of A, without assuming the boundedness of the error sequence. We also introduce a new PPA and prove similar results for it. Finally, we present some applications of our results to optimization and variational inequalities.

2 Preliminaries

Let H be a real Hilbert space with inner product \(\langle .,. \rangle \) and norm \(\Vert .\Vert \). We denote weak convergence in H by \(\rightharpoonup \) and strong convergence by \(\rightarrow \). An operator \(A:D(A) \subseteq H \rightrightarrows H\) is said to be monotone if its graph is a monotone subset of \(H \times H\), that is,

$$\begin{aligned} \langle y_2-y_1,x_2-x_1 \rangle \ge 0, \end{aligned}$$
(1)

for all \(x_1,x_2 \in D(A)\) and all \(y_1 \in Ax_1\) and \(y_2 \in Ax_2\). A is maximal monotone if A is monotone and the graph of A is not properly contained in the graph of any other monotone operator. It is known that in a Hilbert space, this is equivalent to the range of the operator \(I+A\) being all of H, that is, \(R(I+A)=H\), where I is the identity operator on H.

Let \(D \subseteq H\) be nonempty. A mapping \(T:D \longrightarrow H\) is said to be nonexpansive if

$$\begin{aligned} \Vert T(x)-T(y) \Vert \le \Vert x -y \Vert , \end{aligned}$$
(2)

for all \(x,y \in D\).

For a maximal monotone operator A and every \(t > 0\), the operator \(J_{t}: H \longrightarrow H\) defined by \(J_{t}(x):=(I+tA)^{-1}(x)\) is well defined, single-valued and nonexpansive on H. It is called the resolvent of A.
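As a concrete sanity check (our own illustrative choice, not part of the paper's development), the resolvent of the maximal monotone operator \(Ax = x\) on \(\mathbb {R}\) has the closed form \(J_{t}(x) = x/(1+t)\), and its nonexpansiveness can be verified numerically:

```python
# Minimal sketch (our own toy operator): for A(x) = x on the real line,
# J_t(x) = (I + tA)^{-1}(x) solves z + t*z = x, giving z = x / (1 + t).

def resolvent(x, t):
    """Resolvent J_t of A(x) = x: the unique z with z + t*A(z) = x."""
    return x / (1.0 + t)

# Nonexpansiveness (inequality (2)): |J_t(x) - J_t(y)| <= |x - y|.
t = 2.5
for x, y in [(-3.0, 4.0), (0.5, 0.7), (10.0, -10.0)]:
    assert abs(resolvent(x, t) - resolvent(y, t)) <= abs(x - y)
```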

Consider the following set-valued problem:

$$\begin{aligned} \mathrm{find} \, x \in D(A) \,\,\, \mathrm{such} \, \mathrm{that} \,\,\, 0 \in A(x). \end{aligned}$$
(3)

One of the most effective iterative methods for solving problem (3) is the proximal point algorithm. The PPA generates an iterative sequence \((x_n)\) as follows:

$$\begin{aligned} x_{n+1}=J_{\gamma _n}(x_n+e_n), \end{aligned}$$
(4)

for all \(n \ge 0\), where \(x_0 \in H\) is a given starting point, \((\gamma _n) \subset ]0,+ \infty [\) and \((e_n)\) is a sequence of computational errors, which are always present when experimental measurements are performed.

In 2013, Boikanyo and Morosanu [17] considered the sequence generated by the following algorithm:

$$\begin{aligned} x_{n+1}=J_{\gamma _n}\left( \lambda _n u+ (1-\lambda _n)(x_n+e_n)\right) , \quad \forall n \ge 0, \end{aligned}$$
(5)

where \(x_{0}, u \in H\), \(\lambda _{n}\in ]0,1[\), and \(\gamma _{n}\in ]0,+\infty [\), for all \(n\ge 0\). They showed that if \(A^{-1}(0)\ne \emptyset \), \(\lambda _n \rightarrow 1\), \(\gamma _n \rightarrow + \infty \) and \((e_n)\) is bounded, then \(( x_{n} )\) converges strongly to the element of \(A^{-1}(0)\) nearest to u.
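To make scheme (5) concrete, the following sketch runs it for the one-dimensional operator \(Ax = x - 1\) (so \(A^{-1}(0)=\{1\}\) and \(J_{\gamma }(z)=(z+\gamma )/(1+\gamma )\)), with the parameter choices \(\lambda _n = 1 - 1/(n+2)\), \(\gamma _n = n+1\) and the bounded errors \(e_n=(-1)^n\); the operator and all parameter choices are ours, for illustration only:

```python
# Sketch of scheme (5) for A(x) = x - 1 on R, whose resolvent is
# J_gamma(z) = (z + gamma) / (1 + gamma).  Here A^{-1}(0) = {1}, so the
# iterates should converge strongly to the zero of A nearest to u, i.e. 1.

def J(z, gamma):
    return (z + gamma) / (1.0 + gamma)

x, u = 0.0, 0.0                      # starting point x_0 and anchor u (our choices)
for n in range(2000):
    lam, gamma, e = 1.0 - 1.0 / (n + 2), float(n + 1), (-1.0) ** n
    x = J(lam * u + (1.0 - lam) * (x + e), gamma)

assert abs(x - 1.0) < 1e-2           # strong convergence to P_F(u) = 1
```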

In Sect. 3, we consider the sequence generated by (5) and give a necessary and sufficient condition for the zero set of A to be nonempty, and we show that in this case, the sequence \((x_{n})\) converges strongly to the metric projection of u onto \(A^{-1}(0)\) without assuming the boundedness of \((e_n)\). These results significantly improve upon the results of Boikanyo and Morosanu [17] who assumed that the zero set of A is nonempty and the error sequence \((e_{n})\) is bounded. We also introduce a new PPA defined by the iterative sequence: \(y_{n+1}=(I+\gamma _{n}A)^{-1}(\lambda _{n}u +(1-\lambda _{n})(y_0+e_n))\), where \(y_{0}, u \in H\), and prove similar results for the sequence \(( y_{n} )\). Finally in Sect. 4, we present some applications of our results to optimization and variational inequalities.

3 Main Results

For the proof of our main results in this paper, we need the following useful lemma, and we refer the reader to [18] or [19] for its proof.

Lemma 3.1

Let \(A:D(A) \subseteq H \rightrightarrows H\) be a maximal monotone operator with \(A^{-1}(0)=F \ne \emptyset \). Then for any \(u \in H\), \((I+tA)^{-1}u \rightarrow P_{F}u\) as \(t \rightarrow +\infty \), where \(P_{F}u\) denotes the metric projection of u onto F.
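Lemma 3.1 can be checked by hand on a one-dimensional example of our own: for \(Ax = x - 1\) we have \(F=\{1\}\) and \((I+tA)^{-1}u = (u+t)/(1+t) \rightarrow 1 = P_{F}u\) as \(t \rightarrow +\infty \):

```python
# Sketch: for A(x) = x - 1 on R, F = A^{-1}(0) = {1} and
# (I + tA)^{-1}(u) = (u + t) / (1 + t), which tends to P_F(u) = 1.

def resolvent(u, t):
    return (u + t) / (1.0 + t)

u = 7.0                                  # arbitrary anchor point (our choice)
errors = [abs(resolvent(u, t) - 1.0) for t in (1.0, 10.0, 100.0, 1000.0)]
assert all(a > b for a, b in zip(errors, errors[1:]))   # error decreases in t
assert errors[-1] < 1e-2                                # close to P_F(u) for large t
```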

3.1 The Proximal Point Algorithm Defined by (6)

In the following theorem, we give a necessary and sufficient condition for the zero set of A to be nonempty.

Theorem 3.1

Let \(A:D(A) \subseteq H \rightrightarrows H\) be a maximal monotone operator. For any fixed \(x_0,u \in H\), let the sequence \((x_n)\) be generated by

$$\begin{aligned} x_{n+1}=J_{\gamma _n}\left( \lambda _n u+ (1-\lambda _n)(x_n+e_n)\right) , \end{aligned}$$
(6)

for all \(n \ge 0\), where \(\lambda _n \in ]0,1[\) and \(\gamma _n \in ]0,+\infty [\) for all \(n \ge 0\). Then \(A^{-1}(0) \ne \emptyset \) if and only if there exist two sequences \((\lambda _n) \subset ]0,1[\) and \((\gamma _n) \subset ]0,+\infty [\), with \(\lambda _n \rightarrow 1\) and \(\gamma _n \rightarrow +\infty \) as \(n \rightarrow +\infty \), such that the sequence \(((1-\lambda _n)e_n)\) is bounded, and \(\liminf _{n \rightarrow \infty }(\Vert x_{n+1} \Vert +(1-\lambda _n)\Vert x_n \Vert ) < \infty \), or equivalently, \(\liminf _{n \rightarrow \infty }(\Vert x_{n+1} \Vert +(1-\lambda _n)\Vert x_{n+1}-x_n \Vert ) < \infty .\)

Proof

Assume that \(A^{-1}(0)\ne \emptyset \), and let \(p \in A^{-1}(0)\). From (6), and the fact that the resolvent operator is nonexpansive, for all \(m \ge 0\) we have:

$$\begin{aligned} \Vert x_{m+1}-p\Vert= & {} \Vert J_{\gamma _m}\left( \lambda _m u+ (1-\lambda _m)(x_m+e_m)\right) -J_{\gamma _m}(p)\Vert \nonumber \\\le & {} \Vert \lambda _m u+ (1-\lambda _m)(x_m+e_m)-p \Vert \nonumber \\\le & {} \lambda _m \Vert u-p \Vert + (1-\lambda _m)\Vert e_m \Vert +(1-\lambda _m)\Vert x_m -p \Vert . \end{aligned}$$
(7)

From the above inequality, for all \(n \ge 0\), we get:

$$\begin{aligned} \Vert x_{n+1}-p\Vert\le & {} \lambda _n \Vert u-p \Vert + (1-\lambda _n)\Vert e_n \Vert +(1-\lambda _n)\Vert x_n -p \Vert \nonumber \\\le & {} \lambda _n \Vert u-p \Vert + (1-\lambda _n)\Vert e_n\Vert \nonumber \\&+(1-\lambda _n) \Big [\lambda _{n-1} \Vert u-p \Vert + (1-\lambda _{n-1})\Vert e_{n-1} \Vert +(1-\lambda _{n-1})\Vert x_{n-1} -p \Vert \Big ] \nonumber \\= & {} \Big [ \lambda _n +\lambda _{n-1}(1-\lambda _{n})\Big ] \Vert u-p \Vert + \Big [ (1-\lambda _n) \Vert e_n \Vert +(1-\lambda _n) (1-\lambda _{n-1}) \Vert e_{n-1} \Vert \Big ] \nonumber \\&+ (1-\lambda _n) (1-\lambda _{n-1}) \Vert x_{n-1} -p \Vert \nonumber \\\le & {} \cdots \nonumber \\\le & {} \Big [ \lambda _n +\lambda _{n-1}(1-\lambda _{n})+ \lambda _{n-2}(1-\lambda _{n-1})(1-\lambda _{n})+ \cdots \nonumber \\&+\lambda _{0}(1-\lambda _{1})\ldots (1-\lambda _{n}) \Big ] \Vert u-p \Vert \nonumber \\&+ \Big [ (1-\lambda _n) \Vert e_n \Vert +(1-\lambda _n) (1-\lambda _{n-1}) \Vert e_{n-1} \Vert + \ldots \nonumber \\&+ (1-\lambda _n) (1-\lambda _{n-1})\ldots (1-\lambda _0) \Vert e_{0} \Vert \Big ] \nonumber \\&+ (1-\lambda _n) (1-\lambda _{n-1})\ldots (1-\lambda _0) \Vert x_0 -p \Vert . \end{aligned}$$
(8)

Since \(\lambda _n \rightarrow 1\) and \((1-\lambda _n) e_n\) is bounded, there exist \(0< \alpha <1\) and \(M > 0\) such that for all \( n \in \mathbb {N}\), we have \(1-\lambda _n \le \alpha \) and \((1-\lambda _n) \Vert e_n \Vert \le M\). Hence from (8), we have

$$\begin{aligned} \Vert x_{n+1}-p\Vert\le & {} \Big [ 1+ \alpha + \alpha ^2 +\cdots +\alpha ^n \Big ]\Vert u-p \Vert \nonumber \\&+\Big [ M+ M \alpha + M \alpha ^2 +\cdots +M \alpha ^n \Big ]+ \alpha ^n \Vert x_0-p \Vert \nonumber \\\le & {} \frac{\Vert u-p \Vert }{1- \alpha } + \frac{M}{1- \alpha }+ \Vert x_0 -p \Vert . \end{aligned}$$
(9)

This inequality shows that \((x_n)\) is bounded and therefore

$$\begin{aligned} \liminf _{n \rightarrow \infty }\left( \Vert x_{n+1} \Vert +(1-\lambda _n)\Vert x_n \Vert \right) < \infty . \end{aligned}$$

Conversely, assume that \(\liminf _{n \rightarrow \infty }(\Vert x_{n+1} \Vert +(1-\lambda _n)\Vert x_n \Vert ) < \infty \). Then there exists a subsequence (n(k)) of \(\mathbb {N}\) such that \(\Vert x_{n(k)+1} \Vert +(1-\lambda _{n(k)})\Vert x_{n(k)} \Vert \) is bounded. Therefore, there exists a subsequence (m(l)) of (n(k)) such that \(x_{m(l)+1} \rightharpoonup p\) as \(l \rightarrow +\infty \) for some \(p \in H\). Also \(((1-\lambda _{m(l)}) x_{m(l)} )\) is a bounded sequence. Now for every \(x \in D(A)\) and \(y \in A(x)\), since

$$\begin{aligned} \frac{\lambda _n u+(1- \lambda _n)(x_n+e_n)-x_{n+1}}{\gamma _n} \in A(x_{n+1}) \end{aligned}$$

we get

$$\begin{aligned} \left\langle y- \frac{\lambda _{m(l)} u+(1- \lambda _{m(l)})(x_{m(l)}+e_{m(l)})-x_{m(l)+1}}{\gamma _{m(l)}},x-x_{m(l)+1} \right\rangle \ge 0. \end{aligned}$$
(10)

Since \((1-\lambda _{m(l)}) x_{m(l)} \) and \(x_{m(l)+1}\) are bounded, \(\lambda _n \rightarrow 1\), \(\gamma _n \rightarrow +\infty \) and \(((1-\lambda _n)e_n)\) is bounded, it follows that:

$$\begin{aligned} \frac{\lambda _{m(l)} u+(1- \lambda _{m(l)})(x_{m(l)}+e_{m(l)})-x_{m(l)+1}}{\gamma _{m(l)}} \rightarrow 0, \, \mathrm{as} \, l \rightarrow \infty . \end{aligned}$$

Now letting \(l \rightarrow \infty \) in (10), we get:

$$\begin{aligned} \langle y,x-p \rangle \ge 0. \end{aligned}$$

Therefore, by the maximality of A, we conclude that \(p \in D(A)\) and \(0 \in A(p)\). This completes the proof. \(\square \)

Remark 3.1

The proof of Theorem 3.1 shows that \(A^{-1}(0) \ne \emptyset \) if and only if \((x_n)\) is bounded.

Corollary 3.1

Let \(A:D(A) \subseteq H \rightrightarrows H\) be a maximal monotone operator. For any fixed \(x_0,u \in H\), let the sequence \((x_n)\) be generated by (6), where \(\lambda _n \in ]0,1[\) and \(\gamma _n \in ]0,+\infty [\) for all \(n \ge 0\). Assume that there exist two sequences \((\lambda _n) \subset ]0,1[\) and \((\gamma _n) \subset ]0,+\infty [\), with \(\lambda _n \rightarrow 1\) and \(\gamma _n \rightarrow +\infty \) as \(n \rightarrow +\infty \), such that the sequence \(((1-\lambda _n)e_n)\) is bounded. Then \(A^{-1}(0) \ne \emptyset \) if \(\liminf _{n \rightarrow \infty }\Vert x_{n} \Vert < \infty \) and the sequence \((x_{n+1}-x_n)\) is bounded.

Proof

This follows from Theorem 3.1, since in this case, the condition

$$\begin{aligned} \liminf _{n \rightarrow \infty }(\Vert x_{n+1} \Vert +(1-\lambda _n)\Vert x_{n+1}-x_n \Vert ) < \infty \end{aligned}$$

is satisfied. \(\square \)

The following theorem extends the previous results of Boikanyo and Morosanu [17], who assumed that the zero set of A is nonempty and that the error sequence \((e_{n})\) is bounded.

Theorem 3.2

Suppose that \(A:D(A) \subseteq H \rightrightarrows H\) is a maximal monotone operator. For any \(x_0,u \in H\), let the sequence \((x_n)\) be generated by (6), where \(\lambda _n \in ]0,1[\) and \(\gamma _n \in ]0,+\infty [\) for all \(n \ge 0\). If \(\lambda _n \rightarrow 1\), \(\gamma _n \rightarrow +\infty \) and \(((1-\lambda _n)e_n) \rightarrow 0\) as \(n \rightarrow +\infty \), then \((x_n)\) converges strongly to \(P_{F}u\), the metric projection of u onto \(F:=A^{-1}(0)\), if and only if

$$\begin{aligned} \liminf _{n \rightarrow \infty }\left( \Vert x_{n+1} \Vert +(1-\lambda _n)\Vert x_n \Vert \right) < \infty . \end{aligned}$$

Proof

We know from Theorem 3.1 and Remark 3.1 that \(A^{-1}(0) \ne \emptyset \) and the sequence \((x_n)\) is bounded. Moreover, from (6), we have

$$\begin{aligned} \Vert x_{n+1}-P_Fu \Vert\le & {} \Vert x_{n+1}- J_{\gamma _n}u \Vert + \Vert J_{\gamma _n}u-P_Fu \Vert \nonumber \\= & {} \Vert J_{\gamma _n}\left( \lambda _n u+(1-\lambda _n)(x_n + e_n)\right) - J_{\gamma _n}u \Vert + \Vert J_{\gamma _n}u-P_Fu \Vert \nonumber \\\le & {} (1-\lambda _n)\Vert x_n -u +e_n \Vert +\Vert J_{\gamma _n}u-P_Fu \Vert \nonumber \\\le & {} (1-\lambda _n)\Vert x_n -u \Vert +(1-\lambda _n) \Vert e_n \Vert +\Vert J_{\gamma _n}u-P_Fu \Vert \end{aligned}$$
(11)

where the second inequality follows from the fact that the resolvent operator is nonexpansive. Now the result follows immediately by using Lemma 3.1 and letting \(n \rightarrow +\infty \) in the above inequality. This completes the proof. \(\square \)

Remark 3.2

For every sequence \((e_n)\) in H (not necessarily bounded), there exists a sequence \((\lambda _n)\) in ]0, 1[ such that \(\lambda _n \rightarrow 1\) and \(((1-\lambda _n)e_n) \rightarrow 0\) as \(n \rightarrow +\infty \). For example we can consider \(\lambda _n:=1-\frac{1}{(n+1)(\Vert e_n \Vert +1)}\), for all \(n \ge 0\). Hence for any maximal monotone operator \(A:D(A) \subseteq H \rightrightarrows H\) in a real Hilbert space H with \(A^{-1}(0) \ne \emptyset \), the following iterative sequence converges strongly to the metric projection of u onto \(A^{-1}(0)\), where \(\gamma _n \in ]0,+\infty [\) for all \(n \ge 0\) and \(\gamma _n \rightarrow +\infty \) as \(n \rightarrow +\infty \).

$$\begin{aligned} x_{n+1}=J_{\gamma _n}\left( \left( 1-\frac{1}{(n+1)(\Vert e_n \Vert +1)}\right) u+ \frac{1}{(n+1)(\Vert e_n \Vert +1)}(x_n+e_n)\right) . \end{aligned}$$
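The choice of \(\lambda _n\) in Remark 3.2 can be tried numerically. The sketch below (our own toy instance, not from the paper) uses \(Ax = x - 1\), the genuinely unbounded errors \(e_n = n^2\), \(\gamma _n = n+1\) and \(u = 5\); the iterates should still converge to \(P_{F}u = 1\):

```python
# Sketch of the iteration in Remark 3.2 for A(x) = x - 1, whose resolvent is
# J_gamma(z) = (z + gamma)/(1 + gamma).  The errors e_n = n^2 are unbounded,
# but lambda_n = 1 - 1/((n+1)(|e_n| + 1)) forces (1 - lambda_n) e_n -> 0.

def J(z, gamma):
    return (z + gamma) / (1.0 + gamma)

x, u = 0.0, 5.0                          # x_0 and u are our own choices
for n in range(2000):
    e = float(n * n)                     # unbounded error sequence
    lam = 1.0 - 1.0 / ((n + 1) * (abs(e) + 1.0))
    gamma = float(n + 1)
    x = J(lam * u + (1.0 - lam) * (x + e), gamma)

assert abs(x - 1.0) < 2e-2               # strong convergence to P_F(u) = 1
```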

The following example shows that if \(A^{-1}(0) = \emptyset \), then the sequence \((x_n)\) is unbounded.

Example 3.1

Let \(H=\mathbb {R}\) and \(A:H \rightarrow H\) be defined by \(Ax=-e^{-x}\). Obviously, A is maximal monotone and \(A^{-1}(0) = \emptyset \). For every \(x_0,u \in H\) let the sequence \((x_n)\) be generated by (6). Then

$$\begin{aligned} \lambda _n u +(1-\lambda _n)(x_n+e_n)=x_{n+1}- \gamma _n e^{-x_{n+1}}. \end{aligned}$$
(12)

If for two sequences \((\lambda _n)\) and \((\gamma _n)\) with \(\lambda _n \rightarrow 1\), \(\gamma _n \rightarrow +\infty \) and \((1-\lambda _n)e_n \rightarrow 0\), the sequence \((x_n)\) is bounded, then from (12), it follows that \(\limsup _{n \rightarrow \infty }\gamma _n e^{-x_{n+1}} \le |u|+\sup _{n \ge 0} |x_{n}| < +\infty \). Since \(\gamma _n \rightarrow +\infty \), we must have \(\lim _{n \rightarrow \infty } e^{-x_{n+1}}=0\), and hence, \(x_{n+1} \rightarrow +\infty \), which is a contradiction. Therefore, \((x_n)\) is unbounded.
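Example 3.1 can also be observed numerically. The resolvent step \(x_{n+1} - \gamma _n e^{-x_{n+1}} = z_n\) has no closed form, so the sketch below (our own implementation, with the illustrative parameter choices \(\lambda _n = 1 - 1/(n+2)\), \(\gamma _n = n+1\) and \(e_n = 0\)) solves it by bisection and watches the iterates grow without bound:

```python
import math

# Sketch for Example 3.1: A(x) = -exp(-x) has no zero, and the PPA iterates
# blow up.  The resolvent equation x - gamma*exp(-x) = z is solved by
# bisection, since the left-hand side is strictly increasing in x.

def resolvent(z, gamma, iters=100):
    lo, hi = z, abs(z) + gamma + 1.0          # bracket: g(lo) < z < g(hi)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mid - gamma * math.exp(-mid) < z:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

x, u, history = 0.0, 0.0, []
for n in range(200):
    lam, gamma = 1.0 - 1.0 / (n + 2), float(n + 1)
    x = resolvent(lam * u + (1.0 - lam) * x, gamma)
    history.append(x)

assert history[-1] > history[9] > history[0]   # the iterates keep growing
assert history[-1] > 3.0                       # roughly log(gamma_n), unbounded
```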

3.2 A New Proximal Point Algorithm

In this section, we introduce a new algorithm, (13), and show that the sequence \((y_n)\) converges strongly to \(P_Fu\). First we give a necessary and sufficient condition for the zero set of A to be nonempty.

Theorem 3.3

Let \(A:D(A) \subseteq H \rightrightarrows H\) be a maximal monotone operator. For any fixed \(y_0,u \in H\), let the sequence \((y_n)\) be generated by

$$\begin{aligned} y_{n+1}=J_{\gamma _n}(\lambda _n u+ (1-\lambda _n)(y_0+e_n)), \end{aligned}$$
(13)

for all \(n \ge 0\), where \(\lambda _n \in ]0,1[\) and \(\gamma _n \in ]0,+\infty [\) for all \(n \ge 0\). Then \(A^{-1}(0) \ne \emptyset \) if and only if there exist two sequences \((\lambda _n) \subset ]0,1[\) and \((\gamma _n) \subset ]0,+\infty [\) with \(\lambda _n \rightarrow 1\) and \(\gamma _n \rightarrow +\infty \) as \(n \rightarrow +\infty \), such that the sequence \(((1-\lambda _n)e_n)\) is bounded, and \(\liminf _{n \rightarrow \infty } \Vert y_{n} \Vert < \infty \).

Proof

Assume that \(A^{-1}(0) \ne \emptyset \) and let \(p \in A^{-1}(0)\). From (13) and the fact that the resolvent operator is nonexpansive, for all \(m \ge 0\) we have:

$$\begin{aligned} \Vert y_{m+1}-p\Vert= & {} \Vert J_{\gamma _m}(\lambda _m u+ (1-\lambda _m)(y_0+e_m))-J_{\gamma _m}(p)\Vert \nonumber \\\le & {} \Vert \lambda _m u+ (1-\lambda _m)(y_0+e_m)-p \Vert \nonumber \\\le & {} \lambda _m \Vert u-p \Vert + (1-\lambda _m)\Vert e_m \Vert +(1-\lambda _m)\Vert y_0 -p \Vert . \end{aligned}$$
(14)

This inequality shows that the sequence \((y_n)\) is bounded. Hence

$$\begin{aligned} \liminf _{n \rightarrow \infty } \Vert y_{n} \Vert < \infty . \end{aligned}$$

Conversely, assume that \(\liminf _{n \rightarrow \infty } \Vert y_{n} \Vert < \infty \). Then there exists a subsequence (n(k)) of \(\mathbb {N}\) such that \((y_{n(k)})\) is bounded. Therefore, there exists a subsequence (m(l)) of (n(k)) such that \(y_{m(l)} \rightharpoonup p\) as \(l \rightarrow +\infty \), for some \(p \in H\). Now for every \(x \in D(A)\) and \(y \in A(x)\), since

$$\begin{aligned} \frac{\lambda _n u+(1- \lambda _n)(y_0+e_n)-y_{n+1}}{\gamma _n} \in A(y_{n+1}) \end{aligned}$$

we get

$$\begin{aligned} \left\langle y- \frac{\lambda _{m(l)-1} u+(1- \lambda _{m(l)-1})(y_{0}+e_{m(l)-1})-y_{m(l)}}{\gamma _{m(l)-1}},x-y_{m(l)} \right\rangle \ge 0. \end{aligned}$$
(15)

Since \(\lambda _n \rightarrow 1\), \(\gamma _n \rightarrow +\infty \) and \(((1-\lambda _n)e_n)\) is bounded, it follows that:

$$\begin{aligned} \frac{\lambda _{m(l)-1} u+(1- \lambda _{m(l)-1})(y_{0}+e_{m(l)-1})-y_{m(l)}}{\gamma _{m(l)-1}} \rightarrow 0, \, \mathrm{as} \, l \rightarrow \infty . \end{aligned}$$

Now letting \(l \rightarrow \infty \) in (15), we get:

$$\begin{aligned} \langle y,x-p \rangle \ge 0. \end{aligned}$$

Therefore, by the maximality of A, we conclude that \(p \in D(A)\) and \(0 \in A(p)\). This completes the proof. \(\square \)

Remark 3.3

The proof of Theorem 3.3 shows that \(A^{-1}(0) \ne \emptyset \), if and only if \((y_n)\) is bounded.

Theorem 3.4

Let \(A:D(A) \subseteq H \rightrightarrows H\) be a maximal monotone operator. For any \(y_0, u \in H\), let the sequence \((y_n)\) be generated by (13), where \(\lambda _n \in ]0,1[\) and \(\gamma _n \in ]0,+\infty [\) for all \(n \ge 0\). If \(\lambda _n \rightarrow 1\), \(\gamma _n \rightarrow +\infty \) and \(((1-\lambda _n)e_n) \rightarrow 0\) as \(n \rightarrow +\infty \), then \((y_n)\) converges strongly to \(P_{F}u\), the metric projection of u onto \(F:= A^{-1}(0)\), if and only if \(\liminf _{n \rightarrow \infty }\Vert y_{n} \Vert < \infty \).

Proof

The proof is done along similar lines as in the proof of Theorem 3.2, and therefore, we omit it here. \(\square \)

3.3 Comparison of Algorithms (6) and (13)

In the following theorems, we compare Algorithms (6) and (13).

Theorem 3.5

Let \((x_n)\) and \((y_n)\) be generated by (6) and (13), respectively. If \(\lambda _n \ge \alpha \) for some \(\alpha > 0\) and all \(n \ge 0\), then \((x_n)\) is bounded if and only if \((y_n)\) is bounded.

Proof

Assume that \((x_n)\) is bounded. Then for all \(n \ge 0\):

$$\begin{aligned} \Vert y_{n+1}-x_{n+1} \Vert \le (1- \lambda _n) \Vert y_0-x_n \Vert \end{aligned}$$
(16)

and this shows that \((y_n)\) is bounded.

Conversely, assume that \((y_n)\) is bounded. Then there exist \(\beta \in ]0,1[\) and \(M >0\) such that \(\Vert y_0-y_n \Vert \le M\) and \(1- \lambda _n < \beta \) for all \(n \ge 0\).

Therefore, for all \(m \ge 0\), we have:

$$\begin{aligned} \Vert y_{m+1}-x_{m+1}\Vert\le & {} (1- \lambda _m) \Vert y_0-x_m \Vert \\\le & {} (1- \lambda _m) \Vert y_0-y_m \Vert + (1- \lambda _m) \Vert y_m-x_m \Vert \\\le & {} (1- \lambda _m) M+ (1- \lambda _m) \Vert y_m-x_m \Vert . \end{aligned}$$

Using the above inequality, by induction, we get for all \(n \ge 0\),

$$\begin{aligned} \Vert y_{n+1}-x_{n+1}\Vert\le & {} (1- \lambda _n) M+ (1- \lambda _n) \Vert y_n-x_n \Vert \nonumber \\\le & {} (1- \lambda _n) M+ (1- \lambda _n)(1- \lambda _{n-1})M \nonumber \\&+ (1- \lambda _n)(1- \lambda _{n-1}) \Vert y_{n-1}-x_{n-1} \Vert \nonumber \\= & {} (1- \lambda _n) \Big [ 1 + (1- \lambda _{n-1}) \Big ] M+ (1- \lambda _n)(1- \lambda _{n-1}) \Vert y_{n-1}-x_{n-1} \Vert \nonumber \\\le & {} \cdots \nonumber \\\le & {} (1- \lambda _n) \Big [ 1 + (1- \lambda _{n-1})+(1- \lambda _{n-1})(1- \lambda _{n-2})+\cdots \nonumber \\&+ (1- \lambda _{n-1})(1- \lambda _{n-2})\ldots (1- \lambda _{0}) \Big ] M \nonumber \\&+ (1- \lambda _n)(1- \lambda _{n-1})\ldots (1- \lambda _{0}) \Vert y_0-x_0 \Vert \nonumber \\\le & {} (1- \lambda _n) (1+ \beta +\beta ^2+\cdots )M +\beta ^{n+1} \Vert y_0-x_0 \Vert \nonumber \\= & {} (1- \lambda _n) \dfrac{M}{1-\beta } +\beta ^{n+1} \Vert y_0-x_0 \Vert . \end{aligned}$$
(17)

This inequality shows that \((x_n)\) is bounded, and the proof is now complete. \(\square \)

Theorem 3.6

Let \((x_n)\) and \((y_n)\) be generated by (6) and (13), respectively. If \(\lambda _n \rightarrow 1\), as \(n \rightarrow +\infty \), then \((x_n)\) converges strongly to \(P_{F}(u)\) if and only if \((y_n)\) converges strongly to \(P_{F}(u)\).

Proof

If \((x_n)\) converges strongly to \(P_{F}(u)\), then it follows from (16) that \((y_n)\) converges strongly to \(P_{F}(u)\).

Conversely, if \((y_n)\) converges strongly to \(P_{F}(u)\), then letting \(n \rightarrow +\infty \) in (17), we conclude that \((x_n)\) converges strongly to \(P_{F}(u)\). This completes the proof. \(\square \)

The following example shows that the speed of convergence of the two algorithms cannot be compared with each other, and therefore, depending on the problem studied, one may prove more useful than the other.

Example 3.2

Let \(A:\mathbb {R} \longrightarrow \mathbb {R}\) be defined by \(Ax=x-1\). Obviously \(A^{-1}(0)=\{ 1 \}\). For all \(n \ge 1\), let \( \lambda _n=1-\dfrac{1}{n^2}\), \(\gamma _n=n\) and \(e_n=n\). By taking \(x_0=y_0=u=0\), we have \(| x_{n+1}-1 |=\dfrac{1-\dfrac{1}{n}-\dfrac{x_n}{n^2}}{1+n}\) and \(| y_{n+1}-1 |=\dfrac{1-\dfrac{1}{n}}{1+n}\). Therefore, \(|x_{n+1}-1| \le |y_{n+1}-1|\). On the other hand, by taking \(x_0=y_0=2\) and \(u=0\), we have \(| x_{n+1}-1 |=\dfrac{1-\dfrac{1}{n}-\dfrac{x_n}{n^2}}{1+n}\) and \(| y_{n+1}-1 |=\dfrac{1-\dfrac{1}{n}-\dfrac{2}{n^2}}{1+n}\). Since \(x_n \rightarrow 1\) as \(n \rightarrow +\infty \), there exists some \(N_0 \in \mathbb {N}\) such that \(\dfrac{x_n}{n^2} < \dfrac{2}{n^2}\) for all \(n \ge N_0 \). Then we have \(|y_{n+1}-1| \le |x_{n+1}-1|\) for all \(n \ge N_0 \).
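The first of the two comparisons in Example 3.2 can be checked directly. The sketch below uses the closed-form resolvent \(J_{\gamma }(z)=(z+\gamma )/(1+\gamma )\) of \(Ax = x - 1\) and starts the iteration at \(n = 1\), where \(\lambda _n\) and \(\gamma _n\) are defined:

```python
# Sketch for Example 3.2: A(x) = x - 1, lambda_n = 1 - 1/n^2, gamma_n = n,
# e_n = n, with x_1 = y_1 = u = 0.  For these data, |x_{n+1} - 1| should
# never exceed |y_{n+1} - 1|, and both sequences converge to 1.

def J(z, gamma):
    return (z + gamma) / (1.0 + gamma)

x = y = u = 0.0
for n in range(1, 500):
    lam, gamma, e = 1.0 - 1.0 / n**2, float(n), float(n)
    x = J(lam * u + (1.0 - lam) * (x + e), gamma)    # scheme (6)
    y = J(lam * u + (1.0 - lam) * (0.0 + e), gamma)  # scheme (13), y_0 = 0
    assert abs(x - 1.0) <= abs(y - 1.0) + 1e-12

assert abs(x - 1.0) < 1e-2 and abs(y - 1.0) < 1e-2   # both tend to 1
```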

4 Applications

4.1 Application to Optimization

Let H be a real Hilbert space and \(f:H \longrightarrow ]-\infty ,+\infty ]\) be a proper, convex and lower semicontinuous function. We know from [1] that \(A= \partial f\) is a maximal monotone operator in H, and the zero set of A coincides with the set of minimizers of f. Then Theorems 3.2 and 3.4 provide approximation schemes for a minimizer of f, i.e., an element of \(\mathrm{argmin}\, f\).

Theorem 4.1

Let H be a real Hilbert space and \(f:H \longrightarrow ]-\infty ,+\infty ]\) be a proper, convex and lower semicontinuous function. For any \(x_0,u \in H\), let the sequence \((x_n)\) be generated by (6) with \(A= \partial f\), where \(\lambda _n \in ]0,1[\) and \(\gamma _n \in ]0,+\infty [\) for all \(n \ge 0\). If \(\lambda _n \rightarrow 1\), \(\gamma _n \rightarrow +\infty \), and \(((1-\lambda _n)e_n) \rightarrow 0\) as \(n \rightarrow +\infty \), then \((x_n)\) converges strongly to \(P_{F}u\), the metric projection of u onto \(F:=(\partial f)^{-1}(0)= \mathrm{argmin}\, f\), if and only if \((x_n)\) is bounded.

Proof

Since \(A=\partial f\) is a maximal monotone operator, the conclusion follows from Theorem 3.2. \(\square \)

Remark 4.1

Similarly, if \(( y_n )\) is generated by (13) with \(A= \partial f\), then \((y_n)\) converges strongly to \(P_{F}u\), the metric projection of u onto \(F:=(\partial f)^{-1}(0)= \mathrm{argmin}\, f\), if and only if \((y_n)\) is bounded.

To show again the usefulness of our new PPA (13) introduced in this paper, we provide another example of application of this scheme to optimization, namely the subgradient proximal algorithm which is described as follows:

We select \(y_{0} \in H\). Then with \((y_{n})\) being generated by (13), we calculate the iterative scheme \(w_{n} \in H\) by

$$\begin{aligned} w_0= & {} y_0 \\ w_{n}= & {} \lambda _nu+(1-\lambda _n)(w_0+e_n)-\gamma _nz_n, \end{aligned}$$

where \(z_n\) is such that

$$\begin{aligned} z_n \in \partial f(y_{n+1}) \cap B\left( u-P_Fu,\dfrac{\varepsilon }{4 \gamma _n}\right) \end{aligned}$$

for all \(n \ge 0\), and \(B(x,r)\) denotes the open ball in H centered at x with radius r.

Now as an application of Theorem 3.4, we prove the following result.

Theorem 4.2

Let H be a real Hilbert space and \(f:H \longrightarrow ]-\infty ,+\infty ]\) be a proper, convex and lower semicontinuous function. Let \(y_0,u \in H\), \(\varepsilon >0\) be given, and the sequence \((y_n)\) be generated by (13) with \(A= \partial f\), where \(\lambda _n \in ]0,1[\) and \(\gamma _n \in ]0,+\infty [\) for all \(n \ge 0\). Let the sequence \(w_n\) be generated by the subgradient proximal algorithm:

$$\begin{aligned} w_0= & {} y_0 \\ w_{n}= & {} \lambda _nu+(1-\lambda _n)(w_0+e_n)-\gamma _nz_n, \end{aligned}$$

where \(z_n\) is such that

$$\begin{aligned} z_n \in \partial f(y_{n+1}) \cap B\left( u-P_Fu,\dfrac{\varepsilon }{4 \gamma _n}\right) \end{aligned}$$

for all \(n \ge 0\). If \(F:=(\partial f)^{-1}(0)= \mathrm{argmin}\, f \ne \emptyset \), \(\lambda _n \rightarrow 1\), \(\gamma _n \rightarrow +\infty \), and \(((1-\lambda _n)e_n) \rightarrow 0\) as \(n \rightarrow +\infty \), then there exists some \(N_0 \in \mathbb {N}\) such that

$$\begin{aligned} \Vert w_n-P_Fu \Vert \le \varepsilon \end{aligned}$$

for all \(n \ge N_0\).

Proof

By the definition of \(y_{n}\), we have:

$$\begin{aligned} \lambda _n u+(1-\lambda _n)(y_0+e_n) \in y_{n+1}+\gamma _n \partial f(y_{n+1}). \end{aligned}$$

Therefore, there exists \(\xi _{n} \in \partial f(y_{n+1})\) such that:

$$\begin{aligned} \lambda _n u+(1-\lambda _n)(y_0+e_n) = y_{n+1}+\gamma _n \xi _n. \end{aligned}$$

Letting \(n \longrightarrow \infty \) in the above equality, we get:

$$\begin{aligned} \lim _{n \rightarrow \infty }\gamma _n \xi _n=u-P_Fu. \end{aligned}$$
(18)

Therefore, there exists some \(N_1 \in \mathbb {N}\) such that for all \(n \ge N_1\)

$$\begin{aligned} \gamma _n \xi _n \in \gamma _n \partial f(y_{n+1}) \cap B\left( u-P_Fu,\dfrac{\varepsilon }{4}\right) . \end{aligned}$$

On the other hand, since by the hypothesis we have:

$$\begin{aligned} \gamma _n z_n \in \gamma _n \partial f(y_{n+1}) \cap B\left( u-P_Fu,\dfrac{\varepsilon }{4}\right) , \end{aligned}$$

it follows that there exists some \(N_0 \ge N_1\) such that for all \(n \ge N_0\)

$$\begin{aligned} \Vert y_{n+1}-P_Fu \Vert < \dfrac{\varepsilon }{2} \end{aligned}$$

and

$$\begin{aligned} \Vert \gamma _n z_n-\gamma _n \xi _n \Vert < \dfrac{\varepsilon }{2}. \end{aligned}$$

Then for all \(n \ge N_0\), we have:

$$\begin{aligned} \Vert w_n-P_Fu \Vert\le & {} \Vert w_n-y_{n+1}\Vert +\Vert y_{n+1}-P_Fu \Vert \\= & {} \Vert \gamma _n z_n-\gamma _n \xi _n \Vert + \Vert y_{n+1}-P_Fu \Vert \\< & {} \dfrac{\varepsilon }{2}+ \dfrac{\varepsilon }{2}=\varepsilon . \end{aligned}$$

This completes the proof. \(\square \)

4.2 Application to Variational Inequalities

Let D be a nonempty, closed and convex subset of a Hilbert space H, and let \(A_0 :D \longrightarrow H\) be a single-valued, monotone and hemicontinuous (i.e., continuous along each line segment in H with respect to the weak topology) mapping. Let \(N_{D}(z)\) denote the normal cone to D at z:

$$\begin{aligned} N_{D}(z):= \left\{ w \in H: \langle w,z-u \rangle \ge 0, \forall u \in D \right\} , \end{aligned}$$
(19)

and let \(A: H \rightrightarrows H\) be defined by:

$$\begin{aligned} A(z):= {\left\{ \begin{array}{ll} A_0(z)+N_D(z), &{} \quad \mathrm{if} \quad z \in D, \\ \emptyset , &{} \quad \mathrm{if} \quad z \notin D. \end{array}\right. } \end{aligned}$$
(20)

The maximal monotonicity of the multivalued mapping A was proved by Rockafellar [1]. The relation \(0 \in A(z)\) reduces to \(-A_0(z) \in N_D(z)\), or the so-called variational inequality:

$$\begin{aligned} \mathrm{find} \,\, z\in D \,\, \mathrm{such}\,\, \mathrm{that }\,\, \langle z-u,A_0(z) \rangle \le 0 \,\,\mathrm{for}\,\, \mathrm{all}\,\, u \in D. \end{aligned}$$
(21)

We define \(VI(A_0,D)\) as follows:

$$\begin{aligned} VI(A_0,D):=\left\{ z \in D: \langle z-u,A_0(z) \rangle \le 0, \,\,\, \forall u \in D \right\} . \end{aligned}$$
(22)

If D is a cone, this condition can be written as

$$\begin{aligned} z \in D, \quad -A_0(z) \in D^{\circ } \,\, (\mathrm{the} \,\, \mathrm{polar}\,\, \mathrm{of}\,\, D), \quad \mathrm{and} \quad \langle z,A_0(z) \rangle =0, \end{aligned}$$
(23)

and the problem of finding such z is an important instance of the well-known complementarity problem of mathematical programming. Then Theorems 3.2 and 3.4 provide approximation schemes for a solution to the variational inequality for a single-valued monotone and hemicontinuous map \(A_0:D \longrightarrow H\).
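As a small numerical illustration of our own (not from the paper), take \(D = [0, +\infty [\) and \(A_0(z) = z - 2\); then \(VI(A_0,D) = \{2\}\), and the resolvent of the operator \(A = A_0 + N_D\) from (20) has the closed form \(J_t(x) = \max (0, (x+2t)/(1+t))\). Running scheme (6) with the illustrative choices \(\lambda _n = 1 - 1/(n+2)\), \(\gamma _n = n+1\), \(e_n = 0\):

```python
# Sketch: variational inequality on D = [0, +inf) with A_0(z) = z - 2.
# For A = A_0 + N_D, solving x in z + t*(z - 2) + t*N_D(z) gives
# J_t(x) = max(0, (x + 2t) / (1 + t)), and A^{-1}(0) = VI(A_0, D) = {2}.

def J(x, t):
    return max(0.0, (x + 2.0 * t) / (1.0 + t))

x, u = 5.0, 0.0                        # x_0 and u are our own choices
for n in range(1000):
    lam, gamma = 1.0 - 1.0 / (n + 2), float(n + 1)
    x = J(lam * u + (1.0 - lam) * x, gamma)

assert abs(x - 2.0) < 2e-2             # converges to P_F(u) = 2
```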

Theorem 4.3

Let H be a real Hilbert space, D be a nonempty closed and convex subset of H, \(A_0 :D \longrightarrow H\) be a single-valued, monotone and hemicontinuous map, and \(N_D(z)\) be the normal cone to D at z. For any \(x_0,u \in H\), let the sequence \((x_n)\) be generated by (6), with A being defined by (20), where \(\lambda _n \in ]0,1[\) and \(\gamma _n \in ]0,+\infty [\) for all \(n \ge 0\). If \(\lambda _n \rightarrow 1\), \(\gamma _n \rightarrow +\infty \), and \(((1-\lambda _n)e_n) \rightarrow 0\) as \(n \rightarrow +\infty \), then \(VI(A_0,D) \ne \emptyset \) if and only if \((x_n)\) is bounded, and in this case, \((x_n)\) converges strongly to an element of \(VI(A_0,D)\), which is the metric projection of u onto \(A^{-1}(0)\).

Proof

Since A is a maximal monotone operator and \(VI(A_0,D)=A^{-1}(0)\), the conclusion follows from Theorem 3.2. \(\square \)

Remark 4.2

Similarly, if \(( y_n )\) is generated by (13) with A being defined by (20), then \(VI(A_0,D) \ne \emptyset \) if and only if \((y_n )\) is bounded, and in this case, \((y_n)\) converges strongly to \(P_{F}u\), the metric projection of u onto \(F:=A^{-1}(0)\), which is an element of \(VI(A_0,D)\).

5 Conclusions

In this paper, we studied two PPAs with error sequences for a maximal monotone operator A in a real Hilbert space H. In each case, we provided a necessary and sufficient condition for the zero set of A to be nonempty and proved, without assuming the boundedness of the error sequence, that the PPA then converges strongly to a zero of A. Moreover, we showed the usefulness of both schemes: their speeds of convergence cannot be compared in general, and therefore, depending on the problem studied, one may prove more useful than the other. Since verifying that the zero set of A is nonempty might be a difficult task, our criterion may provide a very useful and convenient way to do so, especially from a computational point of view. Moreover, the fact that the error sequence may be unbounded in our results is more realistic, both from a theoretical and from a practical point of view, and provides a significant improvement compared to the existing literature. As a future direction for research, since numerous other PPAs have been developed and their convergence studied by many authors, it might be interesting to investigate whether the ideas and methods developed in this paper can be implemented for these other PPAs.