Abstract
We consider a proximal point algorithm with errors for a maximal monotone operator in a real Hilbert space, previously studied by Boikanyo and Morosanu, who assumed that the zero set of the operator is nonempty and the error sequence is bounded. In this paper, using our own approach, we significantly improve the previous results by giving a necessary and sufficient condition for the zero set of the operator to be nonempty, and by showing that in this case the iterative sequence converges strongly to the metric projection of some point onto the zero set of the operator, without assuming the boundedness of the error sequence. We also study, in a similar way, the strong convergence of a new proximal point algorithm, and present some applications of our results to optimization and variational inequalities.
1 Introduction
In a fundamental work in 1970, Rockafellar [1] proved that the subdifferential of a proper, convex and lower semicontinuous function is a maximal monotone operator. One of the most effective iterative methods for approximating the zeroes of a maximal monotone operator is the proximal point algorithm (PPA), introduced by Martinet [2] for minimizing a convex function on \(\mathbb {R}^n\) and subsequently improved by Rockafellar [3]. For more information on the PPA, we refer the reader to [4–8] and the references therein.
The first mean ergodic theorem for nonexpansive self-mappings of a nonempty, closed and convex subset of a Hilbert space H was proved by Baillon [9]. For more details on maximal monotone operators and nonexpansive mappings, we refer the reader to [10–13]. The notion of nonexpansive sequences arose in connection with the study of iterates of nonexpansive mappings in situations where the domain of such mappings may not be convex; see [14–16].
Recently, Boikanyo and Morosanu [17] studied the convergence of a PPA with errors for a maximal monotone operator A. They assumed that the zero set of A is nonempty and the error sequence is bounded. In this paper, motivated by our approach in [5, 14–16], we significantly improve their results by giving a necessary and sufficient condition for the zero set of A to be nonempty, and by showing that in this case, this PPA converges strongly to the metric projection of u onto the zero set of A, without assuming the boundedness of the error sequence. We also introduce a new PPA and prove similar results for this PPA. Finally, we present some applications of our results to optimization and variational inequalities.
2 Preliminaries
Let H be a real Hilbert space with inner product \(\langle \cdot ,\cdot \rangle \) and norm \(\Vert \cdot \Vert \). We denote weak convergence in H by \(\rightharpoonup \) and strong convergence by \(\rightarrow \). An operator \(A:D(A) \subseteq H \rightrightarrows H\) is said to be monotone if its graph is a monotone subset of \(H \times H\), that is,
\(\langle y_1-y_2, x_1-x_2 \rangle \ge 0 \qquad (1)\)
for all \(x_1,x_2 \in D(A)\) and all \(y_1 \in Ax_1\) and \(y_2 \in Ax_2\). A is maximal monotone if A is monotone and the graph of A is not properly contained in the graph of any other monotone operator. It is known (Minty's theorem) that in a Hilbert space, this is equivalent to the range of the operator \(I+A\), denoted by \(R(I+A)\), being all of H, where I is the identity operator on H, i.e., \(R(I+A)=H\).
Let \(D \subseteq H\) be nonempty. A mapping \(T:D \longrightarrow H\) is said to be nonexpansive if
\(\Vert Tx-Ty \Vert \le \Vert x-y \Vert \qquad (2)\)
for all \(x,y \in D\).
For a maximal monotone operator A, and for every \(t > 0\), the operator
\(J_{t}: H \longrightarrow H\) defined by \(J_{t}(x):=(I+tA)^{-1}(x)\) is well-defined, single-valued and nonexpansive on H. It is called the resolvent of A.
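As a quick concrete illustration (our own, not from the paper): for the affine operator \(Ax = x-1\) on \(H=\mathbb {R}\), solving \(y + t(y-1) = x\) gives the closed form \(J_t(x)=(x+t)/(1+t)\), which makes both properties visible numerically.

```python
def resolvent(x: float, t: float) -> float:
    # J_t(x) = (I + tA)^{-1}(x) for A(x) = x - 1 on R:
    # solve y + t*(y - 1) = x  =>  y = (x + t)/(1 + t).
    return (x + t) / (1.0 + t)

# Nonexpansiveness: |J_t(x) - J_t(y)| = |x - y|/(1 + t) <= |x - y|.
assert abs(resolvent(5.0, 2.0) - resolvent(-3.0, 2.0)) <= abs(5.0 - (-3.0))

# For large t, J_t(x) approaches 1, the unique zero of A (cf. Lemma 3.1 below).
print(resolvent(5.0, 1e6))  # close to 1
```
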
Consider the following set-valued problem: find \(x \in D(A)\) such that
\(0 \in A(x). \qquad (3)\)
One of the most effective iterative methods for solving problem (3) is the proximal point algorithm. The PPA generates an iterative sequence \((x_n)\) as follows:
for all \(n \ge 0\), where \(x_0 \in H\) is a given starting point, \((\gamma _n) \subset ]0,+ \infty [\) and \((e_n)\) is a sequence of computational errors, which are always present when experimental measurements are performed.
In 2013, Boikanyo and Morosanu [17] considered the sequence generated by the following algorithm:
\(x_{n+1}=(I+\gamma _{n}A)^{-1}(\lambda _{n}u+(1-\lambda _{n})(x_{n}+e_{n})), \qquad (5)\)
where \(x_{0}, u \in H\), \(\lambda _{n}\in ]0,1[\), and \(\gamma _{n}\in ]0,+\infty [\), for all \(n\ge 0\). They showed that if \(A^{-1}(0)\ne \emptyset \) and \(\lambda _n \rightarrow 1\), \(\gamma _n \rightarrow + \infty \) and \((e_n)\) is bounded, then \(( x_{n} )\) converges strongly to an element of \(A^{-1}(0)\), which is nearest to u.
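This scheme is easy to try numerically. The sketch below is our own illustration, assuming the relaxed resolvent form \(x_{n+1}=(I+\gamma _n A)^{-1}(\lambda _n u+(1-\lambda _n)(x_n+e_n))\) (consistent with Example 3.2 later in the paper), applied to \(Ax=x-1\) on \(\mathbb {R}\), whose zero set is \(\{1\}\); the parameter choices, including the unbounded errors \(e_n=n\), are illustrative.

```python
def resolvent(r, t):
    # (I + tA)^{-1}(r) for A(x) = x - 1: solve y + t*(y - 1) = r.
    return (r + t) / (1.0 + t)

def ppa(x0, u, n_steps):
    # x_{n+1} = J_{gamma_n}(lambda_n*u + (1 - lambda_n)*(x_n + e_n)).
    x = x0
    for n in range(n_steps):
        lam = 1.0 - 1.0 / (n + 2) ** 2   # lambda_n -> 1
        gam = float(n + 1)               # gamma_n -> +infinity
        e = float(n)                     # unbounded errors, yet (1-lam)*e -> 0
        x = resolvent(lam * u + (1.0 - lam) * (x + e), gam)
    return x

# A^{-1}(0) = {1}, so the iterates should approach P_F(u) = 1 for any u.
print(ppa(x0=0.0, u=5.0, n_steps=2000))  # close to 1
```

Note that \((e_n)\) is unbounded here; only \((1-\lambda _n)e_n = n/(n+2)^2 \rightarrow 0\), which is exactly the weakened condition studied in this paper.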
In Sect. 3, we consider the sequence generated by (5) and give a necessary and sufficient condition for the zero set of A to be nonempty, and we show that in this case, the sequence \((x_{n})\) converges strongly to the metric projection of u onto \(A^{-1}(0)\) without assuming the boundedness of \((e_n)\). These results significantly improve upon the results of Boikanyo and Morosanu [17] who assumed that the zero set of A is nonempty and the error sequence \((e_{n})\) is bounded. We also introduce a new PPA defined by the iterative sequence: \(y_{n+1}=(I+\gamma _{n}A)^{-1}(\lambda _{n}u +(1-\lambda _{n})(y_0+e_n))\), where \(y_{0}, u \in H\), and prove similar results for the sequence \(( y_{n} )\). Finally in Sect. 4, we present some applications of our results to optimization and variational inequalities.
3 Main Results
For the proof of our main results in this paper, we need the following useful lemma, and we refer the reader to [18] or [19] for its proof.
Lemma 3.1
Let \(A:D(A) \subseteq H \rightrightarrows H\) be a maximal monotone operator with \(A^{-1}(0)=F \ne \emptyset \). Then for any \(u \in H\), \((I+tA)^{-1}u \rightarrow P_{F}u\) as \(t \rightarrow +\infty \), where \(P_{F}u\) denotes the metric projection of u onto F.
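Lemma 3.1 can be checked by hand in one dimension. The sketch below uses our own illustrative choice \(A=\partial f\) with \(f(x)=\max (0,|x|-1)\), so \(F=A^{-1}(0)=[-1,1]\) and the resolvent is the proximal map of \(tf\), available in closed form.

```python
def resolvent(u, t):
    # J_t(u) = prox_{t f}(u) for f(x) = max(0, |x| - 1), where A = subdiff f
    # and F = A^{-1}(0) = [-1, 1].
    if abs(u) <= 1.0:
        return u                          # already a minimizer of f
    if abs(u) <= 1.0 + t:
        return 1.0 if u > 0 else -1.0     # pulled onto the kink of f
    return u - t * (1.0 if u > 0 else -1.0)

u = 7.5
for t in [1.0, 5.0, 10.0]:
    print(t, resolvent(u, t))
# Once t >= |u| - 1, J_t(u) equals P_F(u) = 1, the projection of u onto
# [-1, 1], matching the limit asserted by Lemma 3.1.
```
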
3.1 The Proximal Point Algorithm Defined by (6)
In the following theorem, we give a necessary and sufficient condition for the zero set of A to be nonempty.
Theorem 3.1
Let \(A:D(A) \subseteq H \rightrightarrows H\) be a maximal monotone operator. For any fixed \(x_0,u \in H\), let the sequence \((x_n)\) be generated by
\(x_{n+1}=(I+\gamma _{n}A)^{-1}(\lambda _{n}u+(1-\lambda _{n})(x_{n}+e_{n})) \qquad (6)\)
for all \(n \ge 0\), where \(\lambda _n \in ]0,1[\) and \(\gamma _n \in ]0,+\infty [\) for all \(n \ge 0\). Then \(A^{-1}(0) \ne \emptyset \) if and only if there exist two sequences \((\lambda _n) \subset ]0,1[\) and \((\gamma _n) \subset ]0,+\infty [\), with \(\lambda _n \rightarrow 1\) and \(\gamma _n \rightarrow +\infty \) as \(n \rightarrow +\infty \), such that the sequence \(((1-\lambda _n)e_n)\) is bounded, and \(\liminf _{n \rightarrow \infty }(\Vert x_{n+1} \Vert +(1-\lambda _n)\Vert x_n \Vert ) < \infty \), or equivalently, \(\liminf _{n \rightarrow \infty }(\Vert x_{n+1} \Vert +(1-\lambda _n)\Vert x_{n+1}-x_n \Vert ) < \infty .\)
Proof
Assume that \(A^{-1}(0)\ne \emptyset \), and let \(p \in A^{-1}(0)\). From (6), and the fact that the resolvent operator is nonexpansive, for all \(m \ge 0\) we have:
From the above inequality, for all \(n \ge 0\), we get:
Since \(\lambda _n \rightarrow 1\) and \((1-\lambda _n) e_n\) is bounded, there exist \(0< \alpha <1\) and \(M > 0\) such that for all \( n \in \mathbb {N}\), we have \(1-\lambda _n \le \alpha \) and \((1-\lambda _n) \Vert e_n \Vert \le M\). Hence from (8), we have
This inequality shows that \((x_n)\) is bounded and therefore
Conversely, assume that \(\liminf _{n \rightarrow \infty }(\Vert x_{n+1} \Vert +(1-\lambda _n)\Vert x_n \Vert ) < \infty \). Then there exists a subsequence (n(k)) of \(\mathbb {N}\) such that \(\Vert x_{n(k)+1} \Vert +(1-\lambda _{n(k)})\Vert x_{n(k)} \Vert \) is bounded. Therefore, there exists a subsequence (m(l)) of (n(k)) such that \(x_{m(l)+1} \rightharpoonup p\) as \(l \rightarrow +\infty \) for some \(p \in H\). Also \(((1-\lambda _{m(l)}) x_{m(l)} )\) is a bounded sequence. Now for every \(x \in D(A)\) and \(y \in A(x)\), since
we get
Since \((1-\lambda _{m(l)}) x_{m(l)} \) and \(x_{m(l)+1}\) are bounded, \(\lambda _n \rightarrow 1\), \(\gamma _n \rightarrow +\infty \) and \(((1-\lambda _n)e_n)\) is bounded, it follows that:
Now letting \(l \rightarrow \infty \) in (10), we get:
Therefore, by the maximality of A, we conclude that \(p \in D(A)\) and \(0 \in A(p)\). This completes the proof. \(\square \)
Remark 3.1
The proof of Theorem 3.1 shows that \(A^{-1}(0) \ne \emptyset \) if and only if \((x_n)\) is bounded.
Corollary 3.1
Let \(A:D(A) \subseteq H \rightrightarrows H\) be a maximal monotone operator. For any fixed \(x_0,u \in H\), let the sequence \((x_n)\) be generated by (6), where \(\lambda _n \in ]0,1[\) and \(\gamma _n \in ]0,+\infty [\) for all \(n \ge 0\). Assume that there exist two sequences \((\lambda _n) \subset ]0,1[\) and \((\gamma _n) \subset ]0,+\infty [\), with \(\lambda _n \rightarrow 1\) and \(\gamma _n \rightarrow +\infty \) as \(n \rightarrow +\infty \), such that the sequence \(((1-\lambda _n)e_n)\) is bounded. Then \(A^{-1}(0) \ne \emptyset \) if \(\liminf _{n \rightarrow \infty }\Vert x_{n} \Vert < \infty \) and the sequence \((x_{n+1}-x_n)\) is bounded.
Proof
This follows from Theorem 3.1, since in this case, the condition
is satisfied. \(\square \)
The following theorem extends the previous results by Boikanyo and Morosanu [17], who assumed that the zero set of A is nonempty and the error sequence \((e_{n})\) is bounded.
Theorem 3.2
Suppose that \(A:D(A) \subseteq H \rightrightarrows H\) is a maximal monotone operator. For any \(x_0,u \in H\), let the sequence \((x_n)\) be generated by (6), where \(\lambda _n \in ]0,1[\) and \(\gamma _n \in ]0,+\infty [\) for all \(n \ge 0\). If \(\lambda _n \rightarrow 1\), \(\gamma _n \rightarrow +\infty \) and \(((1-\lambda _n)e_n) \rightarrow 0\) as \(n \rightarrow +\infty \), then \((x_n)\) converges strongly to \(P_{F}u\), the metric projection of u onto \(F:=A^{-1}(0)\), if and only if
Proof
We know from Theorem 3.1 and Remark 3.1 that \(A^{-1}(0) \ne \emptyset \) and the sequence \((x_n)\) is bounded. Moreover, from (6), we have
where the second inequality follows from the fact that the resolvent operator is nonexpansive. Now the result follows immediately by using Lemma 3.1 and letting \(n \rightarrow +\infty \) in the above inequality. This completes the proof. \(\square \)
Remark 3.2
For every sequence \((e_n)\) in H (not necessarily bounded), there exists a sequence \((\lambda _n)\) in ]0, 1[ such that \(\lambda _n \rightarrow 1\) and \(((1-\lambda _n)e_n) \rightarrow 0\) as \(n \rightarrow +\infty \). For example, we can consider \(\lambda _n:=1-\frac{1}{(n+1)(\Vert e_n \Vert +1)}\), for all \(n \ge 0\). Hence, for any maximal monotone operator \(A:D(A) \subseteq H \rightrightarrows H\) in a real Hilbert space H with \(A^{-1}(0) \ne \emptyset \), the iterative sequence generated by (6) with this choice of \((\lambda _n)\) converges strongly to the metric projection of u onto \(A^{-1}(0)\), where \(\gamma _n \in ]0,+\infty [\) for all \(n \ge 0\) and \(\gamma _n \rightarrow +\infty \) as \(n \rightarrow +\infty \).
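A quick numeric check of Remark 3.2's choice \(\lambda _n = 1-\frac{1}{(n+1)(\Vert e_n \Vert +1)}\) (our own illustration, with a rapidly growing error sequence \(e_n=n^3\)):

```python
def lam(n, e_n):
    # Remark 3.2's choice: lambda_n = 1 - 1/((n+1)*(|e_n| + 1)).
    return 1.0 - 1.0 / ((n + 1) * (abs(e_n) + 1.0))

# An unbounded (cubically growing) error sequence, as an illustration.
errors = [float(n ** 3) for n in range(1, 200)]
residual = [(1.0 - lam(n, e)) * e for n, e in enumerate(errors, start=1)]

# (1 - lambda_n)*|e_n| = |e_n| / ((n+1)*(|e_n|+1)) <= 1/(n+1) -> 0.
print(residual[0], residual[-1])
```

The bound \((1-\lambda _n)\Vert e_n \Vert \le \frac{1}{n+1}\) holds for any \((e_n)\) whatsoever, which is why this choice tames even unbounded errors.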
The following example shows that, if \(A^{-1}(0) = \emptyset \), then the sequence \((x_n)\) is unbounded.
Example 3.1
Let \(H=\mathbb {R}\) and \(A:H \rightarrow H\) be defined by \(Ax=-e^{-x}\). Obviously, A is maximal monotone and \(A^{-1}(0) = \emptyset \). For every \(x_0,u \in H\) let the sequence \((x_n)\) be generated by (6). Then
If for two sequences \((\lambda _n)\) and \((\gamma _n)\) with \(\lambda _n \rightarrow 1\), \(\gamma _n \rightarrow +\infty \) and \((1-\lambda _n)e_n \rightarrow 0\), the sequence \((x_n)\) is bounded, then from (12), it follows that \(\limsup _{n \rightarrow \infty }\gamma _n e^{-x_{n+1}} \le |u|+\sup _{n \ge 0} |x_{n}| < +\infty \). Since \(\gamma _n \rightarrow +\infty \), we must have \(\lim _{n \rightarrow \infty } e^{-x_{n+1}}=0\), and hence, \(x_{n+1} \rightarrow +\infty \), which is a contradiction. Therefore, \((x_n)\) is unbounded.
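Example 3.1 can be simulated. In the sketch below (our own illustration, with \(e_n=0\) and illustrative parameters), the resolvent equation \(y-\gamma e^{-y}=r\) is solved by bisection, which applies because \(y \mapsto y-\gamma e^{-y}\) is strictly increasing; the iterates drift to \(+\infty \), as the example predicts.

```python
import math

def resolvent(r, t):
    # (I + tA)^{-1}(r) for A(x) = -exp(-x): solve g(y) = y - t*exp(-y) = r.
    # g is strictly increasing, g(r) < r, and g(max(r,0) + t + 1) > r,
    # so bisection on that bracket converges.
    lo, hi = r, max(r, 0.0) + t + 1.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if mid - t * math.exp(-mid) < r:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

x, u = 0.0, 0.0
for n in range(500):
    lam = 1.0 - 1.0 / (n + 2)
    gam = float(n + 1)
    x = resolvent(lam * u + (1.0 - lam) * x, gam)  # e_n = 0 here
print(x)  # keeps growing (roughly like log(gamma_n)): (x_n) is unbounded
```
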
3.2 A New Proximal Point Algorithm
In this section, we introduce a new algorithm, (13), and show that the sequence \((y_n)\) converges strongly to \(P_Fu\). First we give a necessary and sufficient condition for the zero set of A to be nonempty.
Theorem 3.3
Let \(A:D(A) \subseteq H \rightrightarrows H\) be a maximal monotone operator. For any fixed \(y_0,u \in H\), let the sequence \((y_n)\) be generated by
\(y_{n+1}=(I+\gamma _{n}A)^{-1}(\lambda _{n}u +(1-\lambda _{n})(y_{0}+e_{n})) \qquad (13)\)
for all \(n \ge 0\), where \(\lambda _n \in ]0,1[\) and \(\gamma _n \in ]0,+\infty [\) for all \(n \ge 0\). Then \(A^{-1}(0) \ne \emptyset \) if and only if there exist two sequences \((\lambda _n) \subset ]0,1[\) and \((\gamma _n) \subset ]0,+\infty [\) with \(\lambda _n \rightarrow 1\) and \(\gamma _n \rightarrow +\infty \) as \(n \rightarrow +\infty \), such that the sequence \(((1-\lambda _n)e_n)\) is bounded, and \(\liminf _{n \rightarrow \infty } \Vert y_{n} \Vert < \infty \).
Proof
Assume that \(A^{-1}(0) \ne \emptyset \) and let \(p \in A^{-1}(0)\). From (13) and the fact that the resolvent operator is nonexpansive, for all \(m \ge 0\) we have:
This inequality shows that the sequence \((y_n)\) is bounded. Hence
Conversely, assume that \(\liminf _{n \rightarrow \infty } \Vert y_{n} \Vert < \infty \). Then there exists a subsequence (n(k)) of \(\mathbb {N}\) such that \((y_{n(k)})\) is bounded. Therefore, there exists a subsequence (m(l)) of (n(k)) such that \(y_{m(l)} \rightharpoonup p\) as \(l \rightarrow +\infty \), for some \(p \in H\). Now for every \(x \in D(A)\) and \(y \in A(x)\), since
we get
Since \(\lambda _n \rightarrow 1\), \(\gamma _n \rightarrow +\infty \) and \(((1-\lambda _n)e_n)\) is bounded, it follows that:
Now letting \(l \rightarrow \infty \) in (15), we get:
Therefore, by the maximality of A, we conclude that \(p \in D(A)\) and \(0 \in A(p)\). This completes the proof. \(\square \)
Remark 3.3
The proof of Theorem 3.3 shows that \(A^{-1}(0) \ne \emptyset \) if and only if \((y_n)\) is bounded.
Theorem 3.4
Let \(A:D(A) \subseteq H \rightrightarrows H\) be a maximal monotone operator. For any \(y_0, u \in H\), let the sequence \((y_n)\) be generated by (13), where \(\lambda _n \in ]0,1[\) and \(\gamma _n \in ]0,+\infty [\) for all \(n \ge 0\). If \(\lambda _n \rightarrow 1\), \(\gamma _n \rightarrow +\infty \) and \(((1-\lambda _n)e_n) \rightarrow 0\) as \(n \rightarrow +\infty \), then \((y_n)\) converges strongly to \(P_{F}u\), the metric projection of u onto \(F:= A^{-1}(0)\), if and only if \(\liminf _{n \rightarrow \infty }\Vert y_{n} \Vert < \infty \).
Proof
The proof is done along similar lines as in the proof of Theorem 3.2, and therefore, we omit it here. \(\square \)
3.3 Comparison of Algorithms (6) and (13)
In the following theorems, we compare Algorithms (6) and (13).
Theorem 3.5
Let \((x_n)\) and \((y_n)\) be generated by (6) and (13), respectively. If \(1>\lambda _n \ge \alpha >0\), for some \(\alpha >0\) and all \(n \ge 0\), then \((x_n)\) is bounded if and only if \((y_n)\) is bounded.
Proof
Assume that \((x_n)\) is bounded. Then for all \(n \ge 0\):
and this shows that \((y_n)\) is bounded.
Conversely, assume that \((y_n)\) is bounded. Then there exist \(\beta \in ]0,1[\) and \(M >0\) such that \(\Vert y_0-y_n \Vert \le M\) and \(1- \lambda _n < \beta \) for all \(n \ge 0\).
Therefore, for all \(m \ge 0\), we have:
Using the above inequality, by induction, we get for all \(n \ge 0\),
This inequality shows that \((x_n)\) is bounded, and the proof is now complete. \(\square \)
Theorem 3.6
Let \((x_n)\) and \((y_n)\) be generated by (6) and (13), respectively. If \(\lambda _n \rightarrow 1\), as \(n \rightarrow +\infty \), then \((x_n)\) converges strongly to \(P_{F}(u)\) if and only if \((y_n)\) converges strongly to \(P_{F}(u)\).
Proof
If \((x_n)\) converges strongly to \(P_{F}(u)\), then it follows from (16) that \((y_n)\) converges strongly to \(P_{F}(u)\).
Conversely, if \((y_n)\) converges strongly to \(P_{F}(u)\), then letting \(n \rightarrow +\infty \) in (17), we conclude that \((x_n)\) converges strongly to \(P_{F}(u)\). This completes the proof. \(\square \)
The following example shows that the speed of convergence of the two algorithms cannot be compared with each other, and therefore, depending on the problem studied, one may prove more useful than the other.
Example 3.2
Let \(A:\mathbb {R} \longrightarrow \mathbb {R}\) be defined by \(Ax=x-1\). Obviously \(A^{-1}(0)=\{ 1 \}\). Let \(\lambda _n=1-\dfrac{1}{n^2}\), \(\gamma _n=n\) and \(e_n=n\). By taking \(x_0=y_0=u=0\), we have \(| x_{n+1}-1 |=\dfrac{1-\dfrac{1}{n}-\dfrac{x_n}{n^2}}{1+n}\) and \(| y_{n+1}-1 |=\dfrac{1-\dfrac{1}{n}}{1+n}\). Therefore, \(|x_{n+1}-1| \le |y_{n+1}-1|\). On the other hand, by taking \(x_0=y_0=2\) and \(u=0\), we have \(| x_{n+1}-1 |=\dfrac{1-\dfrac{1}{n}-\dfrac{x_n}{n^2}}{1+n}\) and \(| y_{n+1}-1 |=\dfrac{1-\dfrac{1}{n}-\dfrac{2}{n^2}}{1+n}\). Since \(x_n \rightarrow 1\) as \(n \rightarrow +\infty \), there exists some \(N_0 \in \mathbb {N}\) such that for all \(n \ge N_0\), \(\dfrac{x_n}{n^2} < \dfrac{2}{n^2}\). Then we have \(|y_{n+1}-1| \le |x_{n+1}-1|\) for all \(n \ge N_0\).
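The two recursions of Example 3.2 can be checked numerically; the sketch below reproduces both cases (the closed-form updates follow from \(Ax=x-1\), for which the resolvent is \(r \mapsto (r+\gamma )/(1+\gamma )\)).

```python
def x_next(x, n):
    # Recursion of Example 3.2 for algorithm (6) with A(x) = x - 1,
    # lambda_n = 1 - 1/n^2, gamma_n = n, e_n = n, u = 0:
    # x_{n+1} = ((x_n + n)/n^2 + n) / (1 + n).
    return ((x + n) / n ** 2 + n) / (1.0 + n)

def y_next(y0, n):
    # Same data for algorithm (13); the resolvent input uses y_0 + e_n.
    return ((y0 + n) / n ** 2 + n) / (1.0 + n)

# Case 1: x_0 = y_0 = u = 0, where (6) is at least as fast as (13).
x = 0.0
for n in range(1, 100):
    assert abs(x_next(x, n) - 1.0) <= abs(y_next(0.0, n) - 1.0) + 1e-12
    x = x_next(x, n)

# Case 2: x_0 = y_0 = 2, u = 0, where (13) eventually overtakes (6).
x = 2.0
for n in range(1, 100):
    xn, yn = x_next(x, n), y_next(2.0, n)
    if n >= 3:
        assert abs(yn - 1.0) <= abs(xn - 1.0) + 1e-12
    x = xn
print(abs(x - 1.0))  # both sequences approach the zero x = 1
```
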
4 Applications
4.1 Application to Optimization
Let H be a real Hilbert space and \(f:H \longrightarrow ]-\infty ,+\infty ]\) be a proper, convex and lower semicontinuous function. We know from [1] that \(A= \partial f\) is a maximal monotone operator in H, and the zero set of A coincides with the set of minimizers of f. Then Theorems 3.2 and 3.4 provide approximation schemes for a minimizer of f (\(\mathrm{argmin}\, f\)).
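As a small numeric sketch of such a scheme (our own illustration, with \(f(x)=|x-1|\) on \(\mathbb {R}\), whose proximal map is a shifted soft-thresholding, and with \(e_n=0\)):

```python
def prox(v, t):
    # prox_{t f}(v) = (I + t * subdifferential f)^{-1}(v) for f(x) = |x - 1|,
    # i.e. soft-thresholding shifted to the minimizer x = 1.
    a = v - 1.0
    if a > t:
        return 1.0 + a - t
    if a < -t:
        return 1.0 + a + t
    return 1.0

x, u = 10.0, 0.0
for n in range(200):
    lam = 1.0 - 1.0 / (n + 2)     # lambda_n -> 1
    gam = float(n + 1)            # gamma_n -> +infinity
    x = prox(lam * u + (1.0 - lam) * x, gam)
print(x)  # -> 1.0, the metric projection of u onto argmin f = {1}
```

Here argmin f is the singleton \(\{1\}\), so the limit coincides with \(P_F u\) regardless of u; with a non-unique minimizer, the choice of u selects which minimizer is obtained.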
Theorem 4.1
Let H be a real Hilbert space and \(f:H \longrightarrow ]-\infty ,+\infty ]\) be a proper, convex and lower semicontinuous function. For any \(x_0,u \in H\), let the sequence \((x_n)\) be generated by (6) with \(A= \partial f\), where \(\lambda _n \in ]0,1[\) and \(\gamma _n \in ]0,+\infty [\) for all \(n \ge 0\). If \(\lambda _n \rightarrow 1\), \(\gamma _n \rightarrow +\infty \), and \(((1-\lambda _n)e_n) \rightarrow 0\) as \(n \rightarrow +\infty \), then \((x_n)\) converges strongly to \(P_{F}u\), the metric projection of u onto \(F:=(\partial f)^{-1}(0)= \mathrm{argmin}\, f\), if and only if \((x_n)\) is bounded.
Proof
Since \(A=\partial f\) is a maximal monotone operator, then the conclusion follows from Theorem 3.2. \(\square \)
Remark 4.1
Similarly, if \(( y_n )\) is generated by (13) with \(A= \partial f\), then \((y_n)\) converges strongly to \(P_{F}u\), the metric projection of u onto \(F:=(\partial f)^{-1}(0)= \mathrm{argmin}\, f\), if and only if \((y_n)\) is bounded.
To further illustrate the usefulness of the new PPA (13) introduced in this paper, we provide another application of this scheme to optimization, namely the subgradient proximal algorithm, described as follows:
We select \(y_{0} \in H\). Then with \((y_{n})\) being generated by (13), we compute the sequence \((w_{n}) \subset H\) by
where \(z_n\) is such that
for all \(n \ge 0\), and B(x, r) denotes the open ball in H centered at x with radius r.
Now as an application of Theorem 3.4, we prove the following result.
Theorem 4.2
Let H be a real Hilbert space and \(f:H \longrightarrow ]-\infty ,+\infty ]\) be a proper, convex and lower semicontinuous function. Let \(y_0,u \in H\), \(\varepsilon >0\) be given, and the sequence \((y_n)\) be generated by (13) with \(A= \partial f\), where \(\lambda _n \in ]0,1[\) and \(\gamma _n \in ]0,+\infty [\) for all \(n \ge 0\). Let the sequence \(w_n\) be generated by the subgradient proximal algorithm:
where \(z_n\) is such that
for all \(n \ge 0\). If \(F:=(\partial f)^{-1}(0)= \mathrm{argmin}\, f \ne \emptyset \), \(\lambda _n \rightarrow 1\), \(\gamma _n \rightarrow +\infty \), and \(((1-\lambda _n)e_n) \rightarrow 0\) as \(n \rightarrow +\infty \), then there exists some \(N_0 \in \mathbb {N}\) such that
for all \(n \ge N_0\).
Proof
By the definition of \(y_{n}\), we have:
Therefore, there exists \(\xi _{n} \in \partial f(y_{n+1})\) such that:
Letting \(n \longrightarrow \infty \) in the above equality, we get:
Therefore, there exists some \(N_1 \in \mathbb {N}\) such that for all \(n \ge N_1\)
On the other hand, since by the hypothesis we have:
it follows that there exists some \(N_0 \ge N_1\) such that for all \(n \ge N_0\)
and
Then for all \(n \ge N_0\), we have:
This completes the proof. \(\square \)
4.2 Application to Variational Inequalities
Let D be a nonempty, closed and convex subset of a Hilbert space H, and let \(A_0 :D \longrightarrow H\) be a single-valued, monotone and hemicontinuous (i.e., continuous along each line segment in D with respect to the weak topology of H) mapping. Let \(N_{D}(z)\) denote the normal cone to D at z:
and let \(A: H \rightrightarrows H\) be defined by:
The maximal monotonicity of the multivalued mapping A was proved by Rockafellar [1]. The relation \(0 \in A(z)\) reduces to \(-A_0(z) \in N_D(z)\), or the so-called variational inequality:
We define \(VI(A_0,D)\) as follows:
If D is a cone, this condition can be written as
and the problem of finding such z is an important instance of the well-known complementarity problem of mathematical programming. Then Theorems 3.2 and 3.4 provide approximation schemes for a solution to the variational inequality for a single-valued monotone and hemicontinuous map \(A_0:D \longrightarrow H\).
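A one-dimensional sketch (our own illustration): take \(D=[0,+\infty [\) and \(A_0(z)=z+1\), so the complementarity problem has the unique solution \(z=0\) (since \(-A_0(0)=-1 \in N_D(0)\)). In this affine 1-D case, the resolvent of \(A_0+N_D\) is the unconstrained resolvent clipped to D.

```python
def resolvent(r, t):
    # (I + t*(A_0 + N_D))^{-1}(r) for A_0(z) = z + 1 and D = [0, +inf).
    # In this 1-D affine case: solve z + t*(z + 1) = r, then clip to D
    # (the clipped value satisfies the normal-cone condition).
    return max(0.0, (r - t) / (1.0 + t))

z, u = 5.0, 2.0
for n in range(200):
    lam = 1.0 - 1.0 / (n + 2)
    gam = float(n + 1)
    z = resolvent(lam * u + (1.0 - lam) * z, gam)
print(z)  # -> 0.0: z >= 0, A_0(z) = 1 >= 0 and z * A_0(z) = 0
```

The limit 0 is exactly \(P_{F}u\) with \(F=A^{-1}(0)=\{0\}\), in line with Theorem 4.3.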
Theorem 4.3
Let H be a real Hilbert space, D be a nonempty closed and convex subset of H, \(A_0 :D \longrightarrow H\) be a single-valued, monotone and hemicontinuous map, and \(N_D(z)\) be the normal cone to D at z. For any \(x_0,u \in H\), let the sequence \((x_n)\) be generated by (6), with A being defined by (20), where \(\lambda _n \in ]0,1[\) and \(\gamma _n \in ]0,+\infty [\) for all \(n \ge 0\). If \(\lambda _n \rightarrow 1\), \(\gamma _n \rightarrow +\infty \), and \(((1-\lambda _n)e_n) \rightarrow 0\) as \(n \rightarrow +\infty \), then \(VI(A_0,D) \ne \emptyset \) if and only if \((x_n)\) is bounded, and in this case, \((x_n)\) converges strongly to an element of \(VI(A_0,D)\), which is the metric projection of u onto \(A^{-1}(0)\).
Proof
Since we know that A is a maximal monotone operator and \(VI(A_0,D)=A^{-1}(0)\), then the conclusion follows from Theorem 3.2. \(\square \)
Remark 4.2
Similarly, if \(( y_n )\) is generated by (13) with A being defined by (20), then \(VI(A_0,D) \ne \emptyset \) if and only if \((y_n )\) is bounded, and in this case, \((y_n)\) converges strongly to \(P_{F}u\), the metric projection of u onto \(F:=A^{-1}(0)\), which is an element of \(VI(A_0,D)\).
5 Conclusions
In this paper, we studied two PPAs with error sequences for a maximal monotone operator A in a real Hilbert space H. In each case, we provided a necessary and sufficient condition for the zero set of A to be nonempty, and proved the strong convergence of the PPA to a zero of A in this case, without assuming the boundedness of the error sequence. Moreover, we showed the usefulness of both schemes by showing that their speeds of convergence cannot be compared with each other, and therefore, depending on the problem studied, one may prove more useful than the other. Since verifying that the zero set of A is nonempty might be a difficult task, our criterion may provide a very useful and convenient way to accomplish it, especially from a computational point of view. Moreover, the fact that in our paper the error sequence may be unbounded is more realistic, both from a theoretical and from a practical point of view, and provides a significant improvement compared to the existing literature. As a future direction for research, since numerous other PPAs have been developed and their convergence studied by many authors, it might be interesting to investigate the possibility of applying the ideas and methods developed in this paper to these other PPAs.
References
Rockafellar, R.T.: On the maximal monotonicity of subdifferential mappings. Pac. J. Math. 33, 209–216 (1970)
Martinet, B.: Régularisation d'inéquations variationnelles par approximations successives. Rev. Française Informat. Recherche Opérationnelle 4 (Sér. R-3), 154–158 (1970)
Rockafellar, R.T.: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14, 877–898 (1976)
Boikanyo, O.A., Morosanu, G.: Modified Rockafellar’s algorithm. Math. Sci. Res. J. 13, 101–122 (2009)
Djafari Rouhani, B., Khatibzadeh, H.: On the proximal point algorithm. J. Optim. Theory Appl. 137, 411–417 (2008)
Güler, O.: On the convergence of the proximal point algorithm for convex minimization. SIAM J. Control Optim. 29, 403–419 (1991)
Lehdili, N., Moudafi, A.: Combining the proximal algorithm and Tikhonov regularization. Optimization 37, 239–252 (1996)
Xu, H.K.: A regularization method for the proximal point algorithm. J. Global Optim. 36, 115–125 (2006)
Baillon, J.B.: Un théorème de type ergodique pour les contractions non linéaires dans un espace de Hilbert. C.R. Acad. Sci. Paris Sér. A–B 280, A1511–A1514 (1975)
Borwein, J.M.: Fifty years of maximal monotonicity. Optim. Lett. 4, 473–490 (2010)
Brézis, H.: Opérateurs Maximaux Monotones et Semi-Groupes de Contractions dans les Espaces de Hilbert, North-Holland Mathematics Studies, vol. 5. North-Holland, Amsterdam (1973)
Morosanu, G.: Nonlinear Evolution Equations and Applications. Reidel, Dordrecht (1988)
Pardalos, P.M., Rassias, T.M., Khan, A.A.: Nonlinear Analysis and Variational Problems, Springer Optimization and Its Applications, vol. 35. Springer, New York (2010)
Djafari Rouhani, B.: Ergodic theorems for nonexpansive sequences in Hilbert spaces and related problems, Ph.D. Thesis, Yale University, part I, pp. 1–76 (1981)
Djafari Rouhani, B.: Asymptotic behaviour of quasi-autonomous dissipative systems in Hilbert spaces. J. Math. Anal. Appl. 147, 465–476 (1990)
Djafari Rouhani, B.: Asymptotic behaviour of almost nonexpansive sequences in a Hilbert space. J. Math. Anal. Appl. 151, 226–235 (1990)
Boikanyo, O.A., Morosanu, G.: Strong convergence of a proximal point algorithm with bounded error sequence. Optim. Lett. 7, 415–420 (2013)
Bruck Jr., R.E.: A strongly convergent iterative solution of \(0 \in U(x)\) for a maximal monotone operator U in Hilbert space. J. Math. Anal. Appl. 48, 114–126 (1974)
Morosanu, G.: Asymptotic behaviour of resolvent for a monotone mapping in a Hilbert space. Atti Accad. Naz. Lincei Rend. Cl. Sci. Fis. Mat. Natur. 61, 565–570 (1977)
Acknowledgments
The authors are grateful to the referees for valuable suggestions leading to the improvement of the paper.
Djafari Rouhani, B., Moradi, S. Strong Convergence of Two Proximal Point Algorithms with Possible Unbounded Error Sequences. J Optim Theory Appl 172, 222–235 (2017). https://doi.org/10.1007/s10957-016-1028-5