Abstract
We consider the regularization of two proximal point algorithms (PPA) with errors for a maximal monotone operator in a real Hilbert space, studied previously by Xu and by Boikanyo and Morosanu, respectively, under the assumption that the zero set of the operator is nonempty. We provide a counterexample showing an error in Xu’s theorem, and then prove a correct extended version of it by giving a necessary and sufficient condition for the zero set of the operator to be nonempty and showing the strong convergence of the regularized scheme to a zero of the operator. This gives a first affirmative answer to the open question raised by Boikanyo and Morosanu concerning the design of a PPA in which the error sequence tends to zero while a parameter sequence remains bounded. We then investigate the second PPA under various new conditions on the parameter sequences and prove theorems similar to those above, thereby providing a second affirmative answer to the open question of Boikanyo and Morosanu. Finally, we present some applications of our new convergence results to optimization and variational inequalities.
1 Introduction
In a fundamental work, Rockafellar [1] proved that the subdifferential of a proper, convex and lower semicontinuous function is a maximal monotone operator. Since the zeroes of the subdifferential correspond to the minimizers of the function, finding and approximating those zeroes is a problem of fundamental importance in optimization. One of the most effective iterative methods for solving this problem is the proximal point algorithm (abbreviated PPA), which was studied by Rockafellar [2]. The first mean ergodic theorem for nonexpansive self-mappings of a nonempty closed and convex subset of a Hilbert space was proved by Baillon [3]. For more details on maximal monotone operators and nonexpansive mappings, we refer the reader to [4,5,6,7]. In this paper, motivated by our approach in [8,9,10,11,12], we consider the regularization of two PPA with errors for a maximal monotone operator in a real Hilbert space, studied previously by Xu and by Boikanyo and Morosanu, respectively, under the assumption that the zero set of the operator is nonempty. First, by providing a counterexample, we point out an error in the proof of Xu’s theorem concerning the uniformity of the strong convergence with respect to a parameter. Subsequently, we prove a correct extended version of that theorem under appropriate conditions, giving also a necessary and sufficient condition for the zero set of the operator to be nonempty and showing the strong convergence of the regularized scheme to a zero of the operator. We note in passing that Song and Yang [13] had already noticed and corrected an error in Xu’s paper, and Wang [14] had extended a theorem in Xu’s paper; however, neither of these authors had noticed the error that we point out and correct here. This gives a first affirmative answer to the open question raised by Boikanyo and Morosanu concerning the design of a PPA in which the error sequence tends to zero and a parameter sequence remains bounded.
We then investigate the second PPA under various new conditions on the parameter sequences and prove theorems similar to those above, providing also a second affirmative answer to the open question of Boikanyo and Morosanu. We note also that Yao and Shahzad [15] had already provided an affirmative answer to this open question, however with a PPA different from the one considered by Boikanyo and Morosanu and under conditions on the parameter sequences stronger than ours, whereas our paper answers the open question affirmatively for the same PPA under mild conditions on the parameter sequences. Finally, we present some applications of our new convergence results to optimization and variational inequalities.
2 Preliminaries
Let H be a real Hilbert space with inner product \(\langle \cdot ,\cdot \rangle \) and norm \(\Vert \cdot \Vert \). An operator \(T:D(T) \subseteq H \rightrightarrows H\) is said to be monotone if its graph G(T) is a monotone subset of \(H \times H\), that is,
$$\begin{aligned} \langle y_1-y_2, x_1-x_2 \rangle \ge 0, \end{aligned}$$
for all \(x_1,x_2 \in D(T)\) and all \(y_1 \in T(x_1)\) and \(y_2 \in T(x_2)\). Clearly, if T is monotone, then its inverse defined by \(T^{-1}:= \{(y,x): (x,y) \in G(T)\}\) is also a monotone operator. We say that T is maximal monotone if T is monotone and the graph of T is not properly contained in the graph of any other monotone operator. It is known that, in a Hilbert space, this is equivalent to the range of the operator \(I+T\) being all of H, where I is the identity operator on H, i.e., \(R(I+T)=H\). It is also clear that T is maximal monotone if and only if \(T^{-1}\) is maximal monotone. For a maximal monotone operator T and every \(t > 0\), the operator \(J_{t}^{\mathrm{T}}: H \longrightarrow H\) defined by \(J_{t}^{\mathrm{T}}(x):=(I+tT)^{-1}(x)\) is well defined, single-valued and nonexpansive on H. It is called the resolvent of T. Consider the following set-valued problem: find \(x \in D(T)\) such that \(0 \in T(x)\). One of the most effective iterative methods for solving this problem is the proximal point algorithm (PPA), which generates an iterative sequence \((x_n)\) as follows: \(x_{n+1}=J_{\gamma _n}^{\mathrm{T}}(x_n+e_n),\) for all \(n \ge 0\), where \(x_0 \in H\) is a given starting point, \((\gamma _n) \subset ]0,+ \infty [\) and \((e_n)\) is a sequence of computational errors. In 2006, Xu [16] proposed the following regularization of the proximal point algorithm:
$$\begin{aligned} x_{n+1}:=J^{\mathrm{T}}_{c_n}(t_n u+(1-t_n)x_n+e_n), \end{aligned}$$(1)
where \(x_0,u \in H\), \(t_n \in ]0,1[\), \(c_n \in ]0,+ \infty [\), for all \(n \ge 0\). He showed that if \(T^{-1}(0) \ne \emptyset \) and \(t_n \rightarrow 0\), \(\sum _{n=0}^{\infty } t_n = \infty \), \(\sum _{n=0}^{\infty } \Vert e_n\Vert < \infty \), \(t_{n+1} \le \frac{c_{n+1}}{c_n}\) and either \(\lim _{n \rightarrow \infty }\frac{1}{t_n}|\frac{c_{n+1}t_n}{c_nt_{n+1}}-1|=0\) or \(\sum _{n=0}^{\infty }|\frac{c_{n+1}t_n}{c_nt_{n+1}}-1| < \infty \), then \((x_n)\) converges strongly to an element of \(T^{-1}(0)\) which is nearest to u. This algorithm essentially includes the algorithm that was introduced by Lehdili and Moudafi [17].
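As a toy illustration of the basic PPA iteration above (not part of the paper), consider the operator \(Tx=x\) on \(H=\mathbb {R}\), whose resolvent has the closed form \(J^{\mathrm{T}}_{c}(x)=x/(1+c)\); the iterates then contract toward the unique zero \(0\) of T. A minimal sketch, assuming exact resolvent evaluations and zero errors:

```python
# Toy PPA for T(x) = x on the real line, where the resolvent is
# J_c(x) = (I + cT)^{-1}(x) = x / (1 + c) in closed form.
def resolvent(x, c):
    return x / (1.0 + c)

def ppa(x0, gammas, errors):
    """Run the PPA iteration x_{n+1} = J_{gamma_n}(x_n + e_n)."""
    x = x0
    for g, e in zip(gammas, errors):
        x = resolvent(x + e, g)
    return x

x = ppa(x0=10.0, gammas=[1.0] * 50, errors=[0.0] * 50)
print(abs(x))  # very close to 0, the unique zero of T
```

Each step multiplies the distance to the zero by \(1/(1+\gamma _n)\), which is the usual contraction behavior of the resolvent of a strongly monotone operator.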
Recently, Boikanyo and Morosanu [18] considered the sequence generated by the following algorithm:
$$\begin{aligned} x_{n+1}:=\alpha _n u+(1-\alpha _n)J^{\mathrm{T}}_{\beta _n}(x_n)+e_n, \end{aligned}$$(2)
where \(x_0,u \in H\), \(\alpha _n \in ]0,1[\), \(\beta _n \in ]0,+ \infty [\), for all \(n \ge 0\). They showed that if \(T^{-1}(0) \ne \emptyset \) and \(\alpha _n \rightarrow 0\), \(\beta _n \rightarrow +\infty \), \(\sum _{n=0}^{\infty } \alpha _n = \infty \) and either \(\frac{\Vert e_n \Vert }{\alpha _n} \rightarrow 0\) or \(\sum _{n=0}^{\infty } \Vert e_n \Vert < \infty \), then \((x_n)\) converges strongly to an element of \(T^{-1}(0)\) which is nearest to u. It is easy to see that algorithms (1) and (2) are in fact equivalent (see, e.g., [18]). For more information on PPA, we refer the reader to [2, 8, 16, 17, 19, 20], and the references therein.
3 Main Results
In this section, we use contractions as the Tikhonov regularization of the resolvent \(J^{\mathrm{T}}_{c}\) by considering the mapping \(V_t\) defined by
$$\begin{aligned} V_t(x):=J^{\mathrm{T}}_{c}((1-t)x+tu+e), \quad x \in H, \end{aligned}$$(3)
where \(t \in ]0,1[\), \(c > 0\) and \(u,e \in H\) are fixed. Since \(J^{\mathrm{T}}_{c}\) is nonexpansive, it is easy to see that \(V_t\) is a contraction. By the Banach contraction principle, \(V_t\) has a unique fixed point, which is denoted by \(v_t\). Hence,
$$\begin{aligned} v_t=J^{\mathrm{T}}_{c}((1-t)v_t+tu+e). \end{aligned}$$(4)
In the sequel, we denote \(F:=T^{-1}(0)\) and \(P_F\) denotes the metric projection map of H onto F.
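For the toy operator \(Tx=x\) on \(\mathbb {R}\) (an illustration, not from the paper), the fixed point \(v_t\) of \(V_t\) can be computed in closed form: \(v_t=((1-t)v_t+tu+e)/(1+c)\) gives \(v_t=(tu+e)/(t+c)\). A sketch of the Banach fixed-point iteration converging to this value:

```python
# Toy check (T(x) = x on R): the contraction V_t(x) = J_c((1-t)x + t*u + e),
# with J_c(y) = y/(1+c), has the closed-form fixed point v_t = (t*u + e)/(t + c).
t, c, u, e = 0.3, 2.0, 1.0, 0.05

def V(x):
    # contraction factor is (1 - t)/(1 + c) < 1
    return ((1 - t) * x + t * u + e) / (1 + c)

x = 0.0
for _ in range(200):  # Banach fixed-point iteration
    x = V(x)

v_closed = (t * u + e) / (t + c)
print(abs(x - v_closed))  # essentially zero
```

The contraction factor here is \((1-t)/(1+c)\approx 0.23\), so the iteration converges geometrically to \(v_t\).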
3.1 The Regularization Method
In the following theorem, we consider the algorithm (5) below, and we show that the sequence \((v_n)\) converges strongly to \(P_Fu\). We provide also a necessary and sufficient condition for the zero set of T to be nonempty.
Theorem 3.1
Let \(T:D(T) \subseteq H \rightrightarrows H\) be a maximal monotone operator. Let \(u \in H\), \((t_n)_{n=1}^{\infty } \subset ]0,1[\), \((c_n)_{n=1}^{\infty } \subset ]0,\infty [\) and \((e_n)_{n=1}^{\infty } \subset H\). Then the following statements hold:
(i) For every \(n \in \mathbb {N}\), there exists a unique \(v_n \in H\) such that
$$\begin{aligned} v_n:=J^{\mathrm{T}}_{c_n}((1-t_n)v_n+t_n u+e_n). \end{aligned}$$(5)
(ii) If either one of the following two conditions holds, then \(F:=T^{-1}(0) \ne \emptyset \) if and only if \(\underset{n \rightarrow +\infty }{\liminf } \Vert v_n \Vert < + \infty \), if and only if \((v_n )\) is bounded.
(a) \((\frac{e_n}{t_n})\) is bounded and \(\lim _{n \rightarrow \infty }c_n= \infty \);
(b) \(\lim _{n \rightarrow \infty }t_n=0\), \((\frac{e_n}{t_n})\) is bounded and \(\liminf _{n \rightarrow \infty } c_n \ge \beta \) for some \(\beta >0 \).
(iii) If either one of the following two conditions holds, and if \(F=T^{-1}(0) \ne \emptyset \), then \(s-\lim _{n \rightarrow +\infty }v_n = P_{F}u\).
(a) \(\lim _{n \rightarrow \infty }\frac{\Vert e_n\Vert }{t_n}=0\) and \(\lim _{n \rightarrow \infty }c_n= \infty \);
(b) \(\lim _{n \rightarrow \infty }t_n=0\), \(\lim _{n \rightarrow \infty }\frac{\Vert e_n\Vert }{t_n}=0\) and \(\liminf _{n \rightarrow \infty } c_n \ge \beta \) for some \(\beta >0 \).
Proof
(i) From (4), it follows that for every \(n \in \mathbb {N}\) there exists a unique \(v_n \in H\) such that (5) holds.
(ii) Assume that \(F:=T^{-1}(0) \ne \emptyset \) and let \(p \in T^{-1}(0)\). Since \(J^{\mathrm{T}}_{c_n}(p)=p\), from (5) and the nonexpansiveness of the resolvent operator, we have:
$$\begin{aligned} \Vert v_{n}-p\Vert \le (1-t_n) \Vert v_n -p\Vert + t_n \Vert u-p \Vert + \Vert e_n \Vert . \end{aligned}$$(6)
Hence, \(\Vert v_n -p \Vert \le \Vert u-p \Vert + \frac{\Vert e_n \Vert }{t_n}\), and since \((\frac{e_n}{t_n})\) is bounded, this shows that \((v_n)\) is bounded and hence \(\liminf _{n \rightarrow +\infty } \Vert v_n \Vert < + \infty \).
Conversely, assume that \(\liminf _{n \rightarrow +\infty } \Vert v_n \Vert < + \infty \). Then, there exists a subsequence (n(k)) of \(\mathbb {N}\) and some \(v_{\infty } \in H\) such that \(v_{n(k)} \rightharpoonup v_{\infty }\). Now for every \(x \in D(T)\) and \(y \in T(x)\), by the monotonicity of T, we get:
$$\begin{aligned} \Big \langle \frac{t_{n(k)}(u-v_{n(k)})+e_{n(k)}}{c_{n(k)}}-y, v_{n(k)}-x \Big \rangle \ge 0. \end{aligned}$$(7)
By letting \(k \rightarrow \infty \) in (7), we have \(\langle 0-y,v_{\infty }-x \rangle \ge 0\), and therefore, by the maximality of T, we conclude that \(v_{\infty } \in D(T)\) and \(0 \in T(v_{\infty })\). Hence, \(T^{-1}(0) \ne \emptyset \). The above proof shows that \(T^{-1}(0) \ne \emptyset \) if and only if \((v_n )\) is bounded, and this completes the proof of (ii).
(iii) It follows from (ii) that \(F=T^{-1}(0) \ne \emptyset \) and \((v_n)\) is bounded. Let \(p \in T^{-1}(0)\). By the monotonicity of T, we have: \(\langle \frac{t_{n}(u-v_{n})+e_{n}}{c_{n}}-0, v_{n}-p \rangle \ge 0\) and hence \( \langle v_n-p,v_n - u \rangle \le \langle \frac{e_n}{t_n},v_n-p \rangle .\) Therefore,
$$\begin{aligned} \Vert v_n-p \Vert ^2 \le \langle u-p, v_n-p \rangle + \Big \langle \frac{e_n}{t_n}, v_n-p \Big \rangle . \end{aligned}$$(8)
Since \((v_n)\) is bounded, there exists a subsequence (n(k)) of \(\mathbb {N}\) and \({\overline{v}} \in H\) such that \(v_{n(k)} \rightharpoonup {\overline{v}}\), and
Now for every \((x,y) \in G(T)\):
Letting \(k \longrightarrow \infty \) in the above inequality, we get \(\langle 0-y,{\overline{v}}-x \rangle \ge 0\), and hence, by the maximality of T, we conclude that \({\overline{v}} \in D(T)\) and \(0 \in T({\overline{v}})\). Also from (8) we get:
$$\begin{aligned} \limsup _{n \rightarrow \infty } \Vert v_n-p \Vert ^2 \le \langle u-p, {\overline{v}}-p \rangle . \end{aligned}$$(10)
For \(q=P_{F}u\), since \({\overline{v}} \in T^{-1}(0)\), we have: \(\langle {\overline{v}}-q,u-q \rangle \le 0.\) Now replacing p with q in (10), and using the above inequality, we conclude that \( \limsup _{n \rightarrow \infty } \Vert v_n -q \Vert =0\) and hence \(s-\lim _{n \rightarrow +\infty }v_n =q= P_{F}u\). This completes the proof. \(\square \)
Remark 3.1
In his Theorem 3.1, Xu [16] claims that the strong convergence in this theorem is uniform with respect to \(c > 0\). However, his proof is not correct, because the choice of \(v_t\) in this theorem depends on the choice of c, and in fact the result is not true, as the following example shows. It becomes true if the condition \(c> 0\) is replaced by \(c > \alpha \) for some \(\alpha >0\). Therefore, our Theorem 3.1 is a true extension of Xu [16, Theorem 3.1].
Example 3.1
Let \(T: \mathbb {R} \longrightarrow \mathbb {R}\) be defined by \(Tx=x\). Obviously \(F=T^{-1}(0)= \{ 0 \}\). Let \(u \in \mathbb {R}\). For every \(t \in ]0,1[\) and \(c >0\) there exists a unique \(v_{t}^{c} \in \mathbb {R}\) such that \(v_{t}^{c}=J_{c}^{\mathrm{T}}((1-t)v_{t}^{c}+tu)\) and hence \(v_{t}^{c}=\frac{t}{t+c}u\). Suppose \(u=1\). If \(t_n \longrightarrow 0\) and \(c_n= t_n\), then we have \(v_{t_n}^{c_n}=\frac{1}{2} \nrightarrow P_F(u)=0\).
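The failure of uniformity in \(c\) is easy to check numerically. In this sketch (using the data of Example 3.1: \(Tx=x\), \(u=1\), \(e=0\)), the coupled choice \(c_n=t_n\) pins every \(v_{t_n}^{c_n}\) at \(1/2\), no matter how small \(t_n\) becomes:

```python
# Numerical check of Example 3.1: for T(x) = x the regularized fixed point is
# v_t^c = t*u/(t + c).  With u = 1 and the coupling c_n = t_n -> 0, the value
# v_{t_n}^{c_n} = t_n/(2*t_n) = 1/2 for every n, so the net cannot converge
# to P_F(u) = 0: the convergence is not uniform in c > 0.
u = 1.0
values = []
for n in range(1, 6):
    t = 10.0 ** (-n)   # t_n -> 0
    c = t              # c_n coupled to t_n
    v = t * u / (t + c)
    values.append(v)
print(values)  # 0.5 each time
```

With \(c\) bounded away from zero (say \(c > \alpha > 0\)), the same formula gives \(v_t^c = tu/(t+c) \rightarrow 0 = P_F u\) as \(t \rightarrow 0\), in line with the corrected Theorem 3.1.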
Remark 3.2
Our regularization method introduced in Theorem 3.1 provides an affirmative answer to an open question raised by Boikanyo and Morosanu [18, p. 640], concerning the design of a PPA where \({\lim \nolimits _{{n \rightarrow \infty }}} \Vert e_n \Vert =0\) and the sequence \((c_n )\) is bounded.
3.2 Proximal Point Algorithm
In the following theorem, we give a necessary and sufficient condition for the zero set of T to be nonempty, in which case we show the strong convergence of the sequence generated by (1).
Theorem 3.2
Let \(T:D(T) \subseteq H \rightrightarrows H\) be a maximal monotone operator. For any fixed \(x_0,u \in H\), let the sequence \((x_n)\) be generated by
$$\begin{aligned} x_{n+1}:=J^{\mathrm{T}}_{c_n}(t_n u+(1-t_n)x_n+e_n), \end{aligned}$$(11)
for all \(n \ge 0\), where \(t_n \in ]0,1[\), \(c_n \in ]0,+\infty [\) and \(e_n \in H\) for all \(n \ge 0\). Then the following statements hold:
(i) If \(\lim _{n \rightarrow +\infty } c_n= +\infty \), \((e_n) \subset H\) is bounded and
$$\begin{aligned} \underset{{m \rightarrow \infty }}{\limsup } \sum _{k=1}^{m}(1-t_k)(1-t_{k+1})\cdots (1-t_m) < \infty , \end{aligned}$$(12)
then \(T^{-1}(0) \ne \emptyset \) if and only if \(\liminf _{n \rightarrow \infty }(\Vert x_{n+1} \Vert +\Vert x_n \Vert ) < \infty \) if and only if \((x_n)\) is bounded.
(ii) If \(F:=T^{-1}(0) \ne \emptyset \), then for all sequences \((e_n) \subset H\), \((t_n)_{n=1}^{\infty } \subset ]0,1[\) and \((c_n)_{n=1}^{\infty } \subset ]0,\infty [\) such that (12) holds, \(\lim _{n \rightarrow +\infty } c_n= +\infty \) and \(\lim _{n \rightarrow +\infty }\frac{\Vert e_n \Vert }{t_n}=0\), the sequence \((x_n)\) generated by (11) converges strongly to \(P_{F}u\).
Proof
(i) Assume that \(T^{-1}(0)\ne \emptyset \), and let \(p \in T^{-1}(0)\). Since \(J^{\mathrm{T}}_{c_m}(p)=p\), from (11) and the nonexpansiveness of the resolvent operator, we have:
$$\begin{aligned} \Vert x_{m+1}-p \Vert \le (1-t_m)\Vert x_m-p \Vert + t_m\Vert u-p \Vert + \Vert e_m \Vert . \end{aligned}$$(13)
Let \(M >0\) be such that \(\Vert e_n \Vert < M\) for all \(n \in \mathbb {N}\). Then from (13) we get: \(\Vert x_{m+1}-p\Vert \le (1-t_m)\Vert x_m-p\Vert + \Vert u-p \Vert +M\) for all \(m \ge 0\). Hence, by using induction, for all \(n \ge 0\), we get:
This shows that \((x_n)\) is bounded and so \(\liminf _{n \rightarrow \infty }(\Vert x_{n+1} \Vert +\Vert x_n \Vert ) < \infty \).
Conversely, assume that \(\liminf _{n \rightarrow \infty }(\Vert x_{n+1} \Vert +\Vert x_n \Vert ) < \infty \). Then there exists a subsequence (n(k)) of \(\mathbb {N}\) such that \(\Vert x_{n(k)+1} \Vert +\Vert x_{n(k)} \Vert \) is bounded. Therefore, there exists a subsequence (m(l)) of (n(k)) such that \(x_{m(l)+1} \rightharpoonup p\) as \(l \rightarrow +\infty \) for some \(p \in H\). Also \(( x_{m(l)} )\) is a bounded sequence. Now for every \(x \in D(T)\) and \(y \in T(x)\), by the monotonicity of T, we get:
$$\begin{aligned} \Big \langle \frac{t_{m(l)}(u-x_{m(l)+1})+(1-t_{m(l)})(x_{m(l)}-x_{m(l)+1})+e_{m(l)}}{c_{m(l)}}-y, x_{m(l)+1}-x \Big \rangle \ge 0. \end{aligned}$$(14)
Now letting \(m \rightarrow \infty \) in (14), we get \(\langle y-0,x-p \rangle \ge 0\), and hence, by the maximality of T, we conclude that \(p \in D(T)\) and \(0 \in T(p)\). Moreover, the above proof shows that \(T^{-1}(0) \ne \emptyset \) if and only if \((x_n)\) is bounded, and this completes the proof of (i).
(ii) Since (12) holds, there exists some \(M < \infty \) such that for all \(n \in \mathbb {N}\), \( \sum _{k=1}^{n}(1-t_k)(1-t_{k+1})\cdots (1-t_n) < M\).
We know from Theorems 3.1 and 3.2 (i) that the two sequences \((v_n)\) and \((x_n)\) are bounded. Also, for all \(m \in \mathbb {N}\),
$$\begin{aligned} \Vert x_{m+1}-v_{m+1} \Vert \le (1-t_m)\Vert x_m-v_m \Vert + \Vert v_{m+1}-v_m \Vert . \end{aligned}$$(15)
Let \(\varepsilon >0\) be arbitrary. Since by Theorem 3.1 we have \(s-\lim _{n \longrightarrow \infty } v_n=P_{F}u\), there exists some \(p \in \mathbb {N}\) such that for every \(i \ge p\) we have \(\Vert v_i- v_{i-1} \Vert < \varepsilon \). Hence, from (15) and by using induction, for every \(n \ge p\), we have:
By the above inequality, we have: \(\underset{n \longrightarrow \infty }{\limsup } \Vert x_{n+1}-v_{n+1} \Vert \le 0+ (1+M)\varepsilon .\) Since \(\varepsilon >0\) is arbitrary, from the above inequality we get \(\lim _{n \rightarrow \infty }\Vert x_n-v_n \Vert =0\), and hence, since \(s-\lim _{n \rightarrow \infty }v_n =P_{F}u\) (Theorem 3.1), we have \(s-\lim _{n \rightarrow \infty }x_n =P_{F}u\). This completes the proof. \(\square \)
Remark 3.3
If \((t_n)_{n=1}^{\infty } \subset ]0,1[\) and \(\liminf _{n\rightarrow +\infty }t_n \ge \alpha \) for some \(\alpha \in ]0,1[\), then clearly (12) holds. However, from (12) we cannot conclude that \(\liminf _{n\rightarrow +\infty }t_n \ge \alpha \) for some \(\alpha \in ]0,1[\). For example, if \(t_n= \frac{1}{\ln (n+2)}\), then the condition (12) holds, but \(\lim _{n \rightarrow \infty } t_n=0\). In fact, condition (12) holds for any sequence \((t_n)_{n=1}^{\infty } \subset ]0,1[\) that converges to zero as \(n\rightarrow \infty \) and satisfies \(t_n > \frac{1}{\ln (n+2)}\), e.g., \(t_n = \frac{1}{\ln (\ln (n+2))}\). Therefore, the following corollary is a direct consequence of Theorem 3.2.
Corollary 3.1
Let \(T:D(T) \subseteq H \rightrightarrows H\) be a maximal monotone operator. For any fixed \(x_0,u \in H\), let the sequence \((x_n)\) be generated by (11), for all \(n \ge 0\), where \(t_n \in ]0,1[\), \(c_n \in ]0,+\infty [\) and \(e_n \in H\) for all \(n \ge 0\). Then the following statements hold.
(i) If \(\liminf _{n\rightarrow +\infty }t_n \ge \alpha \) for some \(\alpha \in ]0,1[\), \(\lim _{n \rightarrow +\infty } c_n= +\infty \) and \((e_n) \subset H\) is bounded, then \(T^{-1}(0) \ne \emptyset \) if and only if \(\underset{n\rightarrow +\infty }{\liminf }(\Vert x_{n+1} \Vert +\Vert x_n \Vert ) < \infty \) if and only if \((x_n)\) is bounded.
(ii) If \(F:=T^{-1}(0) \ne \emptyset \), \(\underset{n\rightarrow +\infty }{\liminf }t_n \ge \alpha \) for some \(\alpha >0\), \(\lim _{n \rightarrow +\infty } c_n= +\infty \), and \(\lim _{n \rightarrow +\infty } \Vert e_n \Vert =0\), then the sequence \((x_n)\) generated by (11) converges strongly to \(P_{F}u\).
In the following corollary, we prove a similar theorem for the scheme (2) considered by Boikanyo and Morosanu [18].
Corollary 3.2
Let \(T:D(T) \subseteq H \rightrightarrows H\) be a maximal monotone operator. For any fixed \(y_0,u \in H\), let the sequence \((y_n)\) be generated by
$$\begin{aligned} y_{n+1}:=t_n u+(1-t_n)J^{\mathrm{T}}_{c_n}(y_n)+e_n, \end{aligned}$$(16)
for all \(n \ge 0\), where \(t_n \in ]0,1[\), \(c_n \in ]0,+\infty [\) and \(e_n \in H\) for all \(n \ge 0\). Then the following statements hold:
(i) If the condition (12) holds, \(\limsup _{n\rightarrow +\infty }t_n < 1\), \(\lim _{n \rightarrow +\infty } c_n= +\infty \) and \((e_n) \subset H\) is bounded, then \(T^{-1}(0) \ne \emptyset \) if and only if \(\liminf _{n\rightarrow +\infty }(\Vert y_{n+1} \Vert +\Vert y_n \Vert ) < \infty \) if and only if \((y_n)\) is bounded.
(ii) If \(F:=T^{-1}(0) \ne \emptyset \), \(\lim _{n \rightarrow +\infty } \Vert e_n \Vert =0\), \(\lim _{n\rightarrow +\infty }t_n = t\) for some \(t \in ]0,1[\), and \(\lim \nolimits _{n\rightarrow +\infty }c_n = +\infty \), then the sequence \((y_n)\) generated by (16) converges strongly to \(tu+(1-t)P_{F}u\).
(iii) If \(F:=T^{-1}(0) \ne \emptyset \), (12) holds, \(\lim _{n\rightarrow +\infty }t_n = 0\), \(\lim _{n \rightarrow +\infty }\frac{\Vert e_n \Vert }{t_n}=0\) and \(\lim _{n\rightarrow +\infty }c_n = +\infty \), then the sequence \((y_n)\) generated by (16) converges strongly to \(P_{F}u\).
Proof
(i) For every \(n \ge 0\), define
$$\begin{aligned} x_n:=J^{\mathrm{T}}_{c_n}(y_n); \end{aligned}$$(17)
then we have \( x_n=J_{c_n}^{\mathrm{T}}(t_{n-1} u+(1-t_{n-1})x_{n-1}+e_{n-1})\) for all \(n \ge 1\).
Also \(\liminf _{n\rightarrow +\infty }(\Vert y_{n+1} \Vert +\Vert y_n \Vert ) < \infty \) if and only if \(\liminf _{n\rightarrow +\infty }(\Vert x_{n+1} \Vert +\Vert x_n \Vert ) < \infty \). Hence, the conclusion follows from a proof similar to that of Theorem 3.2 (i).
(ii) and (iii) Let \((x_n)\) be generated by (17). Then, by a proof similar to that of Theorem 3.2 (ii), \(s-\lim _{n \rightarrow \infty }x_n =P_{F}u\). Hence, by
$$\begin{aligned} y_{n+1}=t_n u+(1-t_n)x_n+e_n, \end{aligned}$$
we conclude that \(s-\lim _{n \rightarrow \infty }y_n =tu+(1-t)P_{F}u\), and this completes the proof. \(\square \)
The following examples show that, without additional assumptions, the condition \(\lim _{n\rightarrow +\infty }c_n = +\infty \) cannot be replaced by the boundedness of \((c_n)\). In the first example \(T^{-1}(0)=\emptyset \), and in the second one \(T^{-1}(0) \ne \emptyset \).
Example 3.2
Let \(T: \mathbb {R} \longrightarrow \mathbb {R}\) be defined by \(Tx=1\). Obviously T is a maximal monotone operator. By taking \(c_n=1\), \( e_n=0\), \(t_n=\frac{1}{2}\) for all \(n \ge 0\), \(x_0=0\) and \(u=0\), we have \(x_{n+1}=\frac{1}{2} x_n-1\). (The sequence \((x_n)\) is generated by (11).) Then \((x_n)\) is a decreasing sequence, and obviously \(\lim _{n \rightarrow +\infty } x_n=-2\), but \(T^{-1}(0)=\emptyset \).
Example 3.3
Let \(T: \mathbb {R} \longrightarrow \mathbb {R}\) be defined by \(Tx=x+1\). Obviously T is a maximal monotone operator. By taking \(c_n=1\), \(e_n=0\), \(t_n=\frac{1}{2}\) for all \(n \ge 0\), \(x_0=1\) and \(u=0\) we have \(x_{n+1}=\frac{1}{4} x_n- \frac{1}{2}\) (the sequence \((x_n)\) is generated by (11)). Then \((x_n)\) is a decreasing sequence, and obviously \(\lim _{n \rightarrow +\infty } x_n=\frac{-2}{3}\), but \(T^{-1}(0)=\{ -1 \}\).
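Both examples are easy to reproduce numerically. The sketch below (an illustration, not part of the paper) iterates the two closed-form recursions and shows that, with bounded \(c_n\), the limit need not be a zero of T:

```python
# Numerical check of Examples 3.2 and 3.3: with bounded c_n, the PPA iterates
# can converge to a point that is NOT a zero of T.
def iterate(f, x0, n=200):
    x = x0
    for _ in range(n):
        x = f(x)
    return x

# Example 3.2: T(x) = 1, so J_1(y) = y - 1 and x_{n+1} = x_n/2 - 1 -> -2,
# although T^{-1}(0) is empty.
x = iterate(lambda s: 0.5 * s - 1.0, 0.0)
print(x)  # approximately -2.0

# Example 3.3: T(x) = x + 1, so J_1(y) = (y - 1)/2 and x_{n+1} = x_n/4 - 1/2
# -> -2/3, although T^{-1}(0) = {-1}.
y = iterate(lambda s: 0.25 * s - 0.5, 1.0)
print(y)  # approximately -2/3
```

In both cases the iteration is an affine contraction whose fixed point (\(-2\), respectively \(-2/3\)) differs from the zero set of T, confirming that \(c_n \rightarrow +\infty \) cannot simply be dropped.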
In the following theorem, we give another necessary and sufficient condition for the zero set of T to be nonempty, and show the strong convergence of the corresponding PPA.
Theorem 3.3
Let \(T:D(T) \subseteq H \rightrightarrows H\) be a maximal monotone operator. For any fixed \(x_0,y_0,z_0,u \in H\), let the sequences \((x_n)\), \((y_n)\) and \((z_n)\) be generated by
for all \(n \ge 0\), where \(t_n \in ]0,1[\), \(c_n \in [\gamma ,+\infty [\) for some \(\gamma \in ]0,+\infty [\), and \(e_n \in H\) for all \(n \ge 0\). Suppose that (12) holds. Then the following statements hold:
(i) If \((e_n)\) is bounded, then \((x_n)\) is bounded if and only if \((y_n)\) is bounded. Also, if \((y_n)\) is bounded, then \((z_n)\) is bounded too, but the converse is not true, as Remark 3.6 shows.
(ii) If \((e_n)\) is bounded, \(\lim _{n \longrightarrow \infty }t_n=0\) and \(\sum _{n=1}^{\infty } |t_n-t_{n-1}| < +\infty \), then \(F=T^{-1}(0)\ne \emptyset \) if and only if \((x_n)\) is bounded.
(iii) If \(\lim _{n \rightarrow \infty }t_n=0\), \(\lim _{n \rightarrow \infty }\frac{\Vert e_n\Vert }{t_n}=0\), \(\sum _{n=1}^{\infty } |t_n-t_{n-1}| < +\infty \) and \(F=T^{-1}(0)\ne \emptyset \), then \(s-\lim _{n \longrightarrow \infty }x_n=s-\lim _{n \longrightarrow \infty }y_n=s-\lim _{n \longrightarrow \infty }z_n=P_{F}(0)\).
Proof
(i) For all \(n \ge 0\), by using the fact that the resolvent operator is nonexpansive and by induction, we get:
Since \((e_n)\) is bounded, there exists some \(M_1 >0\) such that \(\Vert e_n \Vert \le M_1\), for all \(n \ge 0\) and \(\Vert u \Vert \le M_1\). Hence, from (18) we conclude that
Since (12) holds, the above inequality shows that \((x_n)\) is bounded if and only if \((y_n)\) is bounded.
Now for all \(m \ge 0\), by using the resolvent identity, and the fact that the resolvent operator is nonexpansive, we have:
If \((y_n)\) is bounded, then there exists some \(M_2 >0\) such that for all \(m \ge 0\):
From the inequalities (19) and (20), and by induction, for all \(n \ge 0\), we get:
Since (12) holds, the above inequality shows that \((z_n)\) is bounded.
(ii) Assume that \(T^{-1}(0) \ne \emptyset \). By a proof similar to that of Theorem 3.2(i), \((x_n)\) is bounded. Conversely, assume that \((x_n)\) is bounded. Then from (i), \((z_n)\) is bounded too. First, let us show that \((z_n)\) converges. Let \(M_3 > 0\) be such that \(\Vert z_n \Vert \le M_3\) for all \(n \ge 0\). Since the resolvent operator is nonexpansive, by induction, we have for all \(n \ge 0\):
From the above inequality, we have
We are going to show that \(\sum _{n=2}^{\infty }(1-t_1)(1-t_2) \cdots (1-t_n) < +\infty \) and that there exists some \(M_4 \in ]0,+\infty [\) such that
for all \(k \ge 1\). Then, we can conclude from (22) that \((z_n)\) is convergent. First, let us show that \(\sum _{n=2}^{\infty }(1-t_1)(1-t_2) \cdots (1-t_n) < +\infty \). For every \(l \in \mathbb {N}\), since \(\lim _{n\longrightarrow \infty } t_n=0\), there exists an integer \(n \in \mathbb {N}\) such that \(n > 2l\) and \(1-t_i < 1-t_{n-i+1}\) for all \(1 \le i \le l\). Then,
Therefore, by (12) and the above inequality, we get: \(\sum _{n=2}^{\infty }(1-t_1)(1-t_2) \cdots (1-t_n) < +\infty \).
Also, for all \(k \ge 1\) and all \(m > k\), there exists an integer \(N \in \mathbb {N}\) (large enough) such that \(1-t_{k+i}<1-t_{N-i+1}\) for all \(1 \le i \le m-k\) and hence
Therefore, by (12) and the above inequality, we conclude that there exists some \(M_4 \in ]0,+\infty [\) such that \(\sum _{n=k+1}^{\infty } (1-t_{k+1})(1-t_{k+2})\cdots (1-t_n) \le M_4\), for all \(k \ge 1\). Then (22) shows that \((z_n)\) is convergent. Let \(s-\lim _{n\longrightarrow \infty }z_n=p\). For every \((x,y) \in G(T)\), by the monotonicity of T, we have for all \(n \in \mathbb {N}\):
By letting \(n \longrightarrow \infty \) in the above inequality, we get \(\langle y- 0, x-p\rangle \ge 0\), and hence, by the maximality of T, \(p \in D(T)\) and \(0 \in T(p)\). So \(T^{-1}(0) \ne \emptyset \).
(iii) It follows from Theorem 3.1 (iii) that \(s-\lim _{n\longrightarrow \infty }v_n=P_{F}(0)\), where \(v_n=J_{c_n}^{\mathrm{T}}((1-t_n)v_n)\) for all \(n \ge 0\), i.e., (5) with \(u=0\) and \(e_n=0\).
For every \(\varepsilon >0\), there exists an integer \(n_0 \in \mathbb {N}\) such that for all \(m \ge n_0\): \(\Vert v_{m+1}-v_m \Vert \le \varepsilon \). First we show that \(s-\lim _{n \longrightarrow \infty }y_n=P_{F}(0)\).
By induction and the fact that the resolvent operator is nonexpansive, for all \(m \ge n_0\), we have:
Since (12) holds, there exists some \(M_5 \in ]0,+\infty [\) such that, for all \(n \ge 1\).
Hence, from (23) and (24), for all \(n \ge n_0\) we have \(\Vert y_{n+1}-v_{n+1} \Vert \le (1-t_n)(1-t_{n-1})\cdots (1-t_{n_0}) \Vert y_{n_0}-v_{n_0}\Vert +\varepsilon (M_5+1)\), and therefore, \(\limsup _{n\longrightarrow \infty }\Vert y_{n+1}-v_{n+1} \Vert \le \varepsilon (M_5+1)\), for every \(\varepsilon >0\). Hence, \(\lim _{n \longrightarrow \infty }\Vert y_n-v_n\Vert =0\). Since \(s-\lim _{n \longrightarrow \infty }v_n=P_{F}(0)\), we conclude that \(s-\lim _{n \longrightarrow \infty }y_n=P_{F}(0)\). Also, \(s-\lim _{n \longrightarrow \infty }z_n=P_{F}(0)\), since \((z_n)\) is a special case of \((y_n)\), with constant sequence \((c_n)\). Finally, we prove that \(s-\lim _{n \longrightarrow \infty }x_n=P_{F}(0)\).
Let \(\varepsilon >0\) be arbitrary. Since \(\lim _{n \longrightarrow \infty }\Vert e_n\Vert =\lim _{n \longrightarrow \infty }t_n=0\), there exists an integer \(N_0 \in \mathbb {N}\) such that for all \(k \ge N_0\), \(\Vert e_k \Vert < \varepsilon \) and \(t_k < \varepsilon \).
For all \(n \ge N_0\), from the inequality (18), we deduce that
It follows from the above inequality that:
and since \(\varepsilon >0\) is arbitrary, we conclude that \(\lim _{n\longrightarrow \infty }\Vert x_n-y_n\Vert =0\). Since \(s-\lim _{n\longrightarrow \infty }y_n=P_F(0)\), it follows that \(s-\lim _{n\longrightarrow \infty }x_n=P_F(0)\), and this completes the proof. \(\square \)
Remark 3.4
The above theorem provides another affirmative answer to the open question raised by Boikanyo and Morosanu [18, p. 640].
Remark 3.5
Theorem 3.3 also shows that the convergence of the iterative sequences \((x_n)\) and \((y_n)\) is equivalent, implying that the choice of u in the convex combination, as well as that of the error sequence \((e_n)\) (as long as it satisfies the assumptions of the theorem), is irrelevant to the convergence. However, they can of course be used as control parameters to speed up the convergence in each case.
Remark 3.6
The following example shows that the boundedness of the sequence \((z_n)\) in Theorem 3.3(i) does not imply that of the sequence \((y_n)\).
Example 3.4
Let \(H=\mathbb {R}\) and \(T:\mathbb {R} \longrightarrow \mathbb {R}\) be defined by \(Tx=1\). Suppose that \(\gamma =1\), \(c_n=n+1\) and \(t_n=\dfrac{1}{2}\) for all \(n\ge 1\). By choosing \(z_0=y_0=0\), we have
and so
Therefore, \((z_n)\) is bounded. Also, for all \(m \ge 1\):
Therefore,
and so the condition (12) holds. Also, all the assumptions of Theorem 3.2 hold. Thus, by Theorem 3.2(i), \((y_n)\) is not bounded, since \(T^{-1}(0)=\emptyset \).
On the other hand, all the assumptions of Theorem 3.3 are satisfied too. However, \((z_n)\) is bounded, but \((y_n)\) is not bounded.
4 Applications
4.1 Applications to Optimization
We can apply Theorems 3.1–3.3 and Corollary 3.1 to find a minimizer of a function f. Let H be a real Hilbert space and \(f:H \longrightarrow ]-\infty ,+\infty ]\) be a proper, convex and lower semicontinuous function. Then the subdifferential \(\partial f\) of f is the multivalued operator \(\partial f:H \rightrightarrows H\) defined for \(z \in H\) as follows:
$$\begin{aligned} \partial f(z):=\{ w \in H : f(y) \ge f(z)+\langle w,y-z \rangle \quad \forall y \in H \}. \end{aligned}$$
We know from [1] that the subdifferential of a proper, convex and lower semicontinuous function is maximal monotone and the zeroes of \(\partial f\) correspond to the minimizers of f. Therefore, the proximal point algorithm for \(\partial f\) provides a scheme for approximating a minimizer of f.
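As an illustrative sketch (an assumed example, not from the paper), take \(f(x)=\frac{1}{2}(x-3)^2\) on \(H=\mathbb {R}\), so that \(\partial f(x)=x-3\), the resolvent is the proximal map \(J^{\partial f}_{c}(y)=(y+3c)/(1+c)\), and \({\mathrm{argmin}} f=\{3\}\). Running a scheme of the form (11) with constant \(t_n\) (so that condition (12) holds), \(c_n \rightarrow \infty \) and zero errors drives the iterates toward the minimizer:

```python
# Illustrative sketch (assumed example): minimize f(x) = 0.5*(x - 3)^2 via the
# regularized PPA with T = subdifferential of f.  For this f the resolvent is
# the proximal map J_c(y) = argmin_x { f(x) + (x - y)^2/(2c) } = (y + 3c)/(1 + c).
def prox(y, c):
    return (y + 3.0 * c) / (1.0 + c)

x, u = 0.0, 10.0
for n in range(1, 200):
    t_n = 0.5        # constant step: liminf t_n >= 1/2, so (12) holds
    c_n = float(n)   # c_n -> infinity
    e_n = 0.0        # no computational error
    x = prox(t_n * u + (1 - t_n) * x + e_n, c_n)

print(x)  # close to 3, the unique minimizer (= P_F u, since F = {3})
```

Since \(F=\{3\}\) is a singleton, \(P_F u=3\) regardless of the anchor u, and the error \(|x_{n+1}-3|\) decays like \(\mathcal {O}(1/c_n)\) in this one-dimensional example.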
In this section, suppose H is a real Hilbert space and D is a nonempty, closed and convex subset of H.
Theorem 4.1
Let \(f:H \longrightarrow ]-\infty ,+\infty ]\) be a proper, convex and lower semicontinuous function. For any \(x_0,u \in H\), let the sequence \((x_n)\) be generated by (11) for \(T= \partial f\), where \(u \in H\), \((t_n)_{n=1}^{\infty } \subset ]0,1[\) and \((c_n)_{n=1}^{\infty } \subset ]0,\infty [\) such that (12) holds and \(c_n \rightarrow +\infty \) as \(n\rightarrow +\infty \). Suppose that \((e_n)_{n=1}^{\infty } \subset H\) is a sequence with \(\lim _{n \rightarrow +\infty }\frac{\Vert e_n \Vert }{t_n}=0\). If \((x_n)\) is bounded, then \({\mathrm{argmin}} f \ne \emptyset \) and \((x_n)\) converges strongly to \(P_{F}u\), the metric projection of u onto \(F:=\partial f^{-1}(0)= {\mathrm{argmin}} f\).
Proof
This follows from Theorem 3.2. \(\square \)
Theorem 4.2
Let \(f:H \longrightarrow ]-\infty ,+\infty ]\) be a proper, convex and lower semicontinuous function. For any \(x_0,u \in H\), let the sequence \((x_n)\) be generated by (11) for \(T= \partial f\), where \(u \in H\), \((t_n)_{n=1}^{\infty } \subset ]0,1[\) with \(\liminf _{n\rightarrow +\infty }t_n \ge \alpha \) for some \(\alpha >0\), and \((c_n)_{n=1}^{\infty } \subset ]0,\infty [\) with \(c_n \rightarrow +\infty \) as \(n\rightarrow +\infty \). Suppose that \((e_n)_{n=1}^{\infty } \subset H\) is a sequence with \(\lim _{n \rightarrow +\infty } \Vert e_n \Vert =0\). If \((x_n)\) is bounded, then \({\mathrm{argmin}} f \ne \emptyset \) and \((x_n)\) converges strongly to \(P_{F}u\), the metric projection of u onto \(F:=\partial f^{-1}(0)= {\mathrm{argmin}} f\).
Proof
This follows from Corollary 3.1. \(\square \)
Theorem 4.3
Let \(f:H \longrightarrow ]-\infty ,+\infty ]\) be a proper, convex and lower semicontinuous function. For any \(x_0,u \in H\), let the sequence \((x_n)\) be generated by (1) for \(T= \partial f\), where \(u \in H\), \((t_n)_{n=1}^{\infty } \subset ]0,1[\) and \((c_n)_{n=1}^{\infty } \subset ]0,\infty [\) such that (12) holds, \(\lim _{n \rightarrow \infty }t_n=0\) and \(\lim _{n \rightarrow \infty }\frac{\Vert e_n\Vert }{t_n}=0\) and \(\sum _{n=1}^{\infty } |t_n-t_{n-1}| < +\infty \). If \((x_n)\) is bounded, then \(s-\lim _{n \longrightarrow \infty }x_n=P_{F}(0)\), where \(F:=\partial f^{-1}(0)= {\mathrm{argmin}} f\) is nonempty.
Proof
This follows from Theorem 3.3. \(\square \)
Subgradient Proximal Algorithm
Initialization: select \(x_0 \in H\)
Iterative step: the sequence \((w_n) \subset H\) is calculated by
where \(z_n\) is such that
for all \(n \ge 0\), \(t_n \rightarrow \alpha \in ]0,1[\), and B(x, r) denotes the open ball in H centered at x with radius r. Now we can prove the following result which provides an approximation to \(P_Fu\), by using Theorem 3.2.
Theorem 4.4
Let H be a real Hilbert space and \(f:H \longrightarrow ]-\infty ,+\infty ]\) be a proper, convex and lower semicontinuous function. For any \(x_0,u \in H\), let \(\varepsilon >0\) and let the sequence \((x_n)\) be generated by (11) with \(T= \partial f\), where \(t_n \in ]0,1[\) and \(c_n \in ]0,+\infty [\) for all \(n \ge 0\). Also let the sequence \((w_n)\) be generated by the iterative procedure
where \(z_n\) is such that
for all \(n \ge 0\). If \(F:=(\partial f)^{-1}(0)= {\mathrm{argmin}} f \ne \emptyset \), \(t_n \rightarrow \alpha \in ]0,1[\), \(c_n \rightarrow +\infty \), and \(e_n \rightarrow 0\) as \(n \rightarrow +\infty \), then there exists some \(N_0 \in \mathbb {N}\) such that
for all \(n \ge N_0\).
Proof
For all \(n \in \mathbb {N}\), from
we deduce that there exists \(\xi _{n} \in \partial f(x_{n+1})\) such that
Letting \(n \longrightarrow \infty \) in the above equality, by Theorem 3.2 we get:
Therefore, there exists some \(N_1 \in \mathbb {N}\) such that for all \(n \ge N_1\)
On the other hand, since by the hypothesis we have
it follows that there exists some \(N_0 \ge N_1\) such that for all \(n\ge N_0\)
and
Then for all \(n \ge N_0\)
and this completes the proof. \(\square \)
4.2 Applications to Variational Inequalities
Let \(T_0 :D \longrightarrow H\) be a single-valued, monotone and hemicontinuous (i.e., continuous from line segments in D to the weak topology of H) operator. Suppose that \(N_{D}(z)\) is the normal cone to D at z:
and \(T: H \rightrightarrows H\) be defined by:
The maximal monotonicity of such a multifunction T was proved by Rockafellar [1]. The relation \(0 \in T(z)\) reduces to \(-T_0(z) \in N_D(z)\), or the so-called variational inequality:
Let \(VI(T_0,D)\) denote the solution set of the above variational inequality. If D is a cone, then \(VI(T_0,D)\) is the set of all \(z \in D\) such that \(-T_0(z) \in D^{\circ }\) (the polar of D) and \(\langle z,T_0(z) \rangle =0\); the problem of finding such a z is an important instance of the well-known complementarity problem of mathematical programming. We can then apply Theorems 3.2 and 3.3 and Corollary 3.1 to find a solution of the variational inequality for a single-valued, monotone and hemicontinuous operator \(T_0\).
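The cone case above can be checked numerically. A minimal sketch, assuming (purely for illustration) the monotone affine operator \(T_0(z)=z+q\) on the cone \(D=\mathbb {R}^2_+\): here \(D^{\circ }=\mathbb {R}^2_-\), so the characterization reduces to the standard complementarity conditions \(z\ge 0\), \(T_0(z)\ge 0\), \(\langle z,T_0(z)\rangle =0\).

```python
import numpy as np

# Hedged illustration of the cone case: for D = R_+^n the polar is
# D° = R_-^n, so z solves VI(T_0, D) iff
#     z >= 0,   T_0(z) >= 0,   <z, T_0(z)> = 0
# (the classical complementarity problem). We pick the monotone affine
# operator T_0(z) = z + q; its VI solution over R_+^n is z* = max(-q, 0)
# componentwise. Both T_0 and q are illustrative choices, not from the paper.

q = np.array([-1.0, 2.0])

def T0(z):
    return z + q  # monotone affine operator

z_star = np.maximum(-q, 0.0)   # candidate solution z* = (1, 0)

# verify the three complementarity conditions
assert np.all(z_star >= 0)                  # z in D
assert np.all(T0(z_star) >= 0)              # -T_0(z) in D°
assert abs(z_star @ T0(z_star)) < 1e-12     # <z, T_0(z)> = 0
print(z_star)
```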
Theorem 4.5
Let \(T_0 :D \longrightarrow H\) be a single-valued, monotone and hemicontinuous operator, and let \(N_D(z)\) be the normal cone defined above. For any \(x_0,u \in H\), let the sequence \((x_n)\) be generated by (11) for T defined as above, where \((t_n)_{n=1}^{\infty } \subset ]0,1[\), \((c_n)_{n=1}^{\infty } \subset ]0,\infty [\), (12) holds and \(c_n \rightarrow +\infty \) as \(n\rightarrow +\infty \). Suppose that \((e_n)_{n=1}^{\infty } \subset H\) is a sequence with \(\lim _{n \rightarrow +\infty }\frac{\Vert e_n \Vert }{t_n}=0\). If \((x_n)\) is bounded, then \(VI(T_0,D) \ne \emptyset \) and \((x_n)\) converges strongly to an element of \(VI(T_0,D)\).
Proof
Since T is a maximal monotone operator and \(VI(T_0,D)=T^{-1}(0)\), then the conclusion follows from Theorem 3.2. \(\square \)
Theorem 4.6
Let \(T_0 :D \longrightarrow H\) be a single-valued, monotone and hemicontinuous operator, and let \(N_D(z)\) be the normal cone defined above. For any \(x_0,u \in H\), let the sequence \((x_n)\) be generated by (11) for T defined as above, where \((t_n)_{n=1}^{\infty } \subset ]0,1[\) with \(\liminf _{n\rightarrow +\infty }t_n \ge \alpha \) for some \(\alpha >0\), and \((c_n)_{n=1}^{\infty } \subset ]0,\infty [\) with \(c_n \rightarrow +\infty \) as \(n\rightarrow +\infty \). Suppose that \((e_n)_{n=1}^{\infty } \subset H\) is a sequence with \(\lim _{n \rightarrow +\infty } \Vert e_n \Vert =0\). If \((x_n)\) is bounded, then \(VI(T_0,D) \ne \emptyset \) and \((x_n)\) converges strongly to an element of \(VI(T_0,D)\).
Proof
The conclusion follows from Corollary 3.1. \(\square \)
Theorem 4.7
Let \(T_0 :D \longrightarrow H\) be a single-valued, monotone and hemicontinuous operator, and let \(N_D(z)\) be the normal cone defined above. For any \(x_0,u \in H\), let the sequence \((x_n)\) be generated by (1) for T defined as above, where \((t_n)_{n=1}^{\infty } \subset ]0,1[\) and \((c_n)_{n=1}^{\infty } \subset ]0,\infty [\). Suppose that (12) holds, \(\lim _{n \rightarrow \infty }t_n=0\) and \(\lim _{n \rightarrow \infty }\frac{\Vert e_n\Vert }{t_n}=0\) and \(\sum _{n=1}^{\infty } |t_n-t_{n-1}| < +\infty \). If \((x_n)\) is bounded, then \(VI(T_0,D) \ne \emptyset \) and \((x_n)\) converges strongly to an element of \(VI(T_0,D)\).
Proof
The conclusion follows from Theorem 3.3. \(\square \)
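To illustrate Theorems 4.5–4.7, one needs the resolvent of \(T=T_0+N_D\). The displayed formula for scheme (11) is not reproduced in this excerpt, so the sketch below assumes a Xu-type iteration \(x_{n+1}=J_{c_n}(t_n u+(1-t_n)x_n+e_n)\) and the illustrative choices \(T_0(z)=z+q\) on \(D=\mathbb {R}^2_+\), for which the resolvent has the closed form \(J_c(y)=P_D\big ((y-cq)/(1+c)\big )\) (obtained by solving \(y\in z+cT_0(z)+cN_D(z)\)).

```python
import numpy as np

# Hedged sketch: scheme (11) is not displayed in this excerpt, so we assume
# a Xu-type regularized PPA applied to T = T_0 + N_D with T_0(z) = z + q
# monotone affine and D = R_+^2. Solving y in z + c*T_0(z) + c*N_D(z) gives
#     J_c(y) = P_D((y - c*q) / (1 + c)),   P_D = componentwise max(., 0).
# The VI solution over R_+^2 for q = (-1, 2) is z* = (1, 0).

q = np.array([-1.0, 2.0])
u = np.array([3.0, 3.0])
x = np.zeros(2)   # x_0

def J(y, c):
    """Resolvent of T = T_0 + N_D in closed form (projection onto R_+^2)."""
    return np.maximum((y - c * q) / (1.0 + c), 0.0)

for n in range(1, 300):
    t_n, c_n = 1.0 / (n + 1), float(n)       # t_n -> 0, c_n -> +infinity
    e_n = np.full(2, 1.0 / (n + 1)**2)       # errors with ||e_n||/t_n -> 0
    x = J(t_n * u + (1.0 - t_n) * x + e_n, c_n)

print(np.round(x, 3))   # approaches the VI solution (1, 0)
```

The bounded iterates converge strongly to the unique point of \(VI(T_0,D)\), as the theorems above predict in this toy setting.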
5 Conclusions
In this paper, we considered the regularized proximal point algorithms with errors (5) and (11), under conditions on the parameters and the error sequence that are more general than those considered by previous authors. We provided necessary and sufficient conditions for the zero set of the operator to be nonempty and, in this case, showed the strong convergence of the scheme to a zero of the operator. We also presented some applications of our results to optimization and variational inequalities. In particular, we provided two affirmative answers to an open question raised by Boikanyo and Morosanu [18, p. 640] concerning the design of a PPA where the error sequence tends to zero and a parameter sequence remains bounded. Moreover, since verifying that the zero set of the operator is nonempty might be a difficult task, our criterion, which involves only the boundedness of the iterates, may provide a very useful and convenient tool for this task, especially from a computational point of view. As a future direction for research, since numerous other PPAs have been developed and their convergence studied by many authors, it might be interesting to investigate the possibility of applying the ideas and methods developed in this paper to these other PPAs, as well as extending these methods to more general spaces, such as Banach and metric spaces. In this connection, we mention in particular the recent work of Wang et al. [21], as well as Cui and Ceng [22], dealing with the so-called contraction proximal point algorithm.
References
Rockafellar, R.T.: On the maximal monotonicity of subdifferential mappings. Pac. J. Math. 33, 209–216 (1970)
Rockafellar, R.T.: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14, 877–898 (1976)
Baillon, J.B.: Un théorème de type ergodique pour les contractions non linéaires dans un espace de Hilbert. C. R. Acad. Sci. Paris Sér. A-B 280, A1511–A1514 (1975)
Borwein, J.M.: Fifty years of maximal monotonicity. Optim. Lett. 4, 473–490 (2010)
Brézis, H.: Opérateurs Maximaux Monotones et Semi-Groupes de Contractions dans les Espaces de Hilbert, North-Holland Mathematics Studies, vol. 5. North-Holland, Amsterdam (1973)
Morosanu, G.: Nonlinear Evolution Equations and Applications. Reidel, Dordrecht (1988)
Pardalos, P.M., Rassias, T.M., Khan, A.A.: Nonlinear Analysis and Variational Problems, Springer Optimization and Its Applications, vol. 35. Springer, New York (2010)
Djafari Rouhani, B., Khatibzadeh, H.: On the proximal point algorithm. J. Optim. Theory Appl. 137, 411–417 (2008)
Djafari Rouhani, B.: Ergodic theorems for nonexpansive sequences in Hilbert spaces and related problems. Ph.D. Thesis, Yale University, part I, pp. 1–76 (1981)
Djafari Rouhani, B.: Asymptotic behaviour of quasi-autonomous dissipative systems in Hilbert spaces. J. Math. Anal. Appl. 147, 465–476 (1990)
Djafari Rouhani, B.: Asymptotic behaviour of almost nonexpansive sequences in a Hilbert space. J. Math. Anal. Appl. 151, 226–235 (1990)
Djafari Rouhani, B., Moradi, S.: Strong convergence of two proximal point algorithms with possible unbounded error sequences. J. Optim. Theory Appl. 172, 222–235 (2017)
Song, Y., Yang, C.: A note on a paper “A regularization method for the proximal point algorithm”. J. Glob. Optim. 43, 171–174 (2009)
Wang, F.: A note on the regularized proximal point algorithm. J. Glob. Optim. 50, 531–535 (2011)
Yao, Y., Shahzad, N.: Strong convergence of a proximal point algorithm with general errors. Optim. Lett. 6, 621–628 (2012)
Xu, H.K.: A regularization method for the proximal point algorithm. J. Glob. Optim. 36, 115–125 (2006)
Lehdili, N., Moudafi, A.: Combining the proximal algorithm and Tikhonov regularization. Optimization 37, 239–252 (1996)
Boikanyo, O.A., Morosanu, G.: A proximal point algorithm converging strongly for general errors. Optim. Lett. 4, 635–641 (2010)
Boikanyo, O.A., Morosanu, G.: Modified Rockafellar’s algorithm. Math. Sci. Res. J. 13, 101–122 (2009)
Güler, O.: On the convergence of the proximal point algorithm for convex minimization. SIAM J. Control Optim. 29, 403–419 (1991)
Wang, Y., Wang, F., Xu, H.K.: Error sensitivity for strongly convergent modifications of the proximal point algorithm. J. Optim. Theory Appl. 168, 901–916 (2016)
Cui, H., Ceng, L.: Convergence of over-relaxed contraction-proximal point algorithm in Hilbert spaces. Optimization 66, 793–809 (2017)
Acknowledgements
The authors are grateful to the editor and the referees for valuable suggestions leading to the improvement of the paper.
Communicated by Qamrul Hasan Ansari.
Djafari Rouhani, B., Moradi, S. Strong Convergence of Regularized New Proximal Point Algorithms. J Optim Theory Appl 181, 864–882 (2019). https://doi.org/10.1007/s10957-019-01497-9