Abstract
In this paper, we investigate the generalized Halpern iteration for computing fixed points of nonexpansive mappings in the Hilbert space setting, and prove strong convergence under new control conditions on the parameters. The convergence results generalize existing ones in the literature. We also present a convergence rate analysis for the generalized Halpern iteration with a particular choice of parameters. Finally, we give an application to the split feasibility problem and two numerical examples to illustrate the performance of the algorithm.
1 Introduction
Many problems arising in different areas of mathematics, such as optimization, variational analysis, and signal recovery, can be modeled by the fixed point problem
$$\begin{aligned} \text {find } x\in K \text { such that } Tx=x, \end{aligned}$$
where \(T:K\rightarrow K\) is a mapping and K is a nonempty subset of a Hilbert space H. The solutions to this problem are called fixed points of the mapping T. We denote the set of fixed points of T by
$$\begin{aligned} F(T):=\{x\in K\mid Tx=x\}. \end{aligned}$$
A mapping \(T:K\rightarrow K\) is said to be nonexpansive if
$$\begin{aligned} \Vert Tx-Ty\Vert \le \Vert x-y\Vert \end{aligned}$$
for all \(x,y\in K\).
A significant body of work on iteration methods for fixed point problems of nonexpansive mappings has accumulated in the literature (see, for example, [1, 3, 5, 9, 11]). In particular, one of the most famous fixed point methods is the Krasnosel’skiĭ–Mann iteration from [14, 17, 20], which starts at some given point \(x_0\in K\) and uses the recursion
$$\begin{aligned} x_{n+1}=(1-\lambda _n)x_n+\lambda _n Tx_n, \quad n\ge 0, \end{aligned}$$
(1.1)
for some suitably chosen scalars \(\lambda _n\in [0,1]\). The iterative sequence \(\{x_n\}\) converges weakly to a fixed point of T provided that F(T) is nonempty and \(\lambda _n\) satisfies
$$\begin{aligned} \sum _{n=0}^\infty \lambda _n(1-\lambda _n)=\infty . \end{aligned}$$
Another famous fixed point method is the Halpern iteration, first introduced in 1967 by Halpern [10] in the framework of Hilbert spaces. The recursion formula of the Halpern iteration is as follows: \(x_0\in K\),
$$\begin{aligned} x_{n+1}=\delta _n u+(1-\delta _n)Tx_n, \quad n\ge 0, \end{aligned}$$
(1.2)
where \(u\in K\) is an arbitrary (but fixed) element of K, and \(\delta _n\in [0,1]\). Halpern proved the strong convergence of \(\{x_n\}\) to a fixed point of T for \(\delta _n:=n^{-a}\), \(a\in (0,1)\). In 1977, Lions [15] improved Halpern’s result by proving strong convergence of \(\{x_n\}\) to a fixed point of T if the real sequence \(\{\delta _n\}\) satisfies the following conditions:
-
(A1)
\(\lim _{n\rightarrow \infty }\delta _n=0\) and \(\sum _{n=0}^\infty \delta _n=\infty \);
-
(A2)
\(\lim _{n\rightarrow \infty }\frac{|\delta _n-\delta _{n-1}|}{\delta _n^2}=0\).
It is easy to observe that both Halpern’s and Lions’ conditions on the sequence \(\{\delta _n\}\) exclude the canonical choice \(\delta _n=\frac{1}{n+1}\). To overcome this, Wittmann [28] proved the strong convergence of \(\{x_n\}\) to a fixed point of T if \(\{\delta _n\}\) satisfies condition (A1) and the following condition (A3):
-
(A3)
\(\sum _{n=0}^\infty |\delta _{n+1}-\delta _n|<\infty \).
In 2002, Xu [21, 29, 30] suggested the following condition (A4) instead of the conditions (A2) or (A3):
-
(A4)
\(\lim _{n\rightarrow \infty }\frac{|\delta _n-\delta _{n-1}|}{\delta _n}=0\).
He proved the strong convergence of \(\{x_n\}\) under the control conditions (A1) and (A4). Xu also showed that conditions (A2) and (A4) are not comparable.
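As a quick numerical illustration of the Halpern scheme \(x_{n+1}=\delta _n u+(1-\delta _n)Tx_n\) under condition (A1) with the canonical choice \(\delta _n=\frac{1}{n+1}\), the following Python sketch can be used; the mapping T (projection onto the interval [0, 1] on the real line), the anchor u, and the starting point are our own toy choices, not from the paper. The iterates should approach \(P_{F(T)}u\), the point of the fixed point set closest to the anchor:

```python
def halpern(T, u, x0, n_iter):
    """Halpern iteration x_{n+1} = d_n*u + (1-d_n)*T(x_n) with d_n = 1/(n+1)."""
    x = x0
    for n in range(n_iter):
        d = 1.0 / (n + 1)
        x = d * u + (1 - d) * T(x)
    return x

# T: projection onto [0, 1], a nonexpansive map on R with F(T) = [0, 1];
# with anchor u = 3, the iterates should approach P_{F(T)}(3) = 1.
T = lambda s: min(max(s, 0.0), 1.0)
x = halpern(T, u=3.0, x0=-5.0, n_iter=20000)
print(x)  # close to 1.0
```

Here the error behaves like \(2\delta _n\), consistent with the slow (but strong) convergence typical of Halpern-type anchoring.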
Chidume and Chidume [7] and Suzuki [24] proposed the following simpler modification of the Halpern iteration method: \(x_0\in K\),
$$\begin{aligned} x_{n+1}=\delta _n u+(1-\delta _n)\big ((1-\alpha )x_n+\alpha Tx_n\big ), \quad n\ge 0, \end{aligned}$$
(1.3)
where \(u\in K\) is an arbitrary (but fixed) element of K, and \(\alpha \in (0,1)\) is a constant. They proved the strong convergence of the iteration (1.3) under condition (A1) alone. Kim and Xu [13] proposed the following iteration method: \(x_0\in K\),
$$\begin{aligned} y_n=\beta _n x_n+(1-\beta _n)Tx_n, \qquad x_{n+1}=\alpha _n u+(1-\alpha _n)y_n, \quad n\ge 0, \end{aligned}$$
(1.4)
where \(\{\alpha _n\}\) and \(\{\beta _n\}\) are two sequences in (0, 1). Assume that the following conditions are satisfied:
-
(B1)
\(\lim _{n\rightarrow \infty }\alpha _n= 0, \sum _{n=0}^\infty \alpha _n=\infty \) and \(\sum _{n=0}^\infty |\alpha _{n+1}-\alpha _n|<\infty \);
-
(B2)
\(\lim _{n\rightarrow \infty }\beta _n= 0, \sum _{n=0}^\infty \beta _n=\infty \) and \(\sum _{n=0}^\infty |\beta _{n+1}-\beta _n|<\infty \).
Then, \(\{x_n\}\) defined by (1.4) converges strongly to a fixed point of T. Yao [33] also proved that the sequence \(\{x_n\}\) defined by (1.4) converges strongly to a fixed point of T provided that the control sequences \(\{\alpha _n\}\) and \(\{\beta _n\}\) satisfy the conditions
-
(B3)
\(\lim _{n\rightarrow \infty }\alpha _n=0\) and \(\sum _{n=0}^\infty \alpha _n=\infty \);
-
(B4)
\(0<\liminf _{n\rightarrow \infty }\beta _n\le \limsup _{n\rightarrow \infty }\beta _n<1\).
Song and Li [23] proposed the slightly different iteration process: \(x_0\in K\),
$$\begin{aligned} x_{n+1}=\beta _n x_n+(1-\beta _n)\big (\alpha _n u+(1-\alpha _n)Tx_n\big ), \quad n\ge 0. \end{aligned}$$
(1.5)
They proved that the conditions (B3) and (B4) are sufficient to guarantee the strong convergence of the sequence \(\{x_n\}\) generated by the algorithm (1.5).
Inspired and motivated by the above studies, Kanzow and Shehu [12] introduced a new inexact generalized Halpern iteration method. More precisely, they considered the following iteration process: \(x_0\in H\),
$$\begin{aligned} x_{n+1}=\delta _n u+\alpha _n x_n+\beta _n Tx_n+r_n, \quad n\ge 0, \end{aligned}$$
(1.6)
where \(T:H\rightarrow K\) is a nonexpansive mapping, \(u\in K\) denotes a fixed element, \(r_n\) represents the residual, and \(\delta _n, \alpha _n, \beta _n\in [0,1]\) are chosen such that \(\delta _n+\alpha _n+\beta _n\le 1\). Clearly, the iteration (1.6) is a natural generalization of the iterations (1.3), (1.4) and (1.5). They proved that the sequence \(\{x_n\}\) generated by (1.6) converges strongly to the projection of u onto F(T), under the following conditions:
-
(C1)
\(\lim _{n\rightarrow \infty }\delta _n=0\) and \(\sum _{n=0}^\infty \delta _n=\infty \);
-
(C2)
\(\sum _{n=0}^\infty (1-\delta _n-\alpha _n-\beta _n)<\infty \);
-
(C3)
\(\sum _{n=0}^\infty \Vert r_n\Vert <\infty \);
-
(C4)
\(\liminf _{n\rightarrow \infty }\alpha _n\beta _n>0\).
It is easy to verify that Kanzow’s conditions on the sequences \(\{\delta _n\}, \{\alpha _n\}, \{\beta _n\}\) and \(\{r_n\}\) exclude the canonical choices \(\delta _n=\frac{1}{2(n+1)}, \alpha _n=\frac{1}{2(n+1)}, \beta _n=1-\frac{1}{n+1}\) and \(r_n\equiv 0\). In this paper, we prove that the sequence \(\{x_n\}\) generated by (1.6) converges strongly to a point in F(T) under these choices. On the other hand, since \(\alpha _n\equiv 0\) does not satisfy condition (C4), the iteration (1.6) does not include (1.2) as a special case. To overcome these shortcomings, we give different control conditions on the sequences \(\{\delta _n\}, \{\alpha _n\}, \{\beta _n\}\) and \(\{r_n\}\) under which \(\{x_n\}\) generated by (1.6) converges strongly to a fixed point of T. Moreover, we give the convergence rate of the generalized Halpern iteration under specially selected parameters, which satisfy our conditions but do not satisfy Kanzow’s conditions.
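To make the scheme concrete, here is a minimal Python sketch of an iteration of the form \(x_{n+1}=\delta _n u+\alpha _n x_n+\beta _n Tx_n+r_n\) with the canonical choices above (\(\delta _n=\alpha _n=\frac{1}{2(n+1)}\), \(\beta _n=1-\frac{1}{n+1}\), \(r_n\equiv 0\)). The mapping T (projection onto the closed unit ball of \({\mathbb {R}}^2\)), the anchor u, and the starting point are our own illustrative choices:

```python
import numpy as np

def proj_unit_ball(x):
    """Projection onto the closed unit ball: a nonexpansive map whose
    fixed point set F(T) is the ball itself."""
    nrm = np.linalg.norm(x)
    return x if nrm <= 1.0 else x / nrm

u = np.array([3.0, 0.0])                 # anchor; P_{F(T)} u = (1, 0)
x = np.zeros(2)
for n in range(20000):
    delta = alpha = 1.0 / (2 * (n + 1))
    beta = 1.0 - 1.0 / (n + 1)           # delta + alpha + beta = 1, r_n = 0
    x = delta * u + alpha * x + beta * proj_unit_ball(x)
print(x)  # close to (1, 0), the projection of u onto F(T)
```

Note that these parameters satisfy (C1–C3) and the later condition (C9), but not condition (C4), which is exactly the gap this paper addresses.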
2 Preliminaries
Throughout this paper, we denote by H a real Hilbert space with inner product \(\langle \cdot , \cdot \rangle \) and norm \(\Vert \cdot \Vert \) and by I the identity operator on H. The strong (weak) convergence of a sequence \(\{x_n\}\) to x is denoted by \(x_n\rightarrow x\) (\(x_n\rightharpoonup x\)), respectively.
Definition 2.1
[2] Let D be a nonempty subset of H and let \(T:D\rightarrow H\). Then, T is
-
(1)
Nonexpansive if
$$\begin{aligned}\Vert Tx-Ty\Vert \le \Vert x-y\Vert , \quad \forall x, y\in D;\end{aligned}$$
-
(2)
Firmly nonexpansive if
$$\begin{aligned} \Vert Tx-Ty\Vert ^2\le \Vert x-y\Vert ^2-\Vert (I-T)x-(I-T)y\Vert ^2, \quad \forall x,y\in D; \end{aligned}$$
-
(3)
\(\nu \)-inverse strongly monotone if there is \(\nu >0\) such that
$$\begin{aligned}\langle Tx-Ty, x-y\rangle \ge \nu \Vert Tx-Ty\Vert ^2, \quad \forall x,y\in D.\end{aligned}$$
For any \(x\in H\), the projection onto a nonempty closed convex subset \(C\subseteq H\) is defined as
$$\begin{aligned} P_C x:=\arg \min _{y\in C}\Vert x-y\Vert . \end{aligned}$$
The projection \(P_C\) has the following well-known properties.
Lemma 2.2
[2, 9] Let \(C\subseteq H\) be a nonempty closed convex subset. Then, for all \(x,y\in H\) and \(z\in C\)
-
(1)
\(\langle x-P_C x, z-P_C x\rangle \le 0\);
-
(2)
\(P_C\) and \(I-P_C\) are both nonexpansive;
-
(3)
\(P_C\) and \(I-P_C\) are both firmly nonexpansive;
-
(4)
\(P_C\) and \(I-P_C\) are both 1-inverse strongly monotone.
We shall make use of the following well-known results.
Lemma 2.3
[25] Let \(\{x_n\}\) and \(\{z_n\}\) be bounded sequences in a Banach space X and let \(\{\gamma _n\}\) be a sequence in [0, 1] which satisfies the following condition:
$$\begin{aligned} 0<\liminf _{n\rightarrow \infty }\gamma _n\le \limsup _{n\rightarrow \infty }\gamma _n<1. \end{aligned}$$
Suppose
$$\begin{aligned} x_{n+1}=\gamma _n x_n+(1-\gamma _n)z_n, \quad n\ge 0, \end{aligned}$$
and
$$\begin{aligned} \limsup _{n\rightarrow \infty }\big (\Vert z_{n+1}-z_n\Vert -\Vert x_{n+1}-x_n\Vert \big )\le 0. \end{aligned}$$
Then, \(\lim _{n\rightarrow \infty }\Vert z_n-x_n\Vert =0\).
Lemma 2.4
[2] Let H be a Hilbert space, C a closed convex subset of H, and \(T:C\rightarrow C\) a nonexpansive mapping with \(F(T)\ne \emptyset \). If \(\{x_n\}\) is a sequence in C weakly converging to x and if \(\{(I-T)x_n\}\) converges strongly to y, then \((I-T)x=y\).
We need the following lemma whose proof is elementary and therefore omitted.
Lemma 2.5
Let H be a Hilbert space. Then
-
(i)
\(\Vert x+y\Vert ^2\le \Vert x\Vert ^2+2\langle y, x+y\rangle , \quad \forall x, y\in H.\)
-
(ii)
\(\Vert tx+sy\Vert ^2=t(t+s)\Vert x\Vert ^2+s(t+s)\Vert y\Vert ^2-st\Vert x-y\Vert ^2, \quad \forall x, y\in H, \forall s, t\in {\mathbb {R}}\).
The following result is also well known. It plays a central role in our strong convergence result.
Lemma 2.6
[30] Let \(\{a_n\}\) be a sequence of nonnegative real numbers satisfying the following relation:
$$\begin{aligned} a_{n+1}\le (1-\alpha _n)a_n+\alpha _n\sigma _n+\gamma _n, \quad n\ge 1, \end{aligned}$$
where (a) \(\{\alpha _n\}\subset [0,1]\), \(\sum _{n=1}^\infty \alpha _n=\infty \); (b) \(\limsup _{n\rightarrow \infty }\sigma _n\le 0\); and (c) \(\gamma _n \ge 0\ (n\ge 1)\), \(\sum _{n=1}^\infty \gamma _n<\infty \). Then, \(a_n\rightarrow 0\) as \(n\rightarrow \infty \).
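As a small sanity check of this lemma (the concrete sequences below are our own toy choices, not from the source), one can iterate the recursion \(a_{n+1}=(1-\alpha _n)a_n+\alpha _n\sigma _n+\gamma _n\) with equality for \(\alpha _n=\frac{1}{n+1}\), \(\sigma _n=\frac{1}{\sqrt{n+1}}\) (so that \(\limsup \sigma _n=0\)) and the summable perturbation \(\gamma _n=\frac{1}{(n+1)^2}\), and observe that \(a_n\rightarrow 0\):

```python
# Toy instance of the recursion in Lemma 2.6 (our own choice of sequences):
# a_{n+1} = (1 - alpha_n) a_n + alpha_n sigma_n + gamma_n
a = 1.0
for n in range(1, 10**6):
    alpha = 1.0 / (n + 1)
    sigma = 1.0 / (n + 1) ** 0.5      # limsup sigma_n = 0  -> condition (b)
    gamma = 1.0 / (n + 1) ** 2        # summable            -> condition (c)
    a = (1 - alpha) * a + alpha * sigma + gamma
print(a)  # small; a_n -> 0, as Lemma 2.6 predicts
```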
3 The generalized Halpern iteration with new control conditions
In this section, we investigate the strong convergence of the generalized Halpern iteration (1.6) under new control conditions on the parameters. First, we give an equivalent characterization of condition (C4): we introduce the following two conditions (C5) and (C6), and prove that conditions (C4), (C5), and (C6) are equivalent:
-
(C5)
\(0<\liminf _{n\rightarrow \infty }\alpha _n\le \limsup _{n\rightarrow \infty }\alpha _n<1\);
-
(C6)
\(0<\liminf _{n\rightarrow \infty }\beta _n\le \limsup _{n\rightarrow \infty }\beta _n<1\).
Lemma 3.1
Let \(\{\delta _n\}\), \(\{\alpha _n\}\) and \(\{\beta _n\}\) be real sequences in [0, 1], such that \(\delta _n+\alpha _n+\beta _n\le 1\). If condition (C1) and \(\lim _{n\rightarrow \infty }(1-\delta _n-\alpha _n-\beta _n)=0\) hold, then (C4) \(\Longleftrightarrow \) (C5) \(\Longleftrightarrow \) (C6).
Proof
We first prove that (C4) \(\Longleftrightarrow \) (C5).
‘\(\Longrightarrow \)’. Since \(\beta _n\le 1\), we have \(\alpha _n\ge \alpha _n\beta _n\), and similarly \(\beta _n\ge \alpha _n\beta _n\). From the monotonicity of the lower limit, we have
$$\begin{aligned} \liminf _{n\rightarrow \infty }\alpha _n\ge \liminf _{n\rightarrow \infty }\alpha _n\beta _n>0 \quad \text {and}\quad \liminf _{n\rightarrow \infty }\beta _n\ge \liminf _{n\rightarrow \infty }\alpha _n\beta _n>0. \end{aligned}$$
Since \(\alpha _n\le 1-\delta _n-\beta _n\) and \(\lim _{n\rightarrow \infty }\delta _n=0\), we obtain that
$$\begin{aligned} \limsup _{n\rightarrow \infty }\alpha _n\le \limsup _{n\rightarrow \infty }(1-\delta _n-\beta _n)=1-\liminf _{n\rightarrow \infty }\beta _n<1. \end{aligned}$$
‘\(\Longleftarrow \)’. If \(0<\liminf _{n\rightarrow \infty }\alpha _n\le \limsup _{n\rightarrow \infty }\alpha _n<1\), then, using \(\lim _{n\rightarrow \infty }\delta _n=0\) and \(\lim _{n\rightarrow \infty }(1-\delta _n-\alpha _n-\beta _n)=0\),
$$\begin{aligned} \liminf _{n\rightarrow \infty }\beta _n=\liminf _{n\rightarrow \infty }\big (1-\delta _n-\alpha _n-(1-\delta _n-\alpha _n-\beta _n)\big )=1-\limsup _{n\rightarrow \infty }\alpha _n>0. \end{aligned}$$
Therefore, \(\liminf _{n\rightarrow \infty }\alpha _n\beta _n\ge \liminf _{n\rightarrow \infty }\alpha _n\cdot \liminf _{n\rightarrow \infty }\beta _n>0\).
In the same way, we can prove that (C4) \(\Longleftrightarrow \) (C6). \(\square \)
Second, we give the following two conditions (C7) and (C8). By replacing conditions (C2) and (C3) with conditions (C7) and (C8), respectively, we prove the strong convergence of the generalized Halpern iteration (1.6):
-
(C7)
\(1-\delta _n-\alpha _n-\beta _n=o(\delta _n), \ \mathrm {i.e.},\ \lim _{n\rightarrow \infty }\frac{1-\delta _n-\alpha _n-\beta _n}{\delta _n}=0\);
-
(C8)
\(\Vert r_n\Vert =o(\delta _n),\ \mathrm {i.e.},\ \lim _{n\rightarrow \infty }\frac{\Vert r_n\Vert }{\delta _n}=0\).
Theorem 3.2
Let H be a real Hilbert space, K a nonempty, closed, and convex subset of H, \(T:H\rightarrow K\) a nonexpansive mapping with \(F(T)\ne \emptyset \). Let \(\{x_n\}\) be generated by the iteration (1.6). Assume that \(\{\delta _n\}\), \(\{\alpha _n\}\) and \(\{\beta _n\}\) are real sequences in [0, 1], such that \(\delta _n+\alpha _n+\beta _n\le 1\) for all \(n\ge 0\) and conditions (C1),(C4),(C7), and (C8) hold. Then, \(\{x_n\}\) strongly converges to z, where \(z:=P_{F(T)}u\).
Proof
Note that the projection \(z=P_{F(T)}u\) exists, since F(T) is nonempty, closed, and convex. We now divide the proof into five steps.
Step 1. We show that \(\{x_n\}\) is bounded. In fact, without loss of generality, we may assume from condition (C8) that \(\Vert r_n\Vert \le \delta _n \ (n\ge 0)\). Take \(x^*\in F(T)\). It follows from (1.6) that:
Using induction, it is easy to see that this implies
Therefore, \(\{x_n\}\) is bounded in H, and so is \(\{Tx_n\}\).
Step 2. Here, we show that \(\Vert x_{n+1}-x_n\Vert \rightarrow 0\). Since condition (C4) holds, it can be obtained from Lemma 3.1 that
Setting \(z_{n}=\frac{\delta _n u+\beta _n Tx_n+r_n}{1-\alpha _n}\), we have
Observe that
This implies that
Since \(\limsup _{n\rightarrow \infty }\alpha _n<1, \lim _{n\rightarrow \infty }\delta _n=0, \lim _{n\rightarrow \infty }\Vert r_n\Vert =0, \lim _{n\rightarrow \infty }(1-\delta _n-\alpha _n-\beta _n)=0\) and \(\{Tx_n\}\) is bounded, we obtain that
Hence, by Lemma 2.3, we have
It follows from (3.1) that:
Step 3. We now show that \(\Vert x_n-Tx_n\Vert \rightarrow 0\). Indeed, we have
Hence
From (3.2) and conditions (C1), (C7), and (C8), this implies that
Step 4. We show that \(\limsup _{n\rightarrow \infty }\langle u-z, x_{n+1}-z\rangle \le 0\). Since \(\{x_n\}\) is bounded, we can extract a subsequence \(\{x_{n_k}\}\) of \(\{x_n\}\), such that
and \(x_{n_k}\rightharpoonup {\bar{x}}\). It follows from Lemma 2.4 and (3.3) that \({\bar{x}}\in F(T)\). Therefore, we obtain from Lemma 2.2 (1) that
Step 5. In this final step, we prove that \(x_n\rightarrow z\). From Lemma 2.5 (i) and (ii), we have
for some \(M_1>0\) (recall that \(\{x_n\}\) is bounded). Applying Lemma 2.6 to (3.4) and using conditions (C1), (C7), and (C8), we obtain \(\lim _{n\rightarrow \infty }\Vert x_n-z\Vert =0\). Thus, \(x_n\rightarrow z=P_{F(T)}u\) as \(n\rightarrow \infty \). \(\square \)
Remark 3.3
(1) Condition (C7) and condition (C2) are not comparable. Indeed, take \(\delta _n=\frac{1}{\sqrt{n}}\), \(\alpha _n=1-\frac{1}{\sqrt{n}}-\frac{2}{n}-\epsilon _0\), \(\beta _n=\frac{1}{n}+\epsilon _0\) and \(r_n\equiv 0\) for all \(n\ge 5\), where \(\epsilon _0\) is a constant satisfying \(0<\epsilon _0<\frac{3-\sqrt{5}}{5}\). Then, we have \(1-\delta _n-\alpha _n-\beta _n=\frac{1}{n} \ (n\ge 5)\). It is easy to verify that condition (C7) is satisfied, but condition (C2) is violated. On the other hand, we take
and \(r_n\equiv 0\) for all \(n\ge 3\), where \(\epsilon _0\) is a constant satisfying \(0<\epsilon _0<\frac{1}{2}\). Then
which satisfies condition (C2), but does not satisfy (C7).
(2) We now show that condition (C8) and condition (C3) are also not comparable. Take \(\delta _n=\frac{1}{\sqrt{n}}\), \(\alpha _n=1-\frac{1}{\sqrt{n}}-\frac{1}{n}-\epsilon _0\), \(\beta _n=\frac{1}{n}+\epsilon _0\) and \(\Vert r_n\Vert =\frac{1}{n}\) for all \(n\ge 3\), where \(\epsilon _0\) is a constant satisfying \(0<\epsilon _0<\frac{2-\sqrt{3}}{3}\). It is easy to verify that condition (C8) is satisfied, but condition (C3) is violated. On the other hand, we take
and \(\Vert r_n\Vert =\frac{1}{n^2}\) for all \(n\ge 3\), where \(\epsilon _0\) is a constant satisfying \(0<\epsilon _0<\frac{1}{2}\). It is easy to verify that condition (C3) is satisfied, but condition (C8) is violated.
(3) Conditions (C7) and (C8) broaden the application range of the generalized Halpern iteration. In practical applications, when the residual \(\Vert r_n\Vert \) does not satisfy condition (C3), the strong convergence of the generalized Halpern iteration can still be guaranteed by adjusting the parameter \(\delta _n\) to satisfy condition (C8).
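The non-comparability in Remark 3.3 (1) is easy to check numerically: for \(\delta _n=\frac{1}{\sqrt{n}}\) and slack \(1-\delta _n-\alpha _n-\beta _n=\frac{1}{n}\), the partial sums of the slack diverge (so (C2) fails), while the ratio of slack to \(\delta _n\) tends to 0 (so (C7) holds). A short Python check (the cutoff \(10^6\) is our own choice):

```python
import math

# delta_n = 1/sqrt(n), slack_n := 1 - delta_n - alpha_n - beta_n = 1/n (n >= 5),
# as in Remark 3.3 (1).
partial_sum = sum(1.0 / n for n in range(5, 10**6))   # grows without bound -> (C2) fails
ratio = (1.0 / 10**6) / (1.0 / math.sqrt(10**6))      # slack_n / delta_n at n = 10^6
print(partial_sum, ratio)  # large harmonic-type sum; ratio = 1/sqrt(10^6) -> (C7) holds
```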
It is easy to observe that condition (C4) is quite restrictive, as it excludes many common choices, such as \(\delta _n=\frac{1}{2(n+1)}, \alpha _n=\frac{1}{2(n+1)}, \beta _n=1-\frac{1}{n+1}\) and \(r_n\equiv 0\). Therefore, we give the following new control condition, which includes such a choice as a special case:
-
(C9)
\(\limsup _{n\rightarrow \infty }\alpha _n<1\), either \(\lim _{n\rightarrow \infty }\frac{\delta _{n-1}}{\delta _n}=1\) or \(\sum _{n=1}^\infty |\delta _n-\delta _{n-1}|<\infty \), either \(\lim _{n\rightarrow \infty }\frac{|\alpha _n-\alpha _{n-1}|}{\delta _n}=0\) or \(\sum _{n=1}^\infty |\alpha _n-\alpha _{n-1}|<\infty \).
Theorem 3.4
Let H be a real Hilbert space, K a nonempty, closed, and convex subset of H, and \(T:H\rightarrow K\) a nonexpansive mapping with \(F(T)\ne \emptyset \). Let \(\{x_n\}\) be generated by the iteration (1.6). Assume that \(\{\delta _n\}\), \(\{\alpha _n\}\) and \(\{\beta _n\}\) are real sequences in [0, 1], such that \(\delta _n+\alpha _n+\beta _n\le 1\) for all \(n\ge 0\) and conditions (C1–C3) and (C9) hold. Then, \(\{x_n\}\) strongly converges to z, where \(z:=P_{F(T)}u\).
Proof
Similar to the proof of Theorem 3.2, we also divide the proof into five steps.
Step 1. We show that \(\{x_n\}\) is bounded. Take \(x^*\in F(T)\). It follows from (1.6) that:
Using induction, it is easy to see that this implies
which yields
It follows from condition (C3) that \(\{x_n\}\) is bounded in H, and so is \(\{Tx_n\}\).
Step 2. We now show that \(\Vert x_{n+1}-x_n\Vert \rightarrow 0\). Indeed, from (1.6), we have
for some \(M_2>0\) (recall that \(\{x_n\}\) and \(\{Tx_n\}\) are bounded). Since \(\sum _{n=0}^\infty \Vert r_n\Vert <\infty \) and \(\sum _{n=0}^\infty (1-\delta _n-\alpha _n-\beta _n)<\infty \), it is easy to obtain that \(\sum _{n=1}^\infty \Vert r_n-r_{n-1}\Vert <\infty \) and \(\sum _{n=1}^\infty |(1-\delta _{n-1}-\alpha _{n-1}-\beta _{n-1})-(1-\delta _n-\alpha _n-\beta _n)|<\infty \). From conditions (C1), (C9) and Lemma 2.6, we have
The proof of Step 3–Step 5 is the same as that of Theorem 3.2, so we omit it. \(\square \)
The following example shows that our Theorem 3.4 fails if condition (C9) does not hold.
Example 3.5
Let H be any real Hilbert space. Choose an arbitrary element \(y\in H\), such that \(y\ne 0\) and \(\Vert y\Vert =1\), and define a subset K of H by \(K=\{x\in H\mid x=\lambda y, \lambda \in [0,1]\}\), i.e., K is the line segment connecting the origin to the given point y. Observe that K is a nonempty, closed, and convex subset of H. Then, define the mapping \(T:H\rightarrow K\) by \(Tx=0\) for all \(x\in H\). Clearly, T is nonexpansive, and \(F(T)=\{0\}\).
(1) Consider the iterative scheme (1.6) with
Take \(u=y\in K\), and let \(x_0=u\) be the starting point. It is clear that conditions (C1–C3) are satisfied, but both condition (C9) and condition (C4) are violated. It is proved in [12] that \(\{x_n\}\) generated by the iteration (1.6) does not converge strongly to 0, which is the unique fixed point of T.
(2) Consider the iterative scheme (1.6) with
Then, it is clear that conditions (C1–C3) are satisfied, but condition (C4) is violated. On the other hand, since condition (C9) is satisfied, we can verify that \(\{x_n\}\) generated by the iteration (1.6) converges strongly to 0, the unique fixed point of T, for all \(u\in K\) and \(x_0\in H\). It follows from (1.6) that:
Using induction, it is easy to obtain
Thus, \(\{x_n\}\) is bounded. This implies that
Hence, \(\{x_n\}\) strongly converges to 0.
Remark 3.6
(1) It is easy to verify that the canonical choices \(\delta _n=\frac{1}{2(n+1)}, \alpha _n=\frac{1}{2(n+1)}, \beta _n=1-\frac{1}{n+1}\) and \(r_n\equiv 0\) satisfy the conditions (C1–C3) and (C9), but do not satisfy the condition (C4).
(2) In conditions (C1–C3) and (C9), if \(\delta _n+\alpha _n+\beta _n=1\), \(\alpha _n\equiv 0\) and \(r_n\equiv 0\), then conditions (C1–C3) and (C9) are reduced to conditions (A1), (A3) or (A4). Therefore, iteration (1.6) with conditions (C1–C3) and (C9) is a generalization of iteration (1.2) with conditions (A1), (A3), or (A4). However, \(\alpha _n\equiv 0\) does not satisfy condition (C4). Therefore, iteration (1.6) with conditions (C1–C4) is not a generalization of iteration (1.2).
(3) We show that condition (C9) and condition (C4) are not comparable. We take \(\delta _n=\frac{1}{n+3}\), \(\alpha _n=\alpha _n^\prime +\epsilon _0\), \(\beta _n=1-\frac{1}{n+3}-\alpha _n^\prime -\epsilon _0\) and \(r_n\equiv 0\) for all \(n\ge 0\), where
\(\epsilon _0\) is a constant satisfying \(0<\epsilon _0<\frac{1}{6}\). It is easy to verify that conditions (C1–C4) are satisfied. However, condition (C9) is violated. Indeed, we just need to verify that neither \(\lim _{n\rightarrow \infty }\frac{|\alpha _n-\alpha _{n-1}|}{\delta _n}=0\) nor \(\sum _{n=1}^\infty |\alpha _n-\alpha _{n-1}|<\infty \) holds. It is easy to obtain that
Thus
This implies that \(\frac{|\alpha _n-\alpha _{n-1}|}{\delta _n}\rightarrow \infty \). When \(n>0\) and n is even
Hence
This implies that \(\sum _{n=1}^\infty |\alpha _n-\alpha _{n-1}|\) is divergent, since \(\sum _{n=1}^\infty \big (\frac{1}{\sqrt{n+3}}-\frac{1}{n+1}\big )\) is divergent.
Combining this with (1) of Remark 3.6, it can be seen that condition (C9) and condition (C4) are not comparable.
Next, we give another control condition on parameters. Under this condition, the strong convergence of \(\{x_n\}\) generated by the iteration (1.6) to a fixed point of T can also be proved.
-
(C10)
\(\liminf _{n\rightarrow \infty }\beta _n>0\), either \(\lim _{n\rightarrow \infty }\frac{\delta _{n-1}}{\delta _n}=1\) or \(\sum _{n=1}^\infty |\delta _n-\delta _{n-1}|<\infty \), either \(\lim _{n\rightarrow \infty }\frac{|\beta _n-\beta _{n-1}|}{\delta _n}=0\) or \(\sum _{n=1}^\infty |\beta _n-\beta _{n-1}|<\infty \).
Theorem 3.7
Let H be a real Hilbert space, K a nonempty, closed, and convex subset of H, and \(T:H\rightarrow K\) a nonexpansive mapping with \(F(T)\ne \emptyset \). Let \(\{x_n\}\) be generated by the iteration (1.6). Assume that \(\{\delta _n\}\), \(\{\alpha _n\}\) and \(\{\beta _n\}\) are real sequences in [0, 1], such that \(\delta _n+\alpha _n+\beta _n\le 1\) for all \(n\ge 0\) and conditions (C1–C3) and (C10) hold. Then, \(\{x_n\}\) strongly converges to z, where \(z:=P_{F(T)}u\).
The proof of Theorem 3.7 is similar to that of Theorem 3.4, so we omit its proof.
4 Convergence rate analysis for the generalized Halpern iteration
In [8], Cominetti et al. showed that \(\Vert x_n-Tx_n\Vert \) in (1.1) converges to zero at a rate of \(O(1/\sqrt{\sigma _n})\), where \(\sigma _n=\sum _{k=0}^{n}\lambda _k(1-\lambda _k)\), \(n\ge 0\), whenever \(\lim _{n\rightarrow \infty }\sigma _n=\infty \). In particular, when \(\lambda _n\equiv \lambda \in (0,1)\), this convergence rate becomes \(O(1/\sqrt{n+1})\). In this section, we give the convergence rate of \(\Vert x_n-Tx_n\Vert \) and \(\Vert x_n-x_{n-1}\Vert \) in (1.6) with a particular choice of the parameters \(\delta _n, \alpha _n, \beta _n\) and \(r_n\). The following technical lemma, for which we refer to [22], plays a crucial role in the convergence rate analysis.
Lemma 4.1
Let \(M>0\). Assume that \(\{a_n\}\) is a sequence of nonnegative real numbers which satisfy \(a_1\le 2M\) and
where \(\{b_n\}\) is a sequence which is defined by \(b_n=\frac{2}{n+4}\). Then, the sequence \(\{a_n\}\) satisfies
Proof
We argue by induction: assume that the claim is true for some \(n\ge 1\) and prove that it is true for \(n+1\). Using the induction assumption, we obtain
This completes the proof of the desired result. \(\square \)
Theorem 4.2
Let \(\{x_n\}\) be a sequence generated by the iteration (1.6), where \(\{\delta _n\}, \{\alpha _n\}, \{\beta _n\}\), and \(\{r_n\}\) are defined by
Then, for any \({\bar{x}}\in F(T)\), the two sequences \(\{\Vert x_n-x_{n-1}\Vert \}\) and \(\{\Vert x_n-Tx_n\Vert \}\) converge to 0, and the convergence rates are given by
and
where \(C_{{\bar{x}}}=\max \{\Vert u-{\bar{x}}\Vert , \Vert x_0-{\bar{x}}\Vert \}\).
Proof
Since \(\delta _n+\alpha _n+\beta _n=1\) and \(r_n\equiv 0\), it follows from (1.6) that:
Using induction, we have
Thus, we obtain that
and
Again, it follows from (1.6) and \(\delta _n+\alpha _n+\beta _n=1\) that:
Moreover
Hence, the convergence rate for the sequence \(\Vert x_n-x_{n-1}\Vert \) is an immediate result of applying Lemma 4.1 to (4.2) with \(a_n=\Vert x_n-x_{n-1}\Vert , b_n=\delta _n, M=C_{{\bar{x}}}\).
Similar to (4.1), it is easy to obtain that \(\Vert u-Tx_n\Vert \le 2C_{{\bar{x}}}\). The convergence rate for \(\Vert x_n-Tx_n\Vert \) is derived by the following arguments:
and we have
This completes the proof. \(\square \)
Remark 4.3
(1) Obviously, the parameters in Theorem 4.2 satisfy our condition (C9), but do not satisfy Kanzow’s condition (C4).
(2) Theorem 4.2 gives the convergence rate of the generalized Halpern iteration (1.6) using the quantity \(\Vert x_n-Tx_n\Vert \) as a measure of convergence. This rate implies that an \(\epsilon \)-accurate solution, in the sense that \(\Vert x_n-Tx_n\Vert \le \epsilon \), is obtained in no more than \(O(1/\epsilon )\) iterations.
Example 4.4
Continue to consider Example 3.5. Consider the iterative scheme (1.6) with
Now, take \(u=y\in K\), and let \(x_0=u\) be the starting point. Since \(r_n\equiv 0\), \(\delta _n+\alpha _n+\beta _n=1\), \(x_0\in K\), \(u\in K\), and T maps into the convex set K, it follows by induction that the entire sequence \(\{x_n\}\) generated by (1.6) belongs to K. Therefore, we have the representation \(x_n=\lambda _n y\) for \(n\ge 0\) for some \(\lambda _n\in [0,1]\). Taking this into account, we can write (1.6) as
This implies that
Hence
Remark 4.5
It should be noted that the convergence rate proved in Theorem 4.2 means that the sequence \(\{\Vert x_n-Tx_n\Vert \}\) converges to 0 at a rate of O(1/n). Moreover, it can be seen from Example 4.4 that this convergence rate cannot be improved in general.
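The tightness of the O(1/n) rate can also be checked numerically. With the mapping \(Tx\equiv 0\) of Example 3.5 viewed on the real line, \(r_n\equiv 0\), and the canonical parameters \(\delta _n=\alpha _n=\frac{1}{2(n+1)}\), \(\beta _n=1-\frac{1}{n+1}\) (the choice of u and \(x_0\) below is our own), the residual \(\Vert x_n-Tx_n\Vert =|x_n|\) decays like a constant times 1/n and no faster:

```python
# T x = 0, so the beta_n T x_n term vanishes and ||x_n - T x_n|| = |x_n|.
u, x = 1.0, 1.0
N = 10**4
for n in range(N):
    delta = alpha = 1.0 / (2 * (n + 1))   # beta_n = 1 - 1/(n+1), r_n = 0
    x = delta * u + alpha * x
print(N * x)  # n * |x_n| tends to a positive constant (about 0.5 here)
```

The fact that \(n\,\Vert x_n-Tx_n\Vert \) stabilizes at a positive constant, rather than tending to 0, is consistent with Remark 4.5.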
5 Application to the split feasibility problem
Let C and Q be nonempty closed convex subsets of the real Hilbert spaces \(H_1\) and \(H_2\), respectively, and let \(A:H_1\rightarrow H_2\) be a bounded linear operator. The split feasibility problem (SFP) was first introduced by Censor and Elfving [6] to solve the phase retrieval problem, and is formulated as
$$\begin{aligned} \text {find } x\in C \text { such that } Ax\in Q. \end{aligned}$$
(5.1)
Several iterative algorithms have been presented to solve the SFP provided a solution exists; see [16, 18, 19, 31, 34, 35] and the references therein. One of the most popular and practical algorithms is the CQ algorithm proposed by Byrne [4], which is defined as follows:
$$\begin{aligned} x_{n+1}=P_C\big (x_n-\gamma A^*(I-P_Q)Ax_n\big ), \quad n\ge 0, \end{aligned}$$
(5.2)
where \(\gamma \in (0,\frac{2}{\Vert A\Vert ^2})\), \(A^*\) is the adjoint operator of A, and \(P_C\) and \(P_Q\) stand for the projections onto C and Q, respectively.
Let \(F=A^*(I-P_Q)A\) and \(T=P_C(I-\gamma A^*(I-P_Q)A)=P_C(I-\gamma F)\). Next, we prove that T is a nonexpansive mapping if \(0<\gamma \le \frac{2}{\Vert A\Vert ^2}\). Indeed, from Lemma 2.2, we have, for all \(x,y\in H_1\),
$$\begin{aligned} \langle Fx-Fy, x-y\rangle =\langle (I-P_Q)Ax-(I-P_Q)Ay, Ax-Ay\rangle \ge \Vert (I-P_Q)Ax-(I-P_Q)Ay\Vert ^2\ge \frac{1}{\Vert A\Vert ^2}\Vert Fx-Fy\Vert ^2, \end{aligned}$$
that is, F is \(\frac{1}{\Vert A\Vert ^2}\)-inverse strongly monotone. It then follows from \(0<\gamma \le \frac{2}{\Vert A\Vert ^2}\) that
$$\begin{aligned} \Vert (I-\gamma F)x-(I-\gamma F)y\Vert ^2&=\Vert x-y\Vert ^2-2\gamma \langle Fx-Fy, x-y\rangle +\gamma ^2\Vert Fx-Fy\Vert ^2\\ {}&\le \Vert x-y\Vert ^2-\gamma \Big (\frac{2}{\Vert A\Vert ^2}-\gamma \Big )\Vert Fx-Fy\Vert ^2\le \Vert x-y\Vert ^2. \end{aligned}$$
Since \(P_C\) is nonexpansive, this shows that T is a nonexpansive mapping.
Lemma 5.1
[32] Let \(\gamma >0\) and \(T=P_C(I-\gamma A^*(I-P_Q)A)\). Then, \({\tilde{x}}\in H_1\) solves the SFP (5.1) if and only if \({\tilde{x}}\) solves the fixed point equation \({\tilde{x}}=T{\tilde{x}}\).
In general, the CQ algorithm has only weak convergence. According to Lemma 5.1, solving the SFP is equivalent to solving the fixed point problem of the nonexpansive mapping T. Therefore, we can prove the following strong convergence result by applying Theorem 3.4.
Theorem 5.2
Assume that the solution set of the SFP, denoted by S, is nonempty. Let \(\{\delta _n\}\), \(\{\alpha _n\}\) and \(\{\beta _n\}\) be real sequences in [0, 1], such that \(\delta _n+\alpha _n+\beta _n\le 1\) for all \(n\ge 0\). Let the sequence \(\{x_n\}\) be generated by
$$\begin{aligned} x_{n+1}=\delta _n u+\alpha _n x_n+\beta _n P_C\big (x_n-\gamma A^*(I-P_Q)Ax_n\big )+r_n, \quad n\ge 0, \end{aligned}$$
(5.3)
using an arbitrary starting point \(x_0\in H_1\), where \(u\in H_1\) denotes a fixed element, \(r_n\) is the residual, and \(\gamma \in (0, \frac{2}{\Vert A\Vert ^2}]\). If conditions (C1–C3) and (C9) hold, then \(\{x_n\}\) strongly converges to z, where \(z:=P_S u\).
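A minimal Python sketch of the iteration (5.3) on a toy SFP may help fix ideas (the sets, operator, anchor, and iteration count below are our own illustrative choices, not the paper's experiments): in \({\mathbb {R}}^2\), take C the closed unit ball, \(Q=\{y: y_2=0\}\), and \(A=I\), so that the solution set is \(S=[-1,1]\times \{0\}\) and the iterates should approach \(P_S u\):

```python
import numpy as np

# C = closed unit ball, Q = horizontal axis, A = I, so S = C ∩ Q = [-1,1] x {0}.
P_C = lambda x: x if np.linalg.norm(x) <= 1.0 else x / np.linalg.norm(x)
P_Q = lambda y: np.array([y[0], 0.0])
A = np.eye(2)
gamma = 1.0                                    # gamma = 1/||A||^2 <= 2/||A||^2
T = lambda x: P_C(x - gamma * (A.T @ (A @ x - P_Q(A @ x))))

u = np.array([2.0, 2.0])                       # expected limit: P_S u = (1, 0)
x = np.zeros(2)
for n in range(30000):
    delta = alpha = 1.0 / (2 * (n + 2))        # parameters satisfying (C1-C3), (C9)
    beta = (n + 1) / (n + 2)                   # delta + alpha + beta = 1, r_n = 0
    x = delta * u + alpha * x + beta * T(x)
print(x)  # close to (1, 0)
```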
In the following, we present two numerical experiments to illustrate the performance of the iteration (5.3) with conditions (C1–C3) and (C9). All algorithms in our experiments are coded in MATLAB R2016a on a personal computer with 16 GB RAM and a 1.80 GHz Intel(R) Core(TM) i7-8565U processor. “Iter.” denotes the number of iterations, and “CPU” denotes the computing time.
It is easy to verify that the parameters \(\delta _n=\frac{1}{2(n+2)}, \alpha _n=\frac{1}{2(n+2)}+\frac{1}{4}, \beta _n=\frac{n+1}{n+2}-\frac{1}{4}, r_n\equiv 0\) satisfy conditions (C1–C4), and that the parameters \(\delta _n=\frac{1}{2(n+2)}, \alpha _n=\frac{1}{2(n+2)}, \beta _n=\frac{n+1}{n+2}, r_n\equiv 0\) satisfy conditions (C1–C3) and (C9). For simplicity, the iteration (5.3) with the first set of parameters is abbreviated as Iteration A, and the iteration (5.3) with the second set of parameters is abbreviated as Iteration B.
Here are two examples. The first example solves the SFP in finite-dimensional space, while the other solves it in infinite-dimensional space.
Example 5.3
In this example, we apply the iteration (5.3) to solve the LASSO problem. Let us first recall the well-known LASSO problem [26], which is given as follows:
$$\begin{aligned} \min _{x\in {\mathbb {R}}^N}\ \frac{1}{2}\Vert Ax-b\Vert _2^2 \quad \text {subject to}\quad \Vert x\Vert _1\le t, \end{aligned}$$
(5.4)
where \(A\in {\mathbb {R}}^{M\times N}\), \(b\in {\mathbb {R}}^M\), and \(t>0\) is a given constant. Let \(C=\{x\in {\mathbb {R}}^N \mid \Vert x\Vert _1\le t\}\) and \(Q=\{b\}\); then the minimization problem (5.4) can be seen as an SFP (5.1). Thus, we can apply the iteration (5.3) to solve the problem (5.4).
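To make the reformulation concrete, here is a hedged Python sketch of the iteration (5.3) on a small synthetic LASSO instance. The dimensions, random seed, and iteration count are our own choices (smaller than the paper's MATLAB experiments), and the \(\ell _1\)-ball projection is the standard sort-based routine, with \(P_Q y=b\) since \(Q=\{b\}\):

```python
import numpy as np

def proj_l1_ball(v, t):
    """Euclidean projection of v onto {x : ||x||_1 <= t} (sort-based method)."""
    if np.abs(v).sum() <= t:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(u) + 1)
    rho = idx[u - (css - t) / idx > 0][-1]     # largest index with positive gap
    theta = (css[rho - 1] - t) / rho           # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

rng = np.random.default_rng(0)
M, N, K = 10, 20, 3
A = rng.standard_normal((M, N))
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.uniform(-2, 2, K)
b = A @ x_true                                  # noiseless measurements
t = np.abs(x_true).sum()                        # radius of the l1 ball C
gamma = 1.0 / np.linalg.norm(A, 2) ** 2

# T = P_C(I - gamma A^T (I - P_Q) A); here P_Q y = b since Q = {b}.
T = lambda x: proj_l1_ball(x - gamma * A.T @ (A @ x - b), t)

u = np.zeros(N)
x = u.copy()
for n in range(20000):
    delta = alpha = 1.0 / (2 * (n + 2))         # parameters satisfying (C1-C3), (C9)
    beta = (n + 1) / (n + 2)                    # delta + alpha + beta = 1, r_n = 0
    x = delta * u + alpha * x + beta * T(x)
print(np.linalg.norm(x - T(x)))  # fixed-point residual; small if iterates near S
```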
For the experiments, the matrix A is generated from a normal distribution with zero mean and unit variance. The true sparse signal \(x^*\in {\mathbb {R}}^N\) is generated from a uniform distribution on the interval \([-2, 2]\), with K randomly chosen positions nonzero and the rest kept zero. The sample data are \(b=Ax^*\) (no noise is assumed). The goal is then to recover the K-sparse signal \(x^*\) by solving the LASSO problem (5.4).
In the first test, the parameters are set to \(M=50\), \(N=100\), \(t=K\), \(\gamma =\frac{3}{2\Vert A\Vert ^2}\) and \(u=(0, 0,\cdots , 0)^T\). The stopping criterion is \(\Vert x_{n+1}-x_n\Vert \le 10^{-6}\). The results are listed in Tables 1, 2, and 3 for different initial points \(x_0\) and different values of K.
Example 5.4
[27] Let \(H_1=H_2=L_2[0,1]\) with the inner product given by
$$\begin{aligned} \langle x, y\rangle =\int _0^1 x(t)y(t)\,\mathrm {d}t. \end{aligned}$$
Let \(C=\{x\in L_2[0,1] \mid \Vert x\Vert _{L_2}\le 1\}\) and \(Q=\{x\in L_2[0,1] \mid \langle x, \frac{t}{2}\rangle =0\}\). Find \(x\in C\), such that \(Ax\in Q\), where \((Ax)(t)=\frac{x(t)}{2}\).
In the second test, we take the stepsize \(\gamma =\frac{1}{\Vert A\Vert ^2}\) and \(u=u(t)\equiv 1\) for all \(t\in [0, 1]\). The stopping criterion is \(\Vert x_{n+1}-x_n\Vert \le 10^{-6}\). The results are listed in Table 4 for different initial points.
From Tables 1, 2, 3, and 4, we can see that our parameters make the algorithm perform better than Kanzow’s parameters in terms of CPU time and the number of iterations.
Data availability statement
Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.
References
Bauschke, H.H., Burachik, R.S., Combettes, P.L., Elser, V., Luke, D.R., Wolkowicz, H.: Fixed point algorithms for inverse problems in science and engineering, Springer optimization and its applications. Springer, New York (2011)
Bauschke, H.H., Combettes, P.L.: Convex analysis and monotone operator theory in Hilbert space. Springer-Verlag, New York (2011)
Berinde, V.: Iterative Approximation of Fixed Points. Lecture Notes in Mathematics, vol. 1912. Springer, Berlin (2007)
Byrne, C.: Iterative oblique projection onto convex sets and the split feasibility problem. Inv. Probl. 18, 441–453 (2002)
Cegielski, A.: Iterative Methods for Fixed Point Problems in Hilbert Spaces. Lecture Notes in Mathematics, vol. 2057. Springer, Berlin (2012)
Censor, Y., Elfving, T.: A multiprojection algorithm using Bregman projections in product space. Numer. Algorithms 8, 221–239 (1994)
Chidume, C.E., Chidume, C.O.: Iterative approximation of fixed points of nonexpansive mappings. J. Math. Anal. Appl. 318(1), 288–295 (2006)
Cominetti, R., Soto, J.A., Vaisman, J.: On the rate of convergence of Krasnosel’skiĭ-Mann iterations and their connection with sums of Bernoullis. Israel J. Math. 199, 757–772 (2014)
Goebel, K., Reich, S.: Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings. Marcel Dekker, New York and Basel (1984)
Halpern, B.: Fixed points of nonexpanding maps. Bull. Am. Math. Soc. 73, 957–961 (1967)
He, S., Yang, C.: Boundary point algorithms for minimum norm fixed points of nonexpansive mappings. Fixed Point Theory Appl. 2014, 56 (2014)
Kanzow, C., Shehu, Y.: Generalized Krasnosel’skiĭ-Mann-type iterations for nonexpansive mappings in Hilbert spaces. Comput. Optim. Appl. 67(3), 593–620 (2017)
Kim, T.H., Xu, H.K.: Strong convergence of modified Mann iterations. Nonlinear Anal. 61, 51–60 (2005)
Krasnosel’skiĭ, M.A.: Two remarks on the method of successive approximations. Uspekhi Mat. Nauk 10, 123–127 (1955)
Lions, P.L.: Approximation de points fixes de contractions, C. R. Acad. Sci. Paris Ser. A-B 284, 1357–1359 (1977)
López, G., Martín, V., Wang, F., Xu, H.K.: Solving the split feasibility problem without prior knowledge of matrix norms. Inv. Probl. 28, 085004 (2012)
Mann, W.R.: Mean value methods in iteration. Proc. Am. Math. Soc. 4, 506–510 (1953)
Masad, E., Reich, S.: A note on the multiple-set split convex feasibility problem in Hilbert space. J. Nonlinear Convex Anal. 8, 367–371 (2007)
Moudafi, A.: Alternating CQ-algorithm for convex feasibility and split fixed-point problems. J. Nonlinear Convex Anal. 15, 809–818 (2014)
Reich, S.: Weak convergence theorems for nonexpansive mappings in Banach spaces. J. Math. Anal. Appl. 67, 274–276 (1979)
Reich, S.: Approximating fixed points of nonexpansive mappings. PanAm. Math. J. 4(2), 23–28 (1994)
Sabach, S., Shtern, S.: A first order method for solving convex bilevel optimization problems. SIAM J. Optim. 27(2), 640–660 (2017)
Song, Y., Li, H.: Strong convergence of iterative sequences for nonexpansive mappings. Appl. Math. Lett. 22, 1500–1507 (2009)
Suzuki, T.: A sufficient and necessary condition for Halpern-type strong convergence to fixed points of nonexpansive mappings. Proc. Am. Math. Soc. 135(1), 99–106 (2007)
Suzuki, T.: Strong convergence of Krasnoselskii and Mann’s type sequences for one-parameter nonexpansive semigroups without Bochner integrals. J. Math. Anal. Appl. 305, 227–239 (2005)
Tibshirani, R.: Regression shrinkage and selection via the LASSO. J. R. Stat. Soc. B 58, 267–288 (1996)
Vinh, N.T., Cholamjiak, P., Suantai, S.: A new CQ algorithm for solving split feasibility problems in Hilbert spaces. Bull. Malays. Math. Sci. Soc. 42(5), 2517–2534 (2019)
Wittmann, R.: Approximation of fixed points of nonexpansive mappings. Arch. Math. (Basel) 58, 486–491 (1992)
Xu, H.K.: Another control condition in an iterative method for nonexpansive mappings. Bull. Aust. Math. Soc. 65, 109–113 (2002)
Xu, H.K.: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 66(2), 240–256 (2002)
Xu, H.K.: A variable Krasnosel’skiĭ-Mann algorithm and the multiple-set split feasibility problem. Inv. Probl. 22(6), 2021–2034 (2006)
Xu, H.K.: Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 150, 360–378 (2011)
Yao, Y., Chen, R., Yao, J.C.: Strong convergence and certain control conditions for modified Mann iteration. Nonlinear Anal. 68(6), 1687–1693 (2008)
Yu, H., Wang, F.: Modified relaxed CQ algorithms for split feasibility and split equality problems in Hilbert spaces. Fixed Point Theory 21(2), 819–832 (2020)
Yu, H., Zhan, W., Wang, F.: The ball-relaxed CQ algorithms for the split feasibility problem. Optimization 67(10), 1687–1699 (2018)
Funding
This work is supported by the National Natural Science Foundation of China (Nos. 11971216, 62072222).
Author information
Authors and Affiliations
Contributions
HY contributed to the conception of the study, wrote the main manuscript text, and performed the experiment. FW contributed significantly to analysis and manuscript preparation and helped perform the analysis with constructive discussions. All authors reviewed the manuscript.
Corresponding author
Ethics declarations
Conflict of interest
The authors declare no conflict of interest.
Consent for publication
Not applicable.
Human and animal ethics
Not applicable.
Consent to participate
Not applicable.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Yu, H., Wang, F. Generalized Halpern iteration with new control conditions and its application. J. Fixed Point Theory Appl. 25, 45 (2023). https://doi.org/10.1007/s11784-023-01050-2