Abstract
In this paper, we propose a new modification of the Gradient Projection Algorithm and the Forward–Backward Algorithm. Using our proposed algorithms, we establish two strong convergence theorems for solving convex minimization problem, monotone variational inclusion problem and fixed point problem for demicontractive mappings in a real Hilbert space. Furthermore, we apply our results to solve split feasibility and optimal control problems. We also give two numerical examples of our algorithm in real Euclidean space of dimension 4 and in an infinite dimensional Hilbert space, to show the efficiency and advantage of our results.
1 Introduction
In this paper, we assume that H is a real Hilbert space with inner product \(\langle \cdot , \cdot \rangle \) and norm \(\Vert \cdot \Vert .\)
Let C be a nonempty, closed and convex subset of H. A point \(x\in C\) is said to be a fixed point of the mapping \(T:C\rightarrow C\), if \(Tx = x.\) We denote by F(T) the fixed points set of T.
Definition 1.1
A mapping \(T:C\rightarrow C\) is said to be
-
(i)
contractive, if there exists \(k\in (0, 1)\) such that
$$\begin{aligned} ||Tx-Ty||\le k||x-y||\quad \forall x, y\in C; \end{aligned}$$if the above inequality holds with \(k=1\), then T is said to be nonexpansive,
-
(ii)
quasi-nonexpansive, if
$$\begin{aligned} \Vert Tx - p\Vert \le \Vert x - p\Vert \ \ \forall x\in C, \ \ p\in F(T), \end{aligned}$$ -
(iii)
demicontractive (or \(\psi \)-demicontractive), if there exists \(0<\psi < 1\) such that
$$\begin{aligned} \Vert Tx - p\Vert ^2 \le \Vert x - p\Vert ^2 + \psi \Vert x - Tx\Vert ^2 \ \ \forall x\in C, \ \ p\in F(T), \end{aligned}$$ -
(iv)
monotone, if
$$\begin{aligned} \langle x-y, Tx-Ty \rangle \ge 0 \quad \forall \ x,y \in C, \end{aligned}$$ -
(v)
\(\beta \)-strongly monotone, if there exists \(\beta > 0\) such that
$$\begin{aligned} \langle x-y, Tx-Ty \rangle \ge \beta {\Vert x-y\Vert }^2 \quad \forall \ x,y \in C, \end{aligned}$$ -
(vi)
\(\nu \)-inverse strongly monotone (for short, \(\nu \)-ism), if there exists \(\nu > 0 \) such that
$$\begin{aligned} \langle x-y, Tx-Ty \rangle \ge \nu {\Vert Tx-Ty\Vert }^2 \quad \forall \ x,y \in C. \end{aligned}$$
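These monotonicity notions can be checked numerically for a concrete operator. As an illustration (our own toy data, not from the paper), the linear map \(x\mapsto Ax\) with A symmetric positive semidefinite is \(\nu \)-ism with \(\nu = 1/\lambda _{\max }(A)\); the sketch below, assuming numpy, verifies the inequality of Definition 1.1 (vi) on random pairs:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = B.T @ B                      # symmetric positive semidefinite
L = np.linalg.eigvalsh(A).max()  # largest eigenvalue of A

def T(x):
    return A @ x

# check the nu-ism inequality with nu = 1/lambda_max(A) on random pairs
nu = 1.0 / L
for _ in range(100):
    x, y = rng.standard_normal(4), rng.standard_normal(4)
    lhs = np.dot(x - y, T(x) - T(y))
    rhs = nu * np.linalg.norm(T(x) - T(y)) ** 2
    assert lhs >= rhs - 1e-10
```

The same map is monotone (and \(\lambda _{\min }(A)\)-strongly monotone when A is positive definite), so one linear operator illustrates items (iv)-(vi) at once.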
Definition 1.2
A mapping \( T : H \rightarrow H \) is said to be firmly nonexpansive if and only if \(2T-I \) is nonexpansive or, equivalently,
$$\begin{aligned} \langle x - y, Tx - Ty \rangle \ge \Vert Tx - Ty\Vert ^2 \quad \forall x, y \in H. \end{aligned}$$
Alternatively, T is firmly nonexpansive if and only if T can be expressed as
$$\begin{aligned} T = \frac{1}{2}(I + S), \end{aligned}$$
where \( S: H \rightarrow H \) is nonexpansive. In this connection, see [1, Proposition 11.2].
It can easily be seen that if T is nonexpansive, then \(I-T\) is monotone, where I is the identity operator. Inverse strongly monotone (also referred to as co-coercive) operators have been widely used to solve practical problems in various fields, for instance, in traffic assignment problems (see, for example, [2, 3] and the references therein).
Remark 1.3
The class of demicontractive mappings is of central importance in optimization since it contains many common types of operators arising in optimization (see [4,5,6,7] and the references therein). More precisely, the class of demicontractive mappings contains the class of quasi-nonexpansive mappings (which contains the class of nonexpansive mappings with nonempty fixed point sets) and is more desirable for fixed point methods in image recovery (see [5, 7,8,9]). Moreover, the class of demicontractive mappings contains the class of firmly nonexpansive mappings, which in turn contains the metric resolvents and projections, known as important tools for solving optimization problems (see [6, 10, 11] and the references therein).
For each \( x \in H,\) there exists a unique point \( P_Cx \in C \) such that
$$\begin{aligned} \Vert x - P_Cx\Vert \le \Vert x - y\Vert \quad \forall y\in C, \end{aligned}$$
where \(P_C\) is called the metric projection of H onto C. We know that \(P_C\) is a nonexpansive mapping from H onto C. It is also known that \(P_C\) satisfies
$$\begin{aligned} \langle x - y, P_Cx - P_Cy \rangle \ge \Vert P_Cx - P_Cy\Vert ^2 \quad \forall x, y \in H. \end{aligned}$$
Furthermore, \(P_Cx \) is characterized by the properties \(P_Cx \in C\) and
$$\begin{aligned} \langle x - P_Cx, y - P_Cx \rangle \le 0 \quad \forall y\in C. \end{aligned}$$
In a real Hilbert space, metric projections are examples of firmly nonexpansive mappings. For more information on metric projections, see [1] and the references therein.
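These properties are easy to observe numerically. The sketch below (our own illustrative half-space in \(\mathbb {R}^3\), assuming numpy) implements the closed-form projection onto a half-space and checks firm nonexpansiveness on random pairs:

```python
import numpy as np

# projection onto the half-space C = {x : <a, x> <= b}, a standard closed form
a = np.array([1.0, -2.0, 3.0])
b = 1.0

def P_C(x):
    gap = np.dot(a, x) - b
    if gap <= 0:                       # x already lies in C
        return x.copy()
    return x - (gap / np.dot(a, a)) * a

rng = np.random.default_rng(1)
for _ in range(100):
    x, y = rng.standard_normal(3), rng.standard_normal(3)
    px, py = P_C(x), P_C(y)
    # firm nonexpansiveness: ||Px - Py||^2 <= <Px - Py, x - y>
    assert np.linalg.norm(px - py) ** 2 <= np.dot(px - py, x - y) + 1e-10
    assert np.dot(a, px) <= b + 1e-10  # Px lies in C
```

Firm nonexpansiveness immediately implies nonexpansiveness by the Cauchy-Schwarz inequality.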
Definition 1.4
A mapping \(T : H \rightarrow H \) is said to be an averaged mapping, if it can be written as the average of the identity mapping I and a nonexpansive mapping; that is
where \( \alpha \in (0,1) \) and \(S: H \rightarrow H \) is nonexpansive. More precisely, when (1.3) holds, we say that T is \( \alpha \)-averaged. Thus, firmly nonexpansive mappings (in particular, projections) are \(\frac{1}{2}\)-averaged mappings. The term “averaged mapping” was coined by Baillon–Bruck–Reich [12].
Consider the following constrained convex minimization problem:
$$\begin{aligned} \min _{x\in C} g(x), \end{aligned}$$(1.4)
where C is a closed convex subset of H and \( g : C \rightarrow \mathbb {R} \) is a real-valued convex function. We say that the minimization problem (1.4) is consistent if it has a solution. In the sequel, we shall denote the set of solutions of problem (1.4) by \( \Upsilon \). If g is a Fréchet differentiable functional, then the Gradient-Projection Algorithm (GPA) generates a sequence \( \{x_n\} \) according to the recursive formula
$$\begin{aligned} x_{n+1} = P_C(x_n - \lambda \nabla g(x_n)), \quad n\ge 1, \end{aligned}$$(1.5)
or more generally
$$\begin{aligned} x_{n+1} = P_C(x_n - \lambda _n \nabla g(x_n)), \quad n\ge 1, \end{aligned}$$(1.6)
where in both (1.5) and (1.6), the initial guess \( x_1 \) is taken from C arbitrarily, and the parameters \(\lambda \) and \( \lambda _n\) are positive real numbers. The convergence of algorithms (1.5) and (1.6) depends on the behaviour of the gradient \( \nabla g \). As a matter of fact, it is known that if \( \nabla g \) is \( \alpha \)-strongly monotone and L-Lipschitzian, that is,
$$\begin{aligned} \langle x-y, \nabla g(x)-\nabla g(y) \rangle \ge \alpha \Vert x-y\Vert ^2 \quad \forall x, y\in C, \end{aligned}$$(1.7)
and
$$\begin{aligned} \Vert \nabla g(x)-\nabla g(y)\Vert \le L\Vert x-y\Vert \quad \forall x, y\in C, \end{aligned}$$(1.8)
then for \(0< \gamma < \frac{2\alpha }{L^2},\) the operator
$$\begin{aligned} T = P_C( I - \gamma \nabla g ) \end{aligned}$$(1.9)
is a contraction. Hence, the sequence \( \{x_n\}\) defined by algorithm (1.5) converges in norm to the unique solution of the minimization problem (1.4). More generally, if the sequence \(\{{\lambda }_n\}\) is chosen to satisfy the property
$$\begin{aligned} 0< \liminf _{n\rightarrow \infty }\lambda _n \le \limsup _{n\rightarrow \infty }\lambda _n < \frac{2\alpha }{L^2}, \end{aligned}$$(1.10)
then the sequence \( \{x_n \} \) defined by algorithm (1.6) converges in norm to the unique minimizer of (1.4). However, if the gradient \(\nabla g \) fails to be strongly monotone, then the operator T defined by (1.9) may fail to be contractive. Consequently, the sequence \( \{x_n \}\) generated by algorithm (1.6) may fail to converge strongly (see [13]). If \( \nabla g \) is Lipschitzian, then algorithms (1.5) and (1.6) can still converge in the weak topology under certain conditions.
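As a concrete illustration of the strongly monotone case (our own toy data, not from the paper), the GPA iteration (1.5) can be run in a few lines for a strongly convex quadratic over the nonnegative orthant, where convergence in norm is guaranteed:

```python
import numpy as np

# GPA sketch: minimize g(x) = 0.5 x^T A x - c^T x over C = nonnegative orthant,
# with A positive definite so that grad g is strongly monotone and Lipschitz.
rng = np.random.default_rng(2)
M = rng.standard_normal((5, 5))
A = M.T @ M + np.eye(5)            # positive definite
c = rng.standard_normal(5)
L = np.linalg.eigvalsh(A).max()    # Lipschitz constant of grad g

grad = lambda x: A @ x - c
P_C = lambda x: np.maximum(x, 0.0) # projection onto the orthant

x = np.zeros(5)
lam = 1.0 / L                      # fixed step size in (0, 2/L)
for _ in range(2000):
    x = P_C(x - lam * grad(x))

# x solves the problem iff x is a fixed point of P_C(I - lam * grad g)
assert np.linalg.norm(x - P_C(x - lam * grad(x))) < 1e-8
```

Here the composed operator is a contraction, so the iterates converge linearly; without strong monotonicity only weak convergence can be expected, as discussed above.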
Xu [13] gave an alternative operator-oriented approach to algorithm (1.6), namely, an averaged mapping approach. He applied this averaged mapping approach to the gradient-projection algorithm (1.6) and to the relaxed gradient-projection algorithm. Moreover, he constructed a counterexample showing that algorithm (1.5) does not converge in norm in an infinite-dimensional space, and he also presented two modifications of the gradient-projection algorithm which are shown to converge strongly. Furthermore, he regularized the minimization problem (1.4) to devise an iterative scheme that generates a sequence converging in norm to the minimum-norm solution of (1.4) in the consistent case.
Recently, Cai and Shehu [14] introduced the following iterative algorithm for finding a fixed point of a strictly pseudocontractive mapping which is also a solution of a constrained convex minimization problem for a convex and continuously Fréchet differentiable functional g in a real Hilbert space, and proved the strong convergence of the sequence generated by their scheme.
Theorem 1.5
[14] Let C be a nonempty, closed and convex subset of a real Hilbert space H. Suppose that the minimization problem (1.4) is consistent and let \(\Upsilon \) denote its solution set. Assume that the gradient \(\nabla g\) is L-Lipschitzian with constant \(L>0.\) Let T be a k-strictly pseudo-contractive mapping on C into itself such that \(F(T)\cap \Upsilon \ne \emptyset .\) Let \(\{t_n\}\) be a sequence in (0, 1), \(\{\alpha _n\}\) in \((0,(1-k)(1-t_n)) \subset (0,1),\) and \(\{\lambda _n\}\) a sequence in \((0,\frac{2}{L})\) satisfying the following conditions:
-
(i)
\(\underset{n \rightarrow \infty }{\lim }t_n = 0;\)
-
(ii)
\(\sum _{n=1}^\infty t_n = \infty ; \)
-
(iii)
\( 0< \underset{n\rightarrow \infty }{\liminf }\alpha _n \le \underset{n\rightarrow \infty }{\limsup }\alpha _n < 1-k ;\)
-
(iv)
\( 0< \underset{n\rightarrow \infty }{\liminf }\lambda _n \le \underset{n\rightarrow \infty }{\limsup }\lambda _n < \frac{2}{L}. \)
Then the sequences \(\{u_n\}\) and \(\{x_n\}\) generated for fixed \( u\in C\) by \(u_1, x_1 \in C\)
converge strongly to \(x^*\in F(T) \cap \Upsilon ,\) where \(x^* = P_{F(T)\cap \Upsilon }u.\)
Remark 1.6
Having searched the literature, we observe that, to prove strong convergence results for the GPA problem and other related optimization problems, the CQ (modified Haugazeau) algorithms are often used. In some other cases (where algorithms other than the CQ algorithms are used), some compactness conditions are assumed on the operators under consideration, or the proof may be divided into two cases, which may result in a very long proof.
Motivated by the above works and Remark 1.6, we propose a new modification of the GPA and the FBA by adopting the idea of algorithm (1.11). Using our proposed algorithms, we establish two strong convergence theorems for solving convex minimization problem, monotone variational inclusion problem and fixed point problem for demicontractive mappings in a real Hilbert space. Furthermore, we apply our results to solve split feasibility and optimal control problems. We also give two numerical examples of our algorithm in real Euclidean space of dimension 4 and in an infinite dimensional Hilbert space, to show the efficiency and advantage of our results.
2 Preliminaries
Lemma 2.1
[15, 16] Let H be a real Hilbert space. Then the following hold:
-
(i)
\(||x+y||^2 \le ||y||^2 + 2\langle x,x+y \rangle \) for all \(x,y \in H.\)
-
(ii)
\(||\alpha x + (1-\alpha )y||^2 = \alpha ||x||^2 + (1-\alpha )||y||^2 - \alpha (1-\alpha )||x-y||^2\) for all \(x,y \in H\) and \(\alpha \in (0,1).\)
-
(iii)
\(||x+y||^2=||x||^2+ 2 \langle x,y \rangle + ||y||^2\) for all \(x,y \in H.\)
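The three identities of Lemma 2.1 are elementary and can be spot-checked numerically; the sketch below (assuming numpy, with random test vectors of our own choosing) verifies all three:

```python
import numpy as np

# Numerical spot-check of the identities in Lemma 2.1 (illustrative only).
rng = np.random.default_rng(6)
x, y = rng.standard_normal(3), rng.standard_normal(3)
alpha = 0.3

# (i): ||x + y||^2 <= ||y||^2 + 2<x, x + y>
assert np.linalg.norm(x + y)**2 <= np.linalg.norm(y)**2 \
    + 2 * np.dot(x, x + y) + 1e-10

# (ii): convex-combination identity
lhs = np.linalg.norm(alpha * x + (1 - alpha) * y)**2
rhs = alpha * np.linalg.norm(x)**2 + (1 - alpha) * np.linalg.norm(y)**2 \
    - alpha * (1 - alpha) * np.linalg.norm(x - y)**2
assert abs(lhs - rhs) < 1e-10

# (iii): polarization-type expansion
assert abs(np.linalg.norm(x + y)**2
           - (np.linalg.norm(x)**2 + 2 * np.dot(x, y)
              + np.linalg.norm(y)**2)) < 1e-10
```

Identity (ii), in particular, is the workhorse behind the boundedness estimates in Section 3.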
Lemma 2.2
([17]) Let C be a nonempty, closed and convex subset of a real Hilbert space H. Let \( T: C \rightarrow C\) be a nonexpansive mapping. Then \(I-T\) is demiclosed at 0, (i.e., if \(x_n \rightharpoonup x\in C\) and \(x_n-Tx_n \rightarrow 0\), then \(x=Tx\)).
Lemma 2.3
[13, 18] Let C be a nonempty subset of H. Then, the following statements hold:
-
(i)
If \(T:C\rightarrow H\) is \(\alpha \)-averaged, then for any \(z\in Fix(T)\) and for all \(x\in C,\)
$$\begin{aligned} \Vert Tx - z\Vert ^2 \le \Vert x -z\Vert ^2 - \frac{1-\alpha }{\alpha }\Vert Tx -x\Vert ^2. \end{aligned}$$ -
(ii)
If \(T_1 : H\rightarrow H\) and \(T_2 : H\rightarrow H\) are \(\alpha _1\) and \(\alpha _2\)-averaged respectively. Then \(T_1T_2\) is \((\alpha _1 + \alpha _2 - \alpha _1\alpha _2)\)-averaged.
Lemma 2.4
[19] Let \(\{a_n\}\) be a sequence of non-negative numbers such that
$$\begin{aligned} a_{n+1} \le (1-\alpha _n)a_n + \alpha _n r_n, \quad n\ge 1, \end{aligned}$$
where \(\{r_n\}\) is a sequence of real numbers bounded from above and \(\{\alpha _n\} \subset [0,1]\) satisfies \(\sum \alpha _n = \infty .\) Then
$$\begin{aligned} \limsup _{n\rightarrow \infty } a_n \le \limsup _{n\rightarrow \infty } r_n. \end{aligned}$$
3 Main Results
Theorem 3.1
Let C be a nonempty, closed and convex subset of a real Hilbert space H and f be a contraction mapping on C with coefficient \(k\in (0,1).\) Let \(T:C\rightarrow C\) be a \(\psi \)-demicontractive mapping with \(\psi \in [0, 1)\). Suppose that the minimization problem (1.4) is consistent and \(\Upsilon \) denotes its solution set such that \( \Theta := F(T)\cap \Upsilon \ne \emptyset \). Assume that the gradient \(\nabla g\) is L-Lipschitzian with constant \(L>0.\) Let the sequence \(\{x_n\}\) be generated for fixed \(x_1 \in C\) by
where \(T_\kappa = \kappa I + (1-\kappa )T, ~~\kappa \in [\psi , 1)\) such that T is demiclosed at 0, \(\{\alpha _n\},\)\(\{t_n\}\) and \(\{\beta _n\}\) are sequences in (0, 1) and \(\{\lambda _n\} \) is a sequence in \((0,\frac{2}{L})\) satisfying the following conditions:
-
(i)
\(\lim _{n\rightarrow \infty } \alpha _n = 0,\)\(\sum _{n=0}^\infty \alpha _n = \infty ;\)
-
(ii)
\(0< \liminf _{n\rightarrow \infty }\lambda _n \le \limsup _{n\rightarrow \infty }\lambda _n < \frac{2}{L};\)
-
(iii)
\(0< \liminf _{n\rightarrow \infty }\beta _n \le \limsup _{n\rightarrow \infty }\beta _n < 1;\)
-
(iv)
\(0< \liminf _{n\rightarrow \infty }t_n \le \limsup _{n\rightarrow \infty }t_n < 1.\)
Then \(\{x_n\}\) converges strongly to \(z\in \Theta \), where \(z =P_{\Theta }f(z).\)
Proof
It is well known that \( z \in C \) solves the minimization problem (1.4) if and only if z solves the fixed point equation:
$$\begin{aligned} z = P_C(z - \lambda \nabla g(z)), \end{aligned}$$
where \( \lambda > 0 \) is any fixed positive number. Due to condition (ii), we may assume that
$$\begin{aligned} 0 < a \le \lambda _n \le b < \frac{2}{L} \quad \forall n\ge 1, \end{aligned}$$
where a and b are constants. Furthermore, it is well known that the gradient \( \nabla g \) is \( \frac{1}{L}\)-ism, that \(I - {\lambda }_n \nabla g \) is nonexpansive and that \(P_C( I - \lambda \nabla g )\) is \( \frac{2 + \lambda L}{4} \)-averaged for \( 0< \lambda < \frac{2}{L}\). Hence, for each n, \(P_C( I - {\lambda }_n \nabla g )\) is \( \frac{2 + {\lambda }_n L}{4} \)-averaged. Therefore, we can write
$$\begin{aligned} P_C( I - {\lambda }_n \nabla g ) = (1-\mu _n)I + \mu _n S_n, \end{aligned}$$(3.2)
where \(S_n\) is nonexpansive for each \(n\ge 1,\) \( \mu _n = \frac{2 + {\lambda }_n L}{4} \in [a_1,b_1] \subset (0,1) \), \( a_1 = \frac{2 + aL}{4} \) and \( b_1 = \frac{2 + bL}{4} < 1 \). Let \(y_n = P_C(I-\lambda _n\nabla g)w_n.\) Then by (3.2), we obtain
$$\begin{aligned} y_n = (1-\mu _n)w_n + \mu _n S_nw_n. \end{aligned}$$(3.3)
Firstly, we show that if T is \(\psi \)-demicontractive, then \(T_\kappa \) is quasi-nonexpansive. Let \(x \in C\), \(y\in F(T)\) and \(0<\psi \le \kappa < 1;\) then we have
Hence, \(T_{\kappa }\) is quasi-nonexpansive. Furthermore, we know that \(F(T_\kappa ) = F(T).\)
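This relaxation step is easy to check numerically. The sketch below (assuming numpy, and using the mapping \(T(x)=-\frac{3}{2}x\) with \(\psi =\frac{1}{5}\) and \(\kappa =\frac{1}{2}\) from Example 5.1) verifies both the demicontractive inequality for T and the quasi-nonexpansiveness of \(T_\kappa \):

```python
import numpy as np

# T(x) = -1.5 x is psi-demicontractive with psi = 1/5 and fixed point 0;
# T_kappa = kappa*I + (1 - kappa)*T with kappa = 1/2 should be quasi-nonexpansive.
psi, kappa = 0.2, 0.5
T = lambda x: -1.5 * x
T_k = lambda x: kappa * x + (1 - kappa) * T(x)   # = -x/4

rng = np.random.default_rng(3)
p = np.zeros(4)                                   # the unique fixed point of T
for _ in range(100):
    x = rng.standard_normal(4)
    # demicontractive inequality (Definition 1.1 (iii))
    assert np.linalg.norm(T(x) - p)**2 <= np.linalg.norm(x - p)**2 \
        + psi * np.linalg.norm(x - T(x))**2 + 1e-10
    # quasi-nonexpansiveness of the relaxation T_kappa
    assert np.linalg.norm(T_k(x) - p) <= np.linalg.norm(x - p) + 1e-10
```

For this T the demicontractive inequality holds with equality, so \(\psi =\frac{1}{5}\) is the smallest admissible constant.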
Next, we show that \(\{x_n\}\) is bounded. Let \(z\in \Theta ,\) from (3.3), we obtain
Since \(T_\kappa \) is quasi-nonexpansive, we obtain from (3.1) that
Now, we obtain from (3.1) and Lemma 2.1 (ii) that
But from (3.1), we have
Similarly, we have from (3.3) that
Therefore, from (3.6), (3.7) and (3.8), we obtain
which shows that \(\{x_n\}\) is bounded; consequently, \(\{w_n\}\), \(\{y_n\}\) and \(\{z_n\}\) are also bounded.
Furthermore, we have from (3.7) that
Also, from (3.8), we obtain
Using Lemma 2.1 and (3.1) (noting that \(\alpha _n\in (0,1)\)), we have
Putting (3.12) in (3.9), we obtain
Let
Then, (3.13) becomes
Since \(\{x_n\}\) is bounded, \(\{\Gamma _n\}\) is bounded below. Furthermore, using Lemma 2.4 and condition (i) of Theorem 3.1 in (3.15), we obtain
Therefore, \(\liminf _{n\rightarrow \infty } \Gamma _n\) is finite. We have from (3.14) that
Since \(\{x_n\}\) is bounded, there exists a subsequence \(\{x_{n_k}\}\) of \(\{x_n\}\) such that \(x_{n_k} \rightharpoonup x^* \in H\) and
Since \(\{x_n\}\) is bounded and \(\underset{n\rightarrow \infty }{\liminf }\Gamma _n\) is finite, we have that \(\left\{ \frac{1}{\alpha _{n_k}\beta _{n_k}}(1-\beta _{n_k})\Vert x_{n_k+1} - y_{n_k}\Vert ^2\right\} \)
and \(\left\{ \frac{1}{\alpha _{n_k}\mu _{n_k}}(1-\mu _{n_k})\Vert y_{n_k} - w_{n_k}\Vert ^2\right\} \) are bounded. Also, by assumption (iii), we have that there exists \(b\in (0,1)\) such that \(\beta _n \le b <1\) and this implies that \(\frac{1}{\alpha _{n_k}\beta _{n_k}}(1-\beta _{n_k}) \ge \frac{1}{\alpha _{n_k}\beta _{n_k}}(1-b)>0\) and so we have that \(\left\{ \frac{1}{\alpha _{n_k}\beta _{n_k}}\Vert x_{n_k+1} - y_{n_k}\Vert ^2\right\} \) is bounded. Similarly, we obtain that \(\frac{1}{\alpha _{n_k}\mu _{n_k}}(1-\mu _{n_k}) \ge \frac{1}{\alpha _{n_k}\mu _{n_k}}(1-b_1)>0\) and \(\left\{ \frac{1}{\alpha _{n_k}\mu _{n_k}}\Vert y_{n_k} - w_{n_k}\Vert ^2\right\} \) is bounded. Observe from assumptions (i) and (iii) that there exists \(a\in (0,1)\) such that
This implies that \( \frac{\alpha _{n_k}}{\beta _{n_k}} \rightarrow 0, \ k\rightarrow \infty .\) Therefore, we obtain from (3.10) and \( \frac{\alpha _{n_k}}{\beta _{n_k}} \rightarrow 0, \ k\rightarrow \infty \) that
From (3.1) and (3.18), we obtain
Following the same argument as above, we obtain that
and that
From (3.7) and (3.18), we have that
Furthermore, from (3.1) and assumption (i), we obtain
Also, from (3.4) and (3.20), we obtain
So, we get that
Hence,
Observe that \(w_{n_k} \rightharpoonup x^* \in C\) as \( k \rightarrow \infty \), since \(w_{n_k} - x_{n_k} \rightarrow 0, \ k\rightarrow \infty \) and \(x_{n_k} \rightharpoonup x^* \in C, \ k \rightarrow \infty .\) We may assume that \(\lambda _{n_k} \rightarrow \lambda ;\) then we have \(0<\lambda < \frac{2}{L}.\) Set \( S:= P_C(I-\lambda \nabla g ); \) then S is nonexpansive and we get from (3.18) that
It then follows from Lemma 2.2 that \( x^*\in F(S)\). But \(F(S) = \Upsilon \). Therefore we have that \( x^* \in \Upsilon \).
Moreover, since \(\{x_{n_k}\}\) converges weakly to \(x^*\in C\) and \(y_{n_k} - x_{n_k} \rightarrow 0, \ k\rightarrow \infty ,\) the subsequence \(\{y_{n_k}\}\) of \(\{y_n\}\) also converges weakly to \(x^*\in C.\) Hence, by (3.19) and the demiclosedness of \(T_\kappa \) at the origin, we obtain that \(x^*\in F(T_\kappa ) = F(T).\) Hence \(x^*\in \Theta .\) Now, we obtain from (3.17) and the property of \(P_\Theta \) that
Thus from (3.16), we have that
Therefore, \(\lim _{n\rightarrow \infty }\Vert x_n - z\Vert = 0\) and this implies that \(\{x_n\}\) converges strongly to z. \(\square \)
If T is a strictly pseudocontractive mapping, then we obtain the following result.
Corollary 3.2
Let C be a nonempty, closed and convex subset of a real Hilbert space H and f be a contraction mapping on C with coefficient \(k\in (0,1).\) Let \(T:C\rightarrow C\) be a \(\psi \)-strictly pseudocontractive mapping with \(\psi \in [0, 1)\). Suppose that the minimization problem (1.4) is consistent and \(\Upsilon \) denotes its solution set such that \( \Theta := F(T)\cap \Upsilon \ne \emptyset \). Assume that the gradient \(\nabla g\) is L-Lipschitzian with constant \(L>0.\) Let the sequence \(\{x_n\}\) be generated for fixed \(x_1 \in C\) by
where \(T_\kappa = \kappa I + (1-\kappa )T, ~~\kappa \in [\psi , 1)\), \(\{\alpha _n\},\)\(\{t_n\}\) and \(\{\beta _n\}\) are sequences in (0, 1) and \(\{\lambda _n\} \) is a sequence in \((0,\frac{2}{L})\) satisfying the following conditions:
-
(i)
\(\lim _{n\rightarrow \infty } \alpha _n = 0,\)\(\sum _{n=0}^\infty \alpha _n = \infty ;\)
-
(ii)
\(0< \liminf _{n\rightarrow \infty }\lambda _n \le \limsup _{n\rightarrow \infty }\lambda _n < \frac{2}{L};\)
-
(iii)
\(0< \liminf _{n\rightarrow \infty }\beta _n \le \limsup _{n\rightarrow \infty }\beta _n < 1;\)
-
(iv)
\(0< \liminf _{n\rightarrow \infty }t_n \le \limsup _{n\rightarrow \infty }t_n < 1.\)
Then \(\{x_n\}\) converges strongly to \(z\in \Theta \), where \(z =P_{\Theta }f(z).\)
By setting \(f(x)=u~\forall x\in C\) in Theorem 3.1, we obtain the following result.
Corollary 3.3
Let C be a nonempty, closed and convex subset of a real Hilbert space H and \(T:C\rightarrow C\) be a \(\psi \)-demicontractive mapping with \(\psi \in [0, 1)\). Suppose that the minimization problem (1.4) is consistent and \(\Upsilon \) denotes its solution set such that \( \Theta := F(T)\cap \Upsilon \ne \emptyset \). Assume that the gradient \(\nabla g\) is L-Lipschitzian with constant \(L>0.\) Let the sequence \(\{x_n\}\) be generated for fixed \(x_1,~u \in C\) by
where \(T_\kappa = \kappa I + (1-\kappa )T, ~~\kappa \in [\psi , 1)\) such that T is demiclosed at 0, \(\{\alpha _n\},\)\(\{t_n\}\) and \(\{\beta _n\}\) are sequences in (0, 1) and \(\{\lambda _n\} \) is a sequence in \((0,\frac{2}{L})\) satisfying the following conditions:
-
(i)
\(\lim _{n\rightarrow \infty } \alpha _n = 0,\)\(\sum _{n=0}^\infty \alpha _n = \infty ;\)
-
(ii)
\(0< \liminf _{n\rightarrow \infty }\lambda _n \le \limsup _{n\rightarrow \infty }\lambda _n < \frac{2}{L};\)
-
(iii)
\(0< \liminf _{n\rightarrow \infty }\beta _n \le \limsup _{n\rightarrow \infty }\beta _n < 1;\)
-
(iv)
\(0< \liminf _{n\rightarrow \infty }t_n \le \limsup _{n\rightarrow \infty }t_n < 1.\)
Then \(\{x_n\}\) converges strongly to \(z\in \Theta \), where \(z =P_{\Theta }u.\)
We next investigate the problem of finding a zero of the sum of two monotone operators, which is formulated as the following monotone variational inclusion problem: Find \(x\in H\) such that
$$\begin{aligned} 0 \in Ax + Bx, \end{aligned}$$(3.26)
where \(A:H\rightarrow H\) and \(B: H\rightarrow 2^H\) are two monotone operators in a real Hilbert space H.
Lemma 3.4
Let H be a real Hilbert space; then the following well-known identity holds:
$$\begin{aligned} \Vert x-y\Vert ^2 = \Vert x\Vert ^2 - 2\langle x, y \rangle + \Vert y\Vert ^2 \quad \forall x, y \in H. \end{aligned}$$
Lemma 3.5
Let C be a nonempty subset of H, \(\nu \in \mathbb {R}^+,\)\(T:C\rightarrow H\) be \(\nu \)-ism and \(\gamma \in (0,2\nu ).\) Then \(I-\gamma T\) is \(\gamma /2\nu \)-averaged.
Proof
Set \(N = I - 2\nu T.\) Since T is \(\nu \)-ism, we obtain from Lemma 3.4 that
$$\begin{aligned} \Vert Nx - Ny\Vert ^2&= \Vert x - y\Vert ^2 - 4\nu \langle x-y, Tx-Ty \rangle + 4\nu ^2\Vert Tx-Ty\Vert ^2 \\&\le \Vert x - y\Vert ^2 - 4\nu ^2\Vert Tx-Ty\Vert ^2 + 4\nu ^2\Vert Tx-Ty\Vert ^2 = \Vert x - y\Vert ^2. \end{aligned}$$
Hence, N is nonexpansive. Thus, we obtain that
$$\begin{aligned} I-\gamma T = \left( 1-\frac{\gamma }{2\nu }\right) I + \frac{\gamma }{2\nu }N. \end{aligned}$$
Since \(\gamma \in (0,2\nu ),\) we have \(\gamma /2\nu \in (0,1)\); thus \(I-\gamma T\) is \(\gamma /2\nu \)-averaged. \(\square \)
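Lemma 3.5 can be sanity-checked numerically for a linear \(\nu \)-ism map (our own illustrative data, assuming numpy): for \(T(x)=Ax\) with A symmetric positive semidefinite, T is \(\nu \)-ism with \(\nu =1/\lambda _{\max }(A)\), and \(N=I-2\nu A\) should be nonexpansive:

```python
import numpy as np

# Check of Lemma 3.5 for the linear nu-ism map T(x) = A x, A symmetric PSD.
rng = np.random.default_rng(4)
B = rng.standard_normal((4, 4))
A = B.T @ B
nu = 1.0 / np.linalg.eigvalsh(A).max()     # T is nu-ism with this nu

N = np.eye(4) - 2 * nu * A
# operator norm of the symmetric matrix N is its largest |eigenvalue|
assert np.abs(np.linalg.eigvalsh(N)).max() <= 1 + 1e-10

# hence I - gamma*T = (1 - gamma/(2 nu)) I + (gamma/(2 nu)) N is averaged
gamma = nu                                  # any gamma in (0, 2*nu)
alpha = gamma / (2 * nu)
lhs = np.eye(4) - gamma * A
rhs = (1 - alpha) * np.eye(4) + alpha * N
assert np.allclose(lhs, rhs)
```

The second assertion is exactly the decomposition used at the end of the proof, with the averaging constant \(\alpha = \gamma /2\nu \in (0,1)\).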
We shall assume that problem (3.26) is consistent, namely, that its solution set, denoted by \(\Theta \), is nonempty. We now introduce an iterative algorithm that converges strongly to a solution of (3.26). More precisely, our algorithm starts with an arbitrary initial guess \(x_0 \in H,\) and generates \(x_{n+1}\) according to the recursion
where \(T_\kappa = \kappa I + (1-\kappa )T, ~~\kappa \in [\psi , 1)\) such that T is demiclosed at 0. Note that \(\Theta = F(J_{\gamma _nB}(I-\gamma _nA))\), i.e., \(x\in \Theta \) if and only if \(x = J_{\gamma _n B}(I-\gamma _n A)x.\)
Let \(z \in \Theta \). From Lemma 3.5, we obtain that \(I-\gamma _n A\) is \(\gamma _n/2\nu \)-averaged for every \(n\in \mathbb {N}.\) Since \(J_{\gamma _n B}\) is firmly nonexpansive, it is \(\frac{1}{2}\)-averaged. It follows from Lemma 2.3 (ii) that \(J_{\gamma _n B}(I-\gamma _n A)\) is \((2\nu + \gamma _n)/4\nu \)-averaged. Let \(\mu _n = \frac{2\nu + \gamma _n}{4\nu };\) in view of \(\gamma _n \in (0,2\nu ),\) we have that \(\mu _n \in (0,1).\) So \( J_{\gamma _n B}(I-\gamma _n A)\) is \(\mu _n\)-averaged. Hence it follows from Definition 1.4 that
where \(T_n\) is nonexpansive for every \(n\ge 1.\) By (3.28), we obtain that
Hence we obtain the following strong convergence theorem for finding a zero of the sum of two monotone operators.
Theorem 3.6
Let \(A:H\rightarrow H\) be a \(\nu \)-inverse strongly monotone mapping and \(B:H\rightarrow 2^H\) be a maximal monotone mapping. Let f be a contraction with constant \(k\in (0, 1)\) and T be a \(\psi \)-demicontractive mapping with \(\psi \in [0, 1)\). Let \(F(T)\cap \Theta \ne \emptyset \), \(\{\gamma _n\}\) be a sequence in \((0, 2\nu )\), and \(\{\alpha _n\},\) \(\{t_n\}\) and \(\{\beta _n\}\) be sequences in (0, 1) satisfying the following conditions:
-
(i)
\(\lim _{n\rightarrow \infty } \alpha _n = 0,\)\(\sum _{n=0}^\infty \alpha _n = \infty ;\)
-
(ii)
\(0< \liminf _{n\rightarrow \infty }\gamma _n \le \limsup _{n\rightarrow \infty }\gamma _n < 2\nu ;\)
-
(iii)
\(0< \liminf _{n\rightarrow \infty }\beta _n \le \limsup _{n\rightarrow \infty }\beta _n < 1;\)
-
(iv)
\(0< \liminf _{n\rightarrow \infty }t_n \le \limsup _{n\rightarrow \infty }t_n < 1.\)
Then, the sequence \(\{x_n\}\) generated by (3.27) converges strongly to \(z\in F(T)\cap \Theta \), where \(z =P_{F(T)\cap \Theta }f(z).\)
Proof
By replacing \(P_C(I-\lambda _n\nabla g)\) with \( J_{\gamma _nB}(I-\gamma _nA)\) in Algorithm 3.1, and following arguments similar to those in the proof of Theorem 3.1, we get that \(\{x_n\}\) generated by (3.27) converges strongly to \(z\in F(T)\cap \Theta ,\) where \(z =P_{F(T)\cap \Theta }f(z).\) \(\square \)
Remark 3.7
We now point out some differences between our method of proof of Theorem 3.1 and that of Theorem 2.4 of [14], viz:
-
(i)
The major key in proving Theorem 3.1 is to show that \(\limsup \nolimits _{n\rightarrow \infty }(-\Gamma _n)\le 0\) as given in (3.23) and using Lemma 2.4 in (3.15).
-
(ii)
In our convergence analysis, we did not make use of Lemma 2.3 of [14], which was used in the convergence analysis of proof of Theorem 2.4 in [14]; rather we used Lemma 2.4 of this paper.
-
(iii)
If we replace "\(f(x_n)\)" by "u" (for arbitrary \(u\in C\)) in Algorithm 3.1 (which is of viscosity-type), then Algorithm 3.1 becomes of Halpern-type (see Corollary 3.3), and the conclusion of Theorem 3.1 will still hold. However, we use a viscosity-type algorithm instead of a Halpern-type one because viscosity-type algorithms have a higher rate of convergence than Halpern-type algorithms. Moreover, it was established in [20, 21] that Halpern-type convergence theorems imply viscosity convergence theorems for weak contractions.
-
(iv)
Observe from the characterization of metric projection that,
$$\begin{aligned} z=P_{\Theta }f(z)\Longleftrightarrow \langle f(z)-z, z-y\rangle \ge 0 \quad \forall y\in C. \end{aligned}$$(3.29)Therefore, one advantage of adopting Algorithm (3.1) for our convergence analysis, is that it also converges strongly to a solution of the variational inequality (3.29) (see for example [22]).
-
(v)
As stated in Remark 1.6, in establishing strong convergence results for the GPA problems and other related optimization problems, the CQ algorithms (modified Haugazeau or Halpern-CQ modifications) are often used. However, these algorithms require, at each step of the iteration process, the computation of two subsets \(C_n\) and \(Q_n\), the computation of their intersection \(C_n\cap Q_n\), and the computation of the projection of the initial starting point onto this intersection, thus leading to an increase in the computational cost of the iteration. Therefore, algorithms that do not involve the construction of \(C_n\) and \(Q_n\) (as is the case here) are more interesting and of practical computational importance, since they are easier to implement than those that involve these computations.
Based on the above remark, Corollaries 3.2 and 3.3, our results improve and extend the results of Cai and Shehu [14], and many other important results in this direction.
4 Applications
In this section, we give applications of Theorem 3.1 to solve split feasibility and optimal control problems.
4.1 Split feasibility problem
The Split Feasibility Problem (SFP) was introduced by Censor and Elfving [23] and has attracted the attention of several authors due to its applications to image reconstruction, signal processing, and intensity-modulated radiation therapy.
The SFP is to find a point x such that
$$\begin{aligned} x \in C \quad \text {and} \quad Bx \in Q, \end{aligned}$$(4.1)
where C and Q are nonempty, closed and convex subsets of real Hilbert spaces \(H_1\) and \(H_2\), respectively and \( B: H_1 \rightarrow H_2 \) is a bounded linear operator.
Clearly \(x^*\) is a solution to the SFP (4.1) if and only if \(x^* \in C \) and \(Bx^* - P_QBx^* = 0. \) Several iterative methods have been developed for solving the SFP and its related optimization problems (see for example, [24,25,26,27,28,29,30,31,32,33,34]).
The proximity function g is defined by
$$\begin{aligned} g(x) = \frac{1}{2}\Vert Bx - P_QBx\Vert ^2, \end{aligned}$$(4.2)
and we consider the constrained convex minimization problem
$$\begin{aligned} \min _{x\in C} g(x) = \min _{x\in C}\frac{1}{2}\Vert Bx - P_QBx\Vert ^2. \end{aligned}$$(4.3)
Then \(x^*\) solves the SFP (4.1) if and only if \(x^*\) solves the minimization problem (4.3). In [35], the following CQ algorithm was introduced to solve the SFP:
$$\begin{aligned} x_{n+1} = P_C(x_n - \lambda B^*(I-P_Q)Bx_n), \quad n\ge 1, \end{aligned}$$(4.4)
where \(0< \lambda < \frac{2}{{\Vert B\Vert }^2}\) and \(B^*\) is the adjoint of B. It was proved that the sequence generated by (4.4) converges weakly to a solution of the SFP.
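To make the CQ iteration (4.4) concrete, the sketch below runs it on illustrative data of our own (not from the paper): C is the nonnegative orthant in \(\mathbb {R}^2\), Q is a closed ball, and B is a diagonal matrix, assuming numpy:

```python
import numpy as np

# CQ algorithm (4.4) sketch: C = nonnegative orthant in R^2,
# Q = closed ball of radius r about d in R^2, B a diagonal operator.
B = np.array([[2.0, 0.0],
              [0.0, 1.0]])
d, r = np.array([3.0, 0.0]), 1.0

P_C = lambda x: np.maximum(x, 0.0)
def P_Q(y):
    gap = np.linalg.norm(y - d)
    return y if gap <= r else d + (r / gap) * (y - d)

lam = 1.0 / np.linalg.norm(B, 2) ** 2       # lam in (0, 2/||B||^2)
x = np.array([5.0, 5.0])
for _ in range(500):
    Bx = B @ x
    x = P_C(x - lam * B.T @ (Bx - P_Q(Bx)))

# x solves the SFP iff x lies in C and Bx - P_Q(Bx) = 0
assert np.all(x >= 0.0)
assert np.linalg.norm(B @ x - P_Q(B @ x)) < 1e-6
```

In infinite dimensions only weak convergence of (4.4) is guaranteed, which is precisely the motivation for the strongly convergent modification in Theorem 4.1 below.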
We now state the following Theorem as an application of Theorem 3.1 to solve the SFP (4.1) and fixed point problem for \(\psi \)-demicontractive mapping.
Theorem 4.1
Let C and Q be nonempty, closed and convex subsets of real Hilbert spaces \(H_1\) and \(H_2\), respectively, and \(B:H_1 \rightarrow H_2\) be a bounded linear operator. Let f be a contraction with constant \(k\in (0, 1)\) and T be a \(\psi \)-demicontractive mapping with \(\psi \in [0, 1)\). Let \(g(x)=\frac{1}{2}\Vert Bx - P_QBx\Vert ^2\) and let \(\Upsilon \) denote the solution set of problem (4.1) such that \(\Upsilon \cap F(T) \ne \emptyset .\) Let the sequence \(\{x_n\}\) be generated for fixed \(x_1 \in C\) by
where \(T_\kappa = \kappa I + (1-\kappa )T, ~~\kappa \in [\psi , 1)\) such that T is demiclosed at 0, \(\{\alpha _n\},\)\(\{t_n\}\) and \(\{\beta _n\}\) are sequences in (0, 1) and \(\{\lambda _n\} \) is a sequence in \((0,\frac{2}{\Vert B\Vert ^2})\) satisfying the following conditions:
-
(i)
\(\lim _{n\rightarrow \infty } \alpha _n = 0,\)\(\sum _{n=0}^\infty \alpha _n = \infty ;\)
-
(ii)
\(0< \liminf _{n\rightarrow \infty }\lambda _n \le \limsup _{n\rightarrow \infty }\lambda _n < \frac{2}{\Vert B\Vert ^2};\)
-
(iii)
\(0< \liminf _{n\rightarrow \infty }\beta _n \le \limsup _{n\rightarrow \infty }\beta _n < 1;\)
-
(iv)
\(0< \liminf _{n\rightarrow \infty }t_n \le \limsup _{n\rightarrow \infty }t_n < 1.\)
Then \(\{x_n\}\) converges strongly to \(z\in F(T)\cap \Upsilon \), where \(z =P_{F(T)\cap \Upsilon }f(z).\)
Proof
By the definition of the proximity function, we have that \(\nabla g=B^*(I-P_Q)B\) and \(\nabla g\) is \(||B||^2\)-Lipschitz continuous. Hence, by setting \(\nabla g=B^*(I-P_Q)B\) in Algorithm (3.1), we obtain the desired result. \(\square \)
4.2 Optimal control problem
Let \(L_2([0, \alpha ], \mathbb {R}^m)\) be the Hilbert space of square integrable and measurable vector functions u from \([0, \alpha ]\) into \(\mathbb {R}^m\), endowed with the inner product
$$\begin{aligned} \langle u, v\rangle = \int _0^\alpha \langle u(t), v(t)\rangle \,dt \end{aligned}$$
and norm
$$\begin{aligned} \Vert u\Vert = \sqrt{\langle u, u\rangle }. \end{aligned}$$
Now, consider the following optimal control problem:
where V is the set of admissible controls in the form of an m-dimensional box and consists of piecewise continuous functions:
We assume that such a control exists. The terminal objective has the form:
where \(\Phi \) is a convex and differentiable function (see [36]). If the trajectory \(x(t)\in L_2([0, \alpha ])\) satisfies constraints in the form of a system of linear differential equations:
where \(D(t)\in \mathbb {R}^{m\times n},~B(t)\in \mathbb {R}^{n\times m}\) are given continuous matrices for every \(t\in [0, \alpha ]\). Then, by the Pontryagin maximum principle (see [36]), there exists a function \(p^*\in L_2([0, \alpha ])\) such that \((x^*, p^*, u^*)\) solves for a.e. \(t\in [0, \alpha ],\) the following system:
where \(N_V(u)\) is the normal cone to V at u defined by
$$\begin{aligned} N_V(u) = \{w \in L_2([0, \alpha ], \mathbb {R}^m): \langle w, v-u\rangle \le 0 \quad \forall v\in V\}. \end{aligned}$$
Letting \(Gu(t):=B(t)^Tp(t)\), we have that Gu is the gradient of the objective function g (see [37, 38]). Moreover, (4.9) can be rewritten as
But we know that \(u^*\) solves (4.10) if and only if \(u^*=P_V(I-\lambda G)u^*\), for any \(\lambda >0\). Therefore, by setting \(\nabla g=G\) in Theorem 3.1, we can apply Theorem 3.1 to solve (4.9).
5 Numerical examples
In this section, we present two numerical examples of our algorithm in real Euclidean space of dimension 4 and in an infinite dimensional Hilbert space, to show its efficiency and advantage.
Throughout this section, we shall take \(\alpha _n=\frac{1}{3n+1},~\beta _n=\frac{n+1}{3n},~t_n=\frac{2n+3}{5n+1}\) and \(\lambda _n=\frac{n}{25n+3}\).
Example 5.1
Here, we present a numerical example in \(\mathbb {R}^4\) to illustrate the performance of our algorithm. Let \(H_1 = H_2 = \mathbb {R}^4\) and \(\nabla g(x)=B^*(I-P_Q)Bx\), where \(Bx=(3x_1+x_2+x_3-x_4, -2x_1-x_2-3x_3+2x_4, -4x_1-x_2+5x_3-2x_4, x_1-x_2+x_3-x_4),~ Q=\{x\in \mathbb {R}^4:\langle w, x\rangle =b\}\), \(w=(-1, 2, 4, 7)^T,~b=2, ~P_Q (x)=\frac{b-\langle w, x\rangle }{||w||^2}w+x\). Since B is a bounded linear operator and \(P_Q\) is the metric projection onto Q, \(\nabla g\) is L-Lipschitz continuous with \(L=||B||^2=50\). Let \(C=\{x\in \mathbb {R}^4:\langle y, x\rangle \ge a\}\), \(y=(2, -5, -7, 1)^T,~a=3, ~P_C (x)= \max \left\{ 0, \frac{a-\langle y, x\rangle }{||y||^2}\right\} y+x\). Define \(T(x)=\frac{-3}{2} x\) and \(f(x)=\frac{1}{3}x\); then T is a \(\psi \)-demicontractive mapping with \(\psi =\frac{1}{5}\) and f is a contraction. Thus, we can take \(\kappa =\frac{1}{2}\), so that \(T_\kappa (x)=\frac{-1}{4}x\). Hence, Algorithm (4.5) becomes
We consider the following cases for Example 5.1.
Case I Take \(x_1=(-1, 2, 1, 0.5)^T\)
Case II Take \(x_1=(-8, 2, 7, 1)^T\)
Case III Take \(x_1=(1, 7, -5, 3)^T\)
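For reference, the two projections appearing in Example 5.1 can be written in numpy using the standard closed forms for projection onto a hyperplane and onto a half-space (a sketch of the projections only; the full iteration is omitted):

```python
import numpy as np

# Data of Example 5.1: Q is the hyperplane <w, x> = b, C is the half-space
# <y, x> >= a; the closed-form projections are implemented below.
w, b = np.array([-1.0, 2.0, 4.0, 7.0]), 2.0
y, a = np.array([2.0, -5.0, -7.0, 1.0]), 3.0

def P_Q(x):                        # hyperplane {x : <w, x> = b}
    return x + ((b - np.dot(w, x)) / np.dot(w, w)) * w

def P_C(x):                        # half-space {x : <y, x> >= a}
    return x + max(0.0, (a - np.dot(y, x)) / np.dot(y, y)) * y

x1 = np.array([-1.0, 2.0, 1.0, 0.5])        # Case I initial point
assert abs(np.dot(w, P_Q(x1)) - b) < 1e-10  # P_Q(x1) lies on the hyperplane
assert np.dot(y, P_C(x1)) >= a - 1e-10      # P_C(x1) lies in the half-space
```

Both projections are exact one-step formulas, so each iteration of the algorithm is cheap, in line with Remark 3.7 (v).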
Example 5.2
We now give an example in an infinite dimensional Hilbert space to further show the efficiency and advantage of our results. Let \(H_1=H_2 = L_2([0, 2\pi ])\) be endowed with the inner product
$$\begin{aligned} \langle x, y\rangle = \int _0^{2\pi } x(t)y(t)\,dt \end{aligned}$$
and norm
$$\begin{aligned} \Vert x\Vert _{L_2} = \left( \int _0^{2\pi } |x(t)|^2\,dt\right) ^{\frac{1}{2}}. \end{aligned}$$
Let \(C = \{x \in L_2([0, 2\pi ]): \langle y, x\rangle \le a\},\) where \(y = e^{2t}\) and \(a = 3.\) Then,
$$\begin{aligned} P_C(x) = {\left\{ \begin{array}{ll} x - \frac{\langle y, x\rangle - a}{\Vert y\Vert _{L_2}^2}y, &{} \text {if } \langle y, x\rangle > a,\\ x, &{} \text {if } \langle y, x\rangle \le a. \end{array}\right. } \end{aligned}$$
Again, let \(Q=\{x\in L_2([0, 2\pi ]): \Vert x-d\Vert _{L_2}\le r\}\), where \(d = \sin (t)\) and \(r = 16.\) Then,
$$\begin{aligned} P_Q(x)=d+\frac{r}{\max \{r, \Vert x-d\Vert _{L_2}\}}(x-d). \end{aligned}$$
Now, let \(B, f, T: L_2([0, 2\pi ]) \rightarrow L_2([0, 2\pi ])\) be defined by \(Bx(t) = x(t),~~fx(t) = \frac{x(t)}{3}\) and \(Tx(t) = -\frac{5}{2}x(t)\). Then, B is a bounded linear operator with adjoint \(B^*x(t) = x(t)\) and \(\Vert B\Vert =1\), f is a contraction with constant \(\frac{1}{3}\), and T is \(\psi \)-demicontractive with \(\psi =\frac{21}{49}=\frac{3}{7}\).
We now consider the following cases for Example 5.2 (Figs. 1, 2).
Case 1 Take \(x_1(t)=t^3\).
Case 2 Take \(x_1(t)= \sin t\).
Case 3 Take \(x_1(t)= \cos t\).
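The demicontractivity constant \(\psi =\frac{21}{49}\) and the ball projection \(P_Q\) of Example 5.2 can be verified numerically on a discretized grid. The sketch below assumes a uniform-grid left-endpoint quadrature for the \(L_2\) integrals; the grid size M is an arbitrary choice. For this T the demicontractive inequality in fact holds with equality, since \(\left( \frac{5}{2}\right) ^2 = 1 + \frac{3}{7}\left( \frac{7}{2}\right) ^2\).

```python
import numpy as np

# Uniform-grid discretization of L_2([0, 2*pi]); integrals become weighted sums.
M = 10_000
tt = np.linspace(0.0, 2.0 * np.pi, M, endpoint=False)
h = 2.0 * np.pi / M

def inner(u, v): return h * np.dot(u, v)      # <u, v> = integral of u(t) v(t) dt
def norm(u):     return np.sqrt(inner(u, u))  # L_2 norm

T = lambda u: -2.5 * u     # T x(t) = -(5/2) x(t), with F(T) = {0}
psi = 21.0 / 49.0          # = 3/7

x = tt ** 3                # Case 1 initial point, x_1(t) = t^3
p = np.zeros(M)            # the unique fixed point of T
lhs = norm(T(x) - p) ** 2
rhs = norm(x - p) ** 2 + psi * norm(x - T(x)) ** 2
print(lhs, rhs)            # equal up to rounding

# Projection onto the ball Q = {x : ||x - d|| <= r} (standard formula).
d, r = np.sin(tt), 16.0
def P_Q(u):
    dist = norm(u - d)
    return u if dist <= r else d + (r / dist) * (u - d)
```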
Remark 5.3
Using Examples 5.1 and 5.2, we compare our algorithm with Algorithm (1.11) of Cai and Shehu [14], considering in each example the different initial points listed above. As seen from the graphs below, our viscosity-type algorithm converges faster than the Halpern-type algorithm studied by Cai and Shehu [14]. This shows that our algorithm works well and has competitive advantages over the algorithm of Cai and Shehu [14].
References
Goebel, K., Reich, S.: Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings. Marcel Dekker, New York (1984)
Bertsekas, D.P., Gafni, E.M.: Projection methods for variational inequalities with applications to the traffic assignment problem. Math. Program. Stud. 17, 139–159 (1982)
Han, D., Lo, H.K.: Solving non-additive traffic assignment problems: a descent method for co-coercive variational inequalities. Eur. J. Oper. Res. 159, 529–544 (2004)
Aremu, K.O., Izuchukwu, C., Ugwunnadi, G.C., Mewomo, O.T.: On the proximal point algorithm and demimetric mappings in CAT(0) spaces. Demonstr. Math. 51, 277–294 (2018)
Eslamian, M.: General algorithms for split common fixed point problem of demicontractive mappings. Optimization 65(2), 443–465 (2016)
Maruster, S., Popirlan, C.: On the Mann-type iteration and the convex feasibility problem. J. Comput. Appl. Math. 212(2), 390–396 (2008)
Yao, Y., Liou, Y.C., Postolache, M.: Self-adaptive algorithms for the split problem of demicontractive operators. Optimization (2017). https://doi.org/10.1080/02331934.2017.1390747
Moudafi, A.: The split common fixed point for demicontractive mappings. Inverse Probl. 26, 055007 (2010)
Yu, Y., Sheng, D.: On the strong convergence of an algorithm about firmly pseudo-demicontractive mappings for the split common fixed-point problem. J. Appl. Math. Article ID 256930 (2012)
Bruck, R.E., Reich, S.: Nonexpansive projections and resolvents of accretive operators in Banach spaces. Houston J. Math. 3, 459–470 (1977)
Izuchukwu, C., Aremu, K.O., Mebawondu, A.A., Mewomo, O.T.: A viscosity iterative technique for equilibrium and fixed point problems in a Hadamard space. Appl. Gen. Topol. 20(1), 193–210 (2019)
Baillon, J.B., Bruck, R.E., Reich, S.: On the asymptotic behavior of nonexpansive mappings and semigroups in Banach spaces. Houston J. Math. 4, 1–9 (1978)
Xu, H.K.: Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 150(2), 360–378 (2011)
Cai, G., Shehu, Y.: An iterative algorithm for fixed point problem and convex minimization problem with applications. Fixed Point Theory Appl. 2015, 7 (2015)
Bauschke, H.H., Combettes, P.L.: A weak-to-strong convergence principle for Fejer-monotone methods in Hilbert spaces. Math. Oper. Res. 26, 248–264 (2001)
Goebel, K., Kirk, W.A.: Topics in Metric Fixed Point Theory. Cambridge Studies in Advanced Mathematics, vol. 28. Cambridge University Press, Cambridge (1990)
Zhou, H.Y.: Convergence theorems of fixed points for strict pseudo-contractions in Hilbert spaces. Nonlinear Anal. 69(2), 456–462 (2008)
Combettes, P.L.: Solving monotone inclusion via compositions of nonexpansive averaged operators. Optimization 53, 475–504 (2004)
Maingé, P.E., Maruster, S.: Convergence in norm of modified Krasnoselski–Mann iterations for fixed points of demicontractive mappings. Set-Valued Anal. 15, 67–79 (2007)
Reich, S., Zalas, R.: A modular string averaging procedure for solving the common fixed point problem for quasi-nonexpansive mappings in Hilbert space. Numer. Algorithms 72, 297–323 (2016)
Song, Y., Liu, X.: Convergence comparison of several iteration algorithms for the common fixed point problems. Fixed Point Theory Appl. (2009). https://doi.org/10.1155/2009/824374. Article ID 824374
Izuchukwu, C., Mebawondu, A.A., Aremu, K.O., Abass, H.A., Mewomo, O.T.: Viscosity iterative techniques for approximating a common zero of monotone operators in an Hadamard space. Rend. Circ. Mat. Palermo, II. Ser. 1–21 (2019). https://doi.org/10.1007/s12215-019-00415-2
Censor, Y., Elfving, T.: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8, 221–239 (1994)
Byrne, C.: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 20, 103–120 (2004)
Jolaoso, L.O., Ogbuisi, F.U., Mewomo, O.T.: An iterative method for solving minimization, variational inequality and fixed point problems in reflexive Banach spaces. Adv. Pure Appl. Math. 9(3), 167–184 (2018)
Jolaoso, L.O., Oyewole, K.O., Okeke, C.C., Mewomo, O.T.: A unified algorithm for solving split generalized mixed equilibrium problem and fixed point of nonspreading mapping in Hilbert space. Demonstr. Math. 51, 211–232 (2018)
Masad, E., Reich, S.: A note on the multiple-set split convex feasibility problem in Hilbert space. J. Nonlinear Convex Anal. 8, 367–371 (2007)
Lopez, G., Martin-Marquez, V., Wang, F., Xu, H.K.: Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 28(8), 085004 (2012)
Shehu, Y., Mewomo, O.T., Ogbuisi, F.U.: Further investigation into approximation of a common solution of fixed point problems and split feasibility problems. Acta. Math. Sci. Ser. B Engl. Ed. 36(3), 913–930 (2016)
Shehu, Y., Mewomo, O.T.: Further investigation into split common fixed point problem for demicontractive operators. Acta Math. Sin. (Engl. Ser.) 32(11), 1357–1376 (2016)
Okeke, C.C., Bello, A.U., Izuchukwu, C., Mewomo, O.T.: Split equality for monotone inclusion problem and fixed point problem in real Banach spaces. Aust. J. Math. Anal. Appl. 14(2), 1–20 (2017)
Okeke, C.C., Mewomo, O.T.: On split equilibrium problem, variational inequality problem and fixed point problem for multi-valued mappings. Ann. Acad. Rom. Sci. Ser. Math. Appl. 9(2), 255–280 (2017)
Okeke, C.C., Okpala, M.E., Mewomo, O.T.: Common solution of generalized mixed equilibrium problem and Bregman strongly nonexpansive mapping in reflexive Banach spaces. Adv. Nonlinear Var. Inequal. 21(1), 1–16 (2018)
Taiwo, A., Jolaoso, L.O., Mewomo, O.T.: A modified Halpern algorithm for approximating a common solution of split equality convex minimization problem and fixed point problem in uniformly convex Banach spaces. Comput. Appl. Math. 38(2), 77 (2019)
Attouch, H., Czarnecki, M.O.: Asymptotic control and stabilization of nonlinear oscillators with non-isolated equilibria. J. Differ. Equ. 179(1), 278–310 (2002)
Vuong, P.T., Shehu, Y.: Convergence of an extragradient-type method for variational inequality with applications to optimal control problems. Numer. Algorithms (2018). https://doi.org/10.1007/s11075-018-0547-6
Khoroshilova, E.V.: Extragradient-type method for optimal control problem with linear constraints and convex objective function. Optim. Lett. 7, 1193–1214 (2013)
Nikol’skii, M.S.: Convergence of the gradient projection method in optimal control problems. Comput. Math. Model. 18, 148–156 (2007)
Acknowledgements
The first author acknowledges with thanks the bursary and financial support from the Department of Science and Technology and National Research Foundation, Republic of South Africa, Center of Excellence in Mathematical and Statistical Sciences (DST-NRF CoE-MaSS) Doctoral Bursary. Opinions expressed and conclusions arrived at are those of the authors and are not necessarily to be attributed to the CoE-MaSS. The authors would like to thank the anonymous referees for carefully reading the paper and providing useful and interesting comments that have greatly improved it.
Cite this article
Okeke, C.C., Izuchukwu, C. & Mewomo, O.T. Strong convergence results for convex minimization and monotone variational inclusion problems in Hilbert space. Rend. Circ. Mat. Palermo, II. Ser 69, 675–693 (2020). https://doi.org/10.1007/s12215-019-00427-y
Keywords
- Minimization problem
- Monotone inclusion problem
- Fixed point problem
- Inverse strongly monotone
- Maximal monotone operators