Abstract
In this paper, we study the common null point problem in Banach spaces. Then, using the shrinking projection method and the ε-enlargement of a maximal monotone operator, we prove two strong convergence theorems with nonsummable errors for solving this problem.
1 Introduction
Let H be a real Hilbert space and let f : H→(−∞, ∞] be a proper, lower semicontinuous and convex function. In order to find a minimum point of f, Martinet [19] proposed the iterative method as follows: x1∈H and
for all n ≥ 1. He proved that the sequence {xn} converges weakly to a minimum point of f. Note that the above sequence {xn} can be rewritten in the form
We know that the subdifferential operator ∂f of f is a maximal monotone operator (see [27]). So, the problem of finding a null point of a maximal monotone operator plays an important role in optimization. One popular method for solving the equation 0∈A(x), where A is a maximal monotone operator on a Hilbert space H, is the proximal point algorithm. The proximal point algorithm generates, for any starting point x0 = x ∈ H, a sequence {xn} by the rule
where {cn} is a sequence of positive real numbers and \({J}_{c_{n}}^{A}=(I+c_{n}A)^{-1}\) is the resolvent of A. This algorithm has been studied extensively: some authors dealt with the weak convergence of the sequence {xn} generated by (1.1), while others proved strong convergence theorems by imposing additional assumptions on A. Moreover, Rockafellar [29] gave a more practical method, which is an inexact variant of the algorithm:
where {en} is regarded as an error sequence and {cn} is a sequence of positive regularization parameters. Note that the algorithm (1.2) can be rewritten as
This method is called the inexact proximal point algorithm. It was shown in Rockafellar [29] that if en→0 fast enough that \({\sum }_{n = 1}^{{\infty }} \|e_{n}\|<{\infty } \), then \({x}_{n}\rightharpoonup z{\in } H\) with 0∈Az.
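To make the scheme concrete, here is a small numerical sketch of the inexact iteration in a toy Hilbert-space setting; the operator A(x) = x − b and the parameter choices are our own illustrative assumptions, not taken from the paper.

```python
# Sketch of the inexact proximal point iteration x_{n+1} = J_{c_n} x_n + e_n
# for the toy maximal monotone operator A(x) = x - b on R, so A^{-1}0 = {b}.
# Here J_c = (I + cA)^{-1}, which for this A is J_c(x) = (x + c*b) / (1 + c).
# The errors e_n = 2^{-n} are summable, as Rockafellar's theorem requires.

def inexact_proximal_point(b, x0, steps=60):
    x = x0
    for n in range(1, steps + 1):
        c_n = 1.0               # positive regularization parameter
        e_n = 0.5 ** n          # summable error: sum_n |e_n| < infinity
        x = (x + c_n * b) / (1.0 + c_n) + e_n
    return x

print(inexact_proximal_point(b=3.0, x0=10.0))  # approaches the null point 3.0
```

Each exact step halves the distance to b, and the summable errors contribute only a vanishing perturbation, matching the weak (here trivially strong) convergence statement.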
In [10], Burachik et al. used the enlargement Aε to devise an approximate generalized proximal point algorithm. The exact version of this algorithm can be stated as follows: having xn, the next element xn+ 1 is the solution of
where f is a suitable regularization function. Note that if \(f(x)=\frac {1}{2}\|x\|^{2}\), then the above algorithm becomes the classical proximal point algorithm. Approximate solutions of (1.4) are treated in [10] via Aε. Specifically, an approximate solution of (1.4) is regarded as an exact solution of
for an appropriate value of εn. Note that if \(f(x)=\frac {1}{2}\|x\|^{2}\), the above relation is equivalent to the problem of finding elements xn+ 1∈H and \(v_{n + 1}{\in } A^{\varepsilon _{n}}({x}_{n + 1})\) with εn ≥ 0 such that
They proved that if \({\sum }_{n = 1}^{{\infty }}\varepsilon _{n}<{\infty }\), then the sequence {xn} converges weakly to a null point of A.
In [25], Solodov and Svaiter proposed a new criterion for the approximate solution of subproblems as follows: two elements yn and vn are admissible if
and the error en satisfies
where σ is a real number in [0,1). The next iterate xn+ 1 is then obtained by projecting xn onto the hyperplane
By combining the ideas of [10, 25], Solodov and Svaiter [24] proposed an even simpler method, in which no projection is performed. An approximate solution is regarded as a pair yn, vn such that
where εn, en are “relatively small” in comparison with ∥yn − xn∥, and the next iterate xn+ 1 is defined by
Rockafellar [29] posed an open question of whether the sequence generated by (1.1) converges strongly or not. In 1991, Güler [15] gave an example showing that Rockafellar’s proximal point algorithm does not converge strongly. An example of Bauschke, Matoušková, and Reich [7] also showed that the proximal algorithm converges only weakly but not in norm. In 2000, Solodov and Svaiter [26] proposed the following algorithm (hybrid projection method): choose any x0∈H and σ ∈ [0,1). At iteration n, having xn, choose μn > 0 and find (yn, vn), an inexact solution of
with tolerance σ. Define
and
Take
They proved that if the sequence of the regularization parameters μn is bounded from above, then {xn} converges strongly to x∗∈A− 10. Moreover, based on the important fact that Cn and Qn in the above algorithm are two halfspaces, they showed that
where (λ1, λ2) is the solution of the linear system of two equations with two unknowns:
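In a Hilbert space, this observation gives a cheap update: projecting x0 onto the intersection of two halfspaces needs at most one 2 × 2 linear solve. The following sketch (in R^m, with hypothetical halfspace data, assuming a nonempty intersection and non-parallel normals) illustrates the case analysis behind that linear system.

```python
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def project_two_halfspaces(x0, a1, b1, a2, b2):
    """Project x0 onto {z: <a1,z> <= b1} ∩ {z: <a2,z> <= b2}.
    Assumes the intersection is nonempty and a1, a2 are not parallel."""
    g1, g2 = dot(a1, x0) - b1, dot(a2, x0) - b2
    if g1 <= 0 and g2 <= 0:                  # x0 is already feasible
        return list(x0)
    # try making a single constraint active
    for a, g, ao, bo in ((a1, g1, a2, b2), (a2, g2, a1, b1)):
        lam = g / dot(a, a)
        if lam > 0:
            x = [xi - lam * ai for xi, ai in zip(x0, a)]
            if dot(ao, x) <= bo + 1e-12:
                return x
    # both constraints active: solve the 2x2 Gram system for (lambda1, lambda2)
    g11, g12, g22 = dot(a1, a1), dot(a1, a2), dot(a2, a2)
    det = g11 * g22 - g12 * g12
    lam1 = (g22 * g1 - g12 * g2) / det
    lam2 = (g11 * g2 - g12 * g1) / det
    return [xi - lam1 * u - lam2 * v for xi, u, v in zip(x0, a1, a2)]

# projecting (2, 2) onto the negative quadrant gives the origin:
print(project_two_halfspaces([2.0, 2.0], [1.0, 0.0], 0.0, [0.0, 1.0], 0.0))
```

The two-unknown system in the last branch is exactly the kind of (λ1, λ2) system mentioned above.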
In 2003, Bauschke et al. introduced a new algorithm (see [6, Algorithm 4.1]) for finding a common fixed point of a family of operators \((T_{i})_{i\in \mathbb I}\) in the \(\mathcal B\)-class (see [5]). Let E be a real Banach space and f : E→(−∞, ∞] be a lower semicontinuous convex function which is Gâteaux differentiable on int dom f ≠ ∅ and Legendre, i.e., it satisfies the following two properties:
- (i) ∂f is both locally bounded and single-valued on its domain;
- (ii) (∂f)− 1 is locally bounded on its domain and f is strictly convex on every bounded set of dom ∂f.
The Bregman distance associated with f is the function
and the D-projector onto a set C ⊂ E is the operator
It is easy to see that if E is a real Hilbert space, f(x) = ∥x∥2/2 for all x ∈ E, and C is a nonempty closed convex subset of E, then the D-projector onto C coincides with the metric projection PC from E onto C.
They gave an application of [6, Algorithm 4.1] for finding a common zero of a family of maximal monotone operators \((A_{i})_{i\in \mathbb I}\) in a Banach space E as follows: for every \(n{\in } \mathbb {N}\), take i(n)∈I, γn∈(0, ∞) and set xn+ 1 = Q(x0, xn,(∇f + γnAi(n))− 1 ∘∇f(xn)), where
They proved that if ∇f is uniformly continuous on bounded subsets of E, if for every i ∈ I and every strictly increasing sequence {Pn} with i(Pn) ≡ i one has \(\inf _{n}\gamma _{{P}_{n}}>0\), and if the following conditions hold:
- (i) The index control mapping \(\text {i}:\ \mathbb {N}\longrightarrow I\) satisfies
$$(\forall i{\in} I) (\exists M_{i}>0) (\forall n\in\mathbb{N})\ i\in\{\text{i}(n),\ldots, \text{i}(n+M_{i}-1)\}.$$
- (ii) For every sequence {yn} in int dom f and every bounded sequence {zn} in int dom f, one has
$$D({y}_{n},z_{n})\to 0\Rightarrow {y}_{n}-z_{n}\to 0,$$
then xn → PSx0, where \(S=\overline {\text {dom}}f\cap ({\cap }_{i{\in } I}A_{i}^{-1}0)\).
Motivated by the result of Solodov and Svaiter, Takahashi et al. [32] introduced the following iterative method in order to find a fixed point of a nonexpansive mapping T on a closed convex subset C of H:
and they proved that the sequence {xn} converges strongly to PF(T)x0 when {αn}⊂ [0, a) with a ∈ [0,1). Moreover, they also gave a similar iterative method for finding a zero of a maximal monotone operator in the following form
They showed that if {αn}⊂ [0, a), with a ∈ [0,1) and cn →∞, then the sequence {xn} generated by (1.5) converges strongly to \({P}_{A^{-1} 0}{x}_{0}\).
We can see that the shrinking projection method (1.5) of Takahashi et al. is more complex than the hybrid projection method of Solodov and Svaiter: in (1.5), to define xn+ 1, we have to find the projection of x0 onto the intersection of n closed convex subsets of H, whereas in the hybrid projection method we only compute the projection of x0 onto the intersection of two halfspaces. Nevertheless, many mathematicians have recently studied the shrinking projection method for solving various problems; see, for instance, Dadashi [13], Kimura [18], Qin et al. [22], Takahashi [33], Takahashi et al. [30, 34], Sean et al. [37].
In this paper, by using the shrinking projection method, we introduce two parallel iterative methods for finding a common null point of a finite family of maximal monotone operators in Banach spaces. Moreover, we give some applications of the main results to the problem of finding a common minimum point of convex functions, the convex feasibility problem, and a system of variational inequalities. In Section 5, a numerical example is given to illustrate the effectiveness of the proposed algorithms.
2 Preliminaries
Let E be a real Banach space with norm ∥⋅∥ and let E∗ be its dual. The value of f ∈ E∗ at x ∈ E will be denoted by 〈x, f〉. When {xn} is a sequence in E, then xn → x (resp. \({x}_{n}\rightharpoonup x,\ {x}_{n}\overset {*}{\rightharpoonup } x\)) will denote strong (resp. weak, weak∗) convergence of the sequence {xn} to x. Let JE denote the normalized duality mapping from E into \(2^{E^{*}}\) given by
We always use SE to denote the unit sphere SE = {x ∈ E : ∥x∥ = 1}. A Banach space E is said to be strictly convex if, for all x, y ∈ SE with x ≠ y and all t ∈ (0,1), ∥tx + (1 − t)y∥ < 1.
A Banach space E is said to be uniformly convex if for any ε ∈ (0,2] there exists δ = δ(ε) > 0 such that the inequalities ∥x∥≤ 1, ∥y∥≤ 1 and ∥x − y∥≥ ε imply ∥(x + y)/2∥≤ 1 − δ.
Recall that a Banach space E is said to have the Kadec-Klee property if for every sequence {xn}⊂ E such that ∥xn∥→∥x∥ and \({x}_{n}\rightharpoonup x\) as n →∞, we have xn → x as n →∞. It is well known that every uniformly convex Banach space has the Kadec-Klee property (see [12, 23]).
A Banach space E is said to be smooth provided the limit \(\lim _{t\to 0}\frac {\|x+ty\|-\|x\|}{t}\)
exists for each x and y in SE. In this case, the norm of E is said to be Gâteaux differentiable. It is said to be uniformly Gâteaux differentiable if for each y ∈ SE, this limit is attained uniformly for x ∈ SE.
Let E be a reflexive Banach space; we know that E is uniformly convex if and only if E∗ is uniformly smooth [1, 14].
We have the following properties of the normalized duality mapping JE (see [1, 12, 14]):
- i) E is reflexive if and only if JE is surjective.
- ii) If E∗ is strictly convex, then JE is single-valued.
- iii) If E is a smooth, strictly convex and reflexive Banach space, then JE is a single-valued bijection.
- iv) If E∗ is uniformly convex, then JE is uniformly continuous on each bounded set of E.
We know that if E is a smooth, strictly convex and reflexive Banach space and C is a nonempty, closed, and convex subset of E, then for each x ∈ E there exists a unique z ∈ C such that
The mapping PC : E→C defined by PCx = z is called the metric projection from E onto C, and we denote d(x, C) = ∥x − PCx∥.
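In a Hilbert space (here R^m with the Euclidean norm, an illustrative special case of our own choosing) the metric projection and the distance d(x, C) have closed forms for simple sets such as closed balls:

```python
import math

def project_ball(x, center, radius):
    """Metric projection P_C onto the closed ball C = {z : ||z - center|| <= radius}."""
    d = math.dist(x, center)
    if d <= radius:
        return list(x)          # x is already in C, so P_C x = x
    # otherwise move radially to the boundary of the ball
    return [c + radius * (xi - c) / d for xi, c in zip(x, center)]

x = [3.0, 4.0]
p = project_ball(x, [0.0, 0.0], 1.0)
print(p)                # [0.6, 0.8], the unique nearest point of C
print(math.dist(x, p))  # d(x, C) = 4.0
```

Uniqueness of the nearest point here reflects the strict convexity of the Euclidean norm.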
Let \(A:\ E\longrightarrow 2^{E^{*}}\) be an operator. The effective domain of A is denoted by D(A), that is, D(A) = {x ∈ E : Ax ≠ ∅}. Recall that A is called a monotone operator if 〈x − y, u − v〉≥ 0 for all x, y ∈ D(A) and for all u ∈ Ax, v ∈ Ay. A monotone operator A on E is called maximal monotone if its graph is not properly contained in the graph of any other monotone operator on E. We know that if A is a maximal monotone operator on E and E is a uniformly convex and smooth Banach space, then R(JE + rA) = E∗ for all r > 0, where R(JE + rA) is the range of JE + rA (see [9, 28]). For all x ∈ E and r > 0, there exists a unique xr∈E such that
We define Jr by xr = Jrx and Jr is called the metric resolvent of A.
Similarly, for each x ∈ E and r > 0 there exists a unique xr∈E such that JExr + rAxr ∋ JEx; hence, we can define a mapping Jr : E→E by Jrx = xr, and this Jr is called the generalized resolvent of A.
The set of null points of A is defined by A− 10 = {z ∈ E : 0∈Az}, and we know that A− 10 is a closed and convex subset of E (see [31]).
Let \(A:\ E\longrightarrow 2^{E^{*}}\) be a maximal monotone operator. In [11], for each ε ≥ 0, Burachik and Svaiter defined Aε(x), an ε-enlargement of A, as follows
It is easy to see that A0x = Ax and that if 0 ≤ ε1 ≤ ε2, then \(A^{\varepsilon _{1}}x\subseteq A^{\varepsilon _{2}}x\) for any x ∈ E. The use of elements of Aε instead of A allows an extra degree of freedom, which is very useful in various applications.
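As a concrete illustration (a toy computation of our own, not from the paper): for the identity operator A = I on R, minimizing (x − y)(u − y) over y gives −((u − x)/2)², so Aε(x) = [x − 2√ε, x + 2√ε]. The sketch below tests the defining inequality of the enlargement on a grid.

```python
def in_enlargement(u, x, eps, ys):
    """Test the defining inequality <x - y, u - v> >= -eps over sample
    points y, with v = A(y) = y for the identity operator A = I on R."""
    return all((x - y) * (u - y) >= -eps - 1e-12 for y in ys)

x, eps = 1.0, 0.25
ys = [i / 100.0 - 5.0 for i in range(1001)]   # grid on [-5, 5]
# Here A^eps(x) = [x - 2*sqrt(eps), x + 2*sqrt(eps)] = [0, 2]:
print(in_enlargement(2.0, x, eps, ys))        # True  (boundary point)
print(in_enlargement(2.5, x, eps, ys))        # False (outside the enlargement)
```

So for ε > 0 the enlargement is genuinely larger than Ax = {x}, which is the extra freedom exploited by the algorithms above.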
Let {Cn} be a sequence of closed, convex, and nonempty subsets of a reflexive Banach space E. We define the subsets s-LinCn and w-LsnCn of E as follows: x ∈ s-LinCn if and only if there exists a sequence {xn}⊂ E converging strongly to x such that xn∈Cn for all n ≥ 1; x ∈ w-LsnCn if and only if there exist a subsequence \(\{C_{n_{k}}\}\) of {Cn} and a sequence {yk}⊂ E such that \({y}_{k}\rightharpoonup x\) and \({y}_{k}{\in } {C}_{n_{k}}\) for all k ≥ 1. If s-LinCn = w-LsnCn = C0, then C0 is called the limit of {Cn} in the sense of Mosco [20], and it is denoted by \(C_{0}=\text {M}-\lim _{n\to {\infty }}C_{n}\).
Remark 2.1
We know that, if {Cn} is a decreasing sequence of closed convex subsets of a reflexive Banach space E and \(C_{0}={\cap }_{n = 1}^{{\infty }} {C}_{n}\neq \emptyset \), then \(C_{0}=\text {M}-\lim _{n\to {\infty }}C_{n}\) (see [8, 17]).
Indeed, it is clear that if x ∈ C0, then x ∈ s-LinCn and x ∈ w-LsnCn, because the sequence {xn} with xn = x for all n ≥ 1 converges strongly to x. Thus, we have C0 ⊂ s-LinCn and C0 ⊂ w-LsnCn.
Now, we will show that C0 ⊇ s-LinCn and C0 ⊇ w-LsnCn. Let x ∈ s-LinCn; by the definition of s-LinCn, there exists a sequence {xn}⊂ E with xn∈Cn for all n ≥ 1 such that xn → x as n →∞. Since {Cn} is a decreasing sequence, xn+k∈Cn for all n ≥ 1 and k ≥ 0. So, letting k →∞ and using the closedness of Cn, we get that x ∈ Cn for all n ≥ 1. Thus, x ∈ C0 and hence C0 ⊇ s-LinCn. Next, let y ∈ w-LsnCn; by the definition of w-LsnCn, there exist a subsequence \(\{C_{n_{k}}\}\) of {Cn} and a sequence {yk}⊂ E such that \({y}_{k}\rightharpoonup y\) and \({y}_{k}{\in } {C}_{n_{k}}\) for all k ≥ 1. Since {Cn} is a decreasing sequence, we have
for all k ≥ 1 and p ≥ 0. Since \(C_{n_{k}}\) is closed and convex, \(C_{n_{k}}\) is weakly closed in E for all k ≥ 1. So, letting p →∞ in (2.1), we get that \(y{\in } {C}_{n_{k}}\) for all k ≥ 1. Since \(C_{k}\supseteq {C}_{n_{k}}\), y ∈ Ck for all k ≥ 1. So, y ∈ C0 and hence C0 ⊇ w-LsnCn.
Consequently, we obtain that s-LinCn = w-LsnCn = C0. Thus, \(C_{0}=\text {M}-\lim _{n\to {\infty }}C_{n}\).
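A one-dimensional illustration of this remark (with sets and a point chosen by us): for the decreasing intervals Cn = [0, 1 + 1/n] with C0 = [0, 1], the projections of a fixed point converge to its projection onto C0.

```python
def proj_interval(u, lo, hi):
    """Metric projection of u onto the closed interval [lo, hi] in R."""
    return min(max(u, lo), hi)

u = 3.0
# Decreasing sets C_n = [0, 1 + 1/n] with intersection C_0 = [0, 1]:
print([proj_interval(u, 0.0, 1.0 + 1.0 / n) for n in (1, 10, 100, 1000)])
# [2.0, 1.1, 1.01, 1.001] -> tends to P_{C_0}u = 1.0
print(proj_interval(u, 0.0, 1.0))  # 1.0
```

This is exactly the behavior that Lemma 2.4 guarantees once C0 is the Mosco limit of {Cn}.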
The following lemmas will be needed in the sequel for the proof of main theorems.
Lemma 2.2
[2, 3, 16] Let E be a smooth, strictly convex and reflexive Banach space. Let C be a nonempty closed convex subset of E and let x1∈E and z ∈ C. Then, the following conditions are equivalent:
-
i)
z = PCx1;
-
ii)
〈z − y, JE(x1 − z)〉≥ 0, ∀y ∈ C.
Lemma 2.3
[36] Let E be a Banach space, R ∈ (0, ∞) and BR = {x ∈ E : ∥x∥≤ R}. If E is uniformly convex, then there exists a continuous, strictly increasing and convex function g : [0,2R]→[0, ∞) with g(0) = 0 such that
for all x, y ∈ BR and α ∈ [0, 1].
Lemma 2.4
[35] Let E be a smooth, reflexive, and strictly convex Banach space having the Kadec-Klee property. Let {Cn} be a sequence of nonempty closed convex subsets of E. If \(C_{0}=\mathrm {M}-\lim _{n\to {\infty }}C_{n}\) exists and is nonempty, then \(\{{P}_{C_{n}}x\}\) converges strongly to \({P}_{C_{0}}x\) for each x ∈ E.
Lemma 2.5
[11] The graph of \(A^{\varepsilon } :\ \mathbb R_{+}\times E\longrightarrow 2^{E^{*}}\) is demiclosed, i.e., the conditions below hold:
i) If {xn}⊂ E converges strongly to x0, \(\{u_{n}{\in } A^{\varepsilon _{n}}{x}_{n}\}\) converges weakly to u0 in E∗ and \(\{\varepsilon _{n}\}\subset \mathbb R_{+}\) converges to ε, then u0∈Aεx0.
ii) If {xn}⊂ E converges weakly to x0, \(\{u_{n}{\in } A^{\varepsilon _{n}}{x}_{n}\}\) converges strongly to u0 in E∗ and \(\{\varepsilon _{n}\}\subset \mathbb R_{+}\) converges to ε, then u0∈Aεx0.
3 Main Results
First, we have the following lemma:
Lemma 3.1
Let E be a uniformly convex and smooth Banach space and let {Cn} be a decreasing sequence of closed and convex subsets of E such that \(C_{0}={\cap }_{n = 1}^{{\infty }} {C}_{n}\neq \emptyset \). Let \({P}_{n}={P}_{C_{n}}u\) with u ∈ E and let {xn} be the sequence in E such that
for all n ≥ 1, where {δn} is a sequence of positive real numbers. If \(\lim _{n\to {\infty }}\delta _{n}= 0\), then {xn} and {Pn} converge strongly to the same point \({P}_{0}={P}_{C_{0}}u\).
Proof
From Remark 2.1, we have \(C_{0}=\text {M}-\lim _{n\to {\infty }}C_{n}\). By Lemma 2.4, we have \({P}_{n}\to {P}_{0}={P}_{C_{0}}u\), as n →∞.
Since \({P}_{n}={P}_{C_{n}}u\), d(u, Cn) = ∥u − Pn∥. From xn∈Cn and the definition of Cn, we have
From (3.1) and the boundedness of {Pn}, the sequence {xn} is bounded. So, R = max{supn{∥xn∥},supn{∥Pn∥}} < ∞.
From the convexity of Cn, we have αPn + (1 − α)xn∈Cn for all α ∈ (0,1). Thus, from the definition of \({P}_{C_{n}}u\) and applying Lemma 2.3 on BR, we get
Combining this with (3.1), we obtain
In (3.2), letting α → 1−, we get
By the property of g and δn → 0, we have
Thus, the sequences {xn} and {Pn} converge strongly to the same point P0, as n →∞. □
Now, we have the following theorem:
Theorem 3.2
Let E be a uniformly convex and smooth Banach space and let \(A_{i}:\ E\longrightarrow 2^{E^{*}}\), i = 1,2,…, N, be maximal monotone operators of E into \(2^{E^{*}}\) such that \(S={\cap }_{i = 1}^{N}A_{i}^{-1}0\neq \emptyset \). Let {εn} and {δn} be nonnegative real sequences and let {ri, n}, i = 1,2,…, N, be positive real sequences such that mini{infn{ri, n}}≥ r > 0. For a given point u ∈ E, we define the sequence {xn} by x1 = x ∈ E, C1 = E and
i) Findyi, n∈Esuch that\({J}_{E}({y}_{i,n}-{x}_{n})+r_{i,n}A^{\varepsilon _{n}}_{i}{y}_{i,n}\ni 0,\ i = 1,2,\dots ,N\).
ii) Choose in such that \(\|{y}_{{i}_{n},n}-{x}_{n}\|={\max }_{i = 1,\dots ,N}\{\|{y}_{i,n}-{x}_{n}\|\}\), let \({y}_{n}={y}_{{i}_{n},n}\),
iii) Findxn+ 1∈{z ∈ Cn+ 1 : ∥u − z∥2 ≤ d2(u, Cn+ 1) + δn+ 1}, n = 1,2,…
If \(\lim _{n\to {\infty }}\varepsilon _{n}r_{i,n}=\lim _{n\to {\infty }}\delta _{n} = 0\) for all i = 1,2,…, N, then the sequence {xn} converges strongly to PSu, as n →∞.
Proof
First, we show that S ⊂ Cn for all n ≥ 1 by mathematical induction. Indeed, it is clear that S ⊂ C1 = E. Suppose that S ⊂ Cn for some n ≥ 1. Take v ∈ S; we have
From the definition of \(A^{\varepsilon _{n}}_{{i}_{n}}\), we get
Thus, v ∈ Cn+ 1. Since v is arbitrary in S, S ⊂ Cn+ 1. So, by induction we obtain that S ⊂ Cn for all n ≥ 1.
Moreover, Cn is a closed and convex subset of E for all n. Hence, the sequence {xn} is well defined.
Now, for each n ≥ 1, denote by \({P}_{n}={P}_{C_{n}}u\). By Lemma 3.1, we obtain that the sequences {xn} and {Pn} converge strongly to the same point \({P}_{0} ={P}_{C_{0}}u\) with \(C_{0}={\cap }_{n = 1}^{{\infty }} {C}_{n}\).
From Pn+ 1∈Cn+ 1 and the definition of Cn+ 1, we have
The above inequality is equivalent to
So, we have
This implies that
From Pn → P0, xn → P0 and \(\varepsilon _{n} r_{{i}_{n},n}\to 0\), we obtain that
By the definition of yn, we get that
This implies that yi, n → P0 for all i = 1,2,…, N, as n →∞. Furthermore, since mini{infn{ri, n}}≥ r > 0, it follows from (3.4) that
for all i = 1,2,…, N, as n →∞. So, by Lemma 2.5, we obtain \({P}_{0}{\in } A^{-1}_{i}0\) for all i = 1,2,…, N, that is, P0∈S.
Finally, we show that P0 = PSu. Indeed, let x∗ = PSu. Since S ⊂ Cn, x∗∈Cn. Thus, from \({P}_{n}={P}_{C_{n}}u\), we have
Letting n →∞, we get that ∥u − P0∥≤∥u − x∗∥. By the uniqueness of x∗, we obtain that P0 = x∗ = PSu. This completes the proof. □
Now, in the following theorem, we give another way to construct the subsets Cn.
Theorem 3.3
Let E be a uniformly convex and smooth Banach space and let \(A_{i}:\ E\longrightarrow 2^{E^{*}}\), i = 1,2,…, N, be maximal monotone operators of E into \(2^{E^{*}}\) such that \(S={\cap }_{i = 1}^{N}A_{i}^{-1}0\neq \emptyset \). Let {εn} and {δn} be nonnegative real sequences and let {ri, n}, i = 1,2,…, N, be positive real sequences such that mini{infn{ri, n}}≥ r > 0. For a given point u ∈ E, we define the sequence {xn} by x1 = x ∈ E, C1 = E and
If \(\lim _{n\to {\infty }}\varepsilon _{n}r_{i,n}=\lim _{n\to {\infty }}\delta _{n} = 0\) for all i = 1,2,…, N, then the sequence {xn} converges strongly to PSu, as n →∞.
Proof
First, we show that S ⊂ Cn for all n ≥ 1 by mathematical induction. Indeed, it is clear that S ⊂ C1 = E. Suppose that S ⊂ Cn for some n ≥ 1. Take v ∈ S; we have
From the definition of \(A^{\varepsilon _{n}}_{i}\), we get
Thus, \(v{\in } C^{i}_{n + 1}\) for all i = 1,2,…, N. So, \(v{\in } {C}_{n + 1}={\cap }_{i = 1}^{N}C^{i}_{n + 1}\). By induction, we obtain that S ⊂ Cn for all n ≥ 1.
It is clear that {Cn} is a decreasing sequence of closed and convex subsets of E with \({\cap }_{n = 1}^{{\infty }} {C}_{n}=C_{0}\supset S\neq \emptyset \).
Now, for each n, denote by \({P}_{n}={P}_{C_{n}}u\). By Lemma 3.1, the sequences {xn} and {Pn} converge strongly to the same point \({P}_{0} ={P}_{C_{0}}u\).
We have \({P}_{n + 1}{\in } {C}_{n + 1}={\cap }_{i = 1}^{N}C^{i}_{n + 1}\). Hence, \({P}_{n + 1}{\in } C^{i}_{n + 1}\) for all i = 1,2,…, N. Thus, from the definition of \(C^{i}_{n + 1}\), we have
for all i = 1,2,…, N. Thus, we get that
for all i = 1,2,…, N. From Pn → P0, xn → P0 and \(\varepsilon _{n} r_{i,n}\to 0\), we obtain that
for all i = 1,2,…, N. The rest of the proof follows the pattern of Theorem 3.2. This completes the proof. □
Remark 3.4
a) In Theorems 3.2 and 3.3, if N = 1 then the sequence {xn} is defined by: For a given point u ∈ E, we define the sequence {xn} by x1 = x ∈ E, C1 = E and
where {rn} is a positive real sequence and {εn}, {δn} are nonnegative real sequences such that infn{rn}≥ r > 0 and \(\lim _{n\to {\infty }}r_{n}\varepsilon _{n}=\lim _{n\to {\infty }}\delta _{n}= 0\).
b) In Theorem 3.3, to define the element xn+ 1, we have to find the projection of u onto the intersection of n × N half-spaces, while in Theorem 3.2 we only find the projection of u onto the intersection of n half-spaces. So, the algorithm defining xn+ 1 in Theorem 3.2 is simpler than the algorithm in Theorem 3.3. However, in both cases, we can find the element xn+ 1 as an approximate solution of the following minimization problem: find a minimum point of \(f(x)=\frac {1}{2}\|x-u\|^{2}\) over the intersection of a finite family of half-spaces Ci. In particular, if \(E=\mathbb R^{m}\), then we can find xn+ 1 easily by using the “Quadratic Programming Algorithms” package in MATLAB.
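As an alternative to a QP solver (the paper itself suggests MATLAB's quadratic programming tools), the projection of u onto an intersection of half-spaces in R^m can be computed iteratively; the sketch below uses Dykstra's algorithm, a choice of ours, which converges to the exact projection when the intersection is nonempty.

```python
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def proj_halfspace(x, a, b):
    """Projection onto the halfspace {z : <a, z> <= b}."""
    g = dot(a, x) - b
    if g <= 0:
        return list(x)
    s = g / dot(a, a)
    return [xi - s * ai for xi, ai in zip(x, a)]

def dykstra(u, halfspaces, iters=500):
    """Dykstra's algorithm: converges to P_C u for C the (assumed nonempty)
    intersection of the given halfspaces {z : <a, z> <= b}."""
    x = list(u)
    corrections = [[0.0] * len(u) for _ in halfspaces]
    for _ in range(iters):
        for i, (a, b) in enumerate(halfspaces):
            # add back the correction before projecting onto set i
            y = [xi + ci for xi, ci in zip(x, corrections[i])]
            x = proj_halfspace(y, a, b)
            corrections[i] = [yi - xi for yi, xi in zip(y, x)]
    return x

# project (2, 0) onto {x + y <= 1} ∩ {x - y <= 0}; the true answer is (1/2, 1/2)
print(dykstra([2.0, 0.0], [([1.0, 1.0], 1.0), ([1.0, -1.0], 0.0)]))
```

Unlike simple alternating projections, the correction vectors make the limit the exact nearest point, which is what the definition of xn+ 1 requires up to the tolerance δn+ 1.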
Next, we have the following corollaries:
Corollary 3.5
Let E be a uniformly convex and smooth Banach space and let \(A_{i}:\ E\longrightarrow 2^{E^{*}}\), i = 1,2,…, N, be maximal monotone operators of E into \(2^{E^{*}}\) such that \(S={\cap }_{i = 1}^{N}A_{i}^{-1}0\neq \emptyset \). Let \({J^{i}_{r}}\) be the metric resolvent of Ai for r > 0 with i = 1,2,…, N. Let {δn} be a nonnegative real sequence and let {ri, n}, i = 1,2,…, N, be positive real sequences such that mini{infn{ri, n}}≥ r > 0. For a given point u ∈ E, we define the sequence {xn} by x1 = x ∈ E, C1 = E and
If \(\lim _{n\to {\infty }}\delta _{n} = 0\), then the sequence {xn} converges strongly to PSu, as n →∞.
Proof
In (3.3) if εn = 0 for all n ≥ 1, then the elements yi, n, i = 1,2,…, N, can be rewritten in the form
The above inclusion equation is equivalent to
for all i = 1,2,…, N.
So, applying Theorems 3.2 and 3.3 with εn = 0 for all n ≥ 1, we obtain the proof of this corollary. □
Corollary 3.6
Let E be a uniformly convex and smooth Banach space and let \(A_{i}:\ E\longrightarrow 2^{E^{*}}\), i = 1,2,…, N, be maximal monotone operators of E into \(2^{E^{*}}\) such that \(S={\cap }_{i = 1}^{N}A_{i}^{-1}0\neq \emptyset \). Let {εn} be a nonnegative real sequence and let {ri, n}, i = 1,2,…, N, be positive real sequences such that mini{infn{ri, n}}≥ r > 0. For a given point u ∈ E, we define the sequence {xn} by x1 = x ∈ E, C1 = E and
If \(\lim _{n\to {\infty }}\varepsilon _{n} r_{i,n}= 0\) for all i = 1,2,…, N, then the sequence {xn} converges strongly to PSu, as n →∞.
Proof
In (3.3), if δn = 0 for all n ≥ 1, then the element xn+ 1 is defined by
that is \({x}_{n + 1}={P}_{C_{n + 1}}u\).
So, applying Theorem 3.2 with δn = 0 for all n ≥ 1, we obtain the proof of this corollary. □
Remark 3.7
If εn = δn = 0 for all n ≥ 1, then the sequence {xn} is defined as follows: for a given point u ∈ E, we define the sequence {xn} by x1 = x ∈ E, C1 = E and
Remark 3.8
In Remark 3.7, if E is a real Hilbert space and N = 1, then we obtain the result of Takahashi et al. in [32] (see [32, Theorem 4.5]). Note that in this case we do not use the condition rn →∞; so, Theorems 3.2 and 3.3 are more general than the result of Takahashi et al. Furthermore, in the proofs of Theorems 3.2 and 3.3, we used the properties (Remark 2.1) of the limit of {Cn} in the sense of Mosco [20] and Lemmas 2.3–2.5 to show that the sequence {xn} converges strongly to PSu, whereas, in order to prove [32, Theorem 4.5], Takahashi et al. used the NST(I) condition together with Lemma 3.1 and Theorems 3.2 and 3.3 of [32]. Thus, the proofs of the main theorems in this paper are simpler than the proof of [32, Theorem 4.5].
4 Applications
4.1 The Common Minimum Point Problem
Let E be a Banach space and let f : E→(−∞, ∞] be a proper, lower semicontinuous and convex function. The subdifferential of f is the multi-valued mapping \(\partial f :\ E\longrightarrow 2^{E^{*}}\) which is defined by
for all x ∈ E. We know that ∂f is a maximal monotone operator (see [28]) and that \(x_{0}{\in }\arg \min _{x{\in } E}f(x)\) if and only if 0∈∂f(x0).
The ε-subdifferential enlargement of ∂f is given by
for each ε ≥ 0. We know that \(\partial _{\varepsilon } f(x)\subseteq \partial ^{\varepsilon } f(x)\) for any x ∈ E. Moreover, in some particular cases, we have \(\partial _{\varepsilon } f(x)\subsetneq \partial ^{\varepsilon } f(x)\) (see [10, Examples 2 and 3]).
In [4], when E is a real Hilbert space, Alvarez proposed the following approximate inertial proximal algorithm:
In [21], Moudafi and Elisabeth extended the above iterative method in the form
They proved that if there exists c > 0 such that cn ≥ c for all n ≥ 1, and there is α ∈ [0,1) such that {αn}⊂ [0, α], \({\sum }_{k = 1}^{{\infty }} c_{k}\varepsilon _{k}<{\infty }\) and
then the sequence {xn} converges weakly to a minimum point of f.
Note that if αn = 0 for all n ≥ 1, then (4.1) becomes
From Theorems 3.2 and 3.3, we have the following theorem:
Theorem 4.1
Let E be a uniformly convex and smooth Banach space and let fi, i = 1,2,…, N, be proper, lower semicontinuous and convex functions of E into (−∞, ∞] such that \(S={\cap }_{i = 1}^{N}\arg \min _{x{\in } E} f_{i}(x)\neq \emptyset \). Let {εn} and {δn} be nonnegative real sequences and let {ri, n}, i = 1,2,…, N, be positive real sequences such that mini{infn{ri, n}}≥ r > 0. For a given point u ∈ E, we define the sequence {xn} by x1 = x ∈ E, C1 = E and
If \(\lim _{n\to {\infty }}\varepsilon _{n}r_{i,n}=\lim _{n\to {\infty }}\delta _{n} = 0\) for all i = 1,2,…, N, then the sequence {xn} converges strongly to PSu, as n →∞.
Remark 4.2
In Theorem 4.1, if εn = 0 for all n ≥ 1, then the sequence {xn} is defined as follows: For a given point u ∈ E, we define the sequence {xn} by x1 = x ∈ E, C1 = E and
Note that if E is a real Hilbert space, then the element yi, n can be defined as follows
for all i = 1,2,…, N and for all n ≥ 0.
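For instance (a toy illustration of our own), for fi(y) = |y| on E = R, the element yi, n given by this Hilbert-space formula is the classical soft-thresholding operator:

```python
def prox_abs(x, r):
    """y = argmin_z |z| + (1/(2r))(z - x)^2, i.e. the resolvent
    (I + r*∂f)^{-1}(x) for f(z) = |z|: soft-thresholding."""
    if x > r:
        return x - r
    if x < -r:
        return x + r
    return 0.0

print(prox_abs(3.0, 1.0))   # 2.0
print(prox_abs(0.5, 1.0))   # 0.0  (small inputs are mapped onto argmin f)
print(prox_abs(-3.0, 1.0))  # -2.0
```

Note that prox_abs fixes exactly the points of argmin f = {0}, consistent with S being the set of common minimum points.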
4.2 The Convex Feasibility Problem
Let C be a nonempty closed convex subset of E. Let iC be the indicator function of C, that is,
It is easy to see that iC is a proper, lower semicontinuous and convex function, so its subdifferential ∂iC is a maximal monotone operator. We know that
where N(u, C) is the normal cone of C at u.
We denote the metric resolvent of ∂iC by Jr with r > 0. Suppose u = Jrx for x ∈ E, that is
Thus, we have
for all y ∈ C. From Lemma 2.2, we get that u = PCx.
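Numerically, this says the metric resolvent of ∂iC is just PC, independently of r > 0. A quick one-dimensional check of the variational characterization, with C = [0, 1] chosen by us:

```python
def proj(x, lo=0.0, hi=1.0):
    """Metric projection onto C = [lo, hi] in R."""
    return min(max(x, lo), hi)

x = 2.5
u = proj(x)
# u = P_C x iff <x - u, y - u> <= 0 for all y in C, i.e. x - u lies in the
# normal cone N(u, C); this holds for every r > 0, so J_r x = P_C x here.
ys = [i / 100.0 for i in range(101)]
print(u)                                             # 1.0
print(all((x - u) * (y - u) <= 1e-12 for y in ys))   # True
```

The same check fails for any point of C other than u, reflecting the uniqueness of the projection.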
So, from Corollary 3.5, we have the following theorem:
Theorem 4.3
Let E be a uniformly convex and smooth Banach space and let Qi, i = 1,2,…, N, be nonempty closed convex subsets of E such that \(S={\cap }_{i = 1}^{N}Q_{i}\neq \emptyset \). Let {δn} be a nonnegative real sequence. For a given point u ∈ E, we define the sequence {xn} by x1 = x ∈ E, C1 = E and
If \(\lim _{n\to {\infty }}\delta _{n} = 0\), then the sequence {xn} converges strongly to PSu, as n →∞.
4.3 A System of Variational Inequalities
Let C be a nonempty closed convex subset of E and let A : C→E∗ be a monotone operator which is hemicontinuous (that is, for any x ∈ C and tn → 0+, we have \(A(x+t_{n}y)\rightharpoonup Ax\) for all y ∈ E such that x + tny ∈ C). Then, a point u ∈ C is called a solution of the variational inequality for A if
We denote by VI(C, A) the set of all solutions of the variational inequality for A.
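In a Hilbert space, u ∈ VI(C, A) if and only if u = PC(u − λAu) for any λ > 0, which suggests a simple fixed-point scheme when A is strongly monotone. A sketch with the illustrative choices A(u) = u − b and C a box (both our own assumptions, not from the paper):

```python
def proj_box(x, lo, hi):
    """Metric projection onto the box C = [lo_1, hi_1] x ... x [lo_m, hi_m]."""
    return [min(max(xi, l), h) for xi, l, h in zip(x, lo, hi)]

def solve_vi(b, lo, hi, lam=0.5, iters=200):
    """Fixed-point iteration u <- P_C(u - lam*A(u)) for A(u) = u - b.
    A is strongly monotone, so the iteration contracts for small lam."""
    u = [0.0] * len(b)
    for _ in range(iters):
        Au = [ui - bi for ui, bi in zip(u, b)]
        u = proj_box([ui - lam * ai for ui, ai in zip(u, Au)], lo, hi)
    return u

# For A(u) = u - b the VI solution is P_C(b):
print(solve_vi(b=[2.0, -1.0], lo=[0.0, 0.0], hi=[1.0, 1.0]))  # [1.0, 0.0]
```

For this particular A, the variational inequality is the optimality condition for minimizing ∥u − b∥²/2 over C, so the solution is indeed the projection of b.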
Define a mapping TA by
By Rockafellar [28], we know that TA is maximal monotone and \(T_{A}^{-1}0=VI(C,A)\).
For any y ∈ E and r > 0, we know that the variational inequality VI(C, rA + JE(⋅− y)) has a unique solution. Suppose that x is this solution, that is,
From the definition of N(x, C), we have
which implies that
Thus, we obtain that x = Jry, where Jr is the metric resolvent of TA.
Now, let E and F be two uniformly convex and smooth Banach spaces and let Ki, i = 1,2,…, N be closed convex subsets of E. Let Ai : Ki→E∗ be monotone and hemicontinuous operators. Suppose that \(S={\cap }_{i = 1}^{N}VI(K_{i},A_{i})\neq \emptyset \).
We consider the following problem:
To solve Problem (4.2), we define the operators \(T_{A_{i}}\) as follows
for all i = 1,2,…, N.
So, from Corollary 3.5, we have the following theorem:
Theorem 4.4
Let {δn} be a nonnegative real sequence and let {ri, n}, i = 1,2,…, N, be positive real sequences such that mini{infn{ri, n}}≥ r > 0. For a given point u ∈ E, we define the sequence {xn} by x1 = x ∈ E, C1 = E and
If \(\lim _{n\to {\infty }}\delta _{n} = 0\), then the sequence {xn} converges strongly to PSu, as n →∞.
5 Numerical Test
We take E = L2([0,1]) with the inner product
and the norm
for all f, g ∈ L2([0, 1]).
Now, let
where ai(t) = ti− 1, \(b_{i}=\frac {1}{i + 2}\) for all i = 1,2,…,10 and t ∈ [0,1].
It is easy to check that \(x(t)=t^{2}{\in } S={\cap }_{i = 1}^{10}Q_{i}\). We consider the problem of finding an element x∗∈S.
Now, by using Theorem 4.3, we consider the convergence of the sequence {xn} which is generated by the following two cases:
- Case A.:
$$\begin{array}{@{}rcl@{}} &&\text{i) } {y}_{i,n}={P}_{Q_{i}}{x}_{n},\ i = 1,2,\dots,N,\\ &&\text{ii) } \text{ Choose } {i}_{n} \text{ such that } \|{y}_{{i}_{n},n}-{x}_{n}\|=\max\limits_{i = 1,\dots,N}\{\|{y}_{i,n}-{x}_{n}\|\},\ \text{ let } {y}_{n}={y}_{{i}_{n},n},\\ &&C_{n + 1}=\{z{\in} {C}_{n} : \langle {y}_{n}-z,{J}_{E}({x}_{n}-{y}_{n})\rangle \geq 0\},\\ &&\text{iii) } {x}_{n + 1}={P}_{C_{n + 1}}{x}_{1},\ n = 1,2,\ldots \end{array} $$
- Case B.:
$$\begin{array}{@{}rcl@{}} &&\text{i) } {y}_{i,n}={P}_{Q_{i}}{x}_{n},\ i = 1,2,\dots,N,\\ &&\text{ii*) } C^{i}_{n + 1}=\{z{\in} {C}_{n} : \langle {y}_{i,n}-z,{J}_{E}({x}_{n}-{y}_{i,n})\rangle \geq 0\},\ i = 1,2,\dots,N,\\ &&C_{n + 1}={\cap}_{i = 1}^{N}C_{n + 1}^{i},\\ &&\text{iii) } {x}_{n + 1}={P}_{C_{n + 1}}{x}_{1},\ n = 1,2,\ldots \end{array} $$
We obtain Table 1 of numerical results.
The behaviors of the approximate solution xn(t) under the two stopping criteria ∥xn+ 1 − xn∥ < 10− 4 and ∥xn+ 1 − xn∥ < 10− 5 are presented in Figs. 1 and 2.
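For readers who want to reproduce a simplified version of this experiment, the sketch below discretizes L2([0,1]) on a grid and assumes Qi is the hyperplane {x ∈ L2 : 〈x, ai〉 = bi} (this form is our reading of the omitted display; note that x(t) = t2 satisfies every such constraint). It runs only steps i)–ii) of Case A, replacing step iii) by xn+ 1 = yn (a plain remotest-set projection method), so it illustrates the setting rather than the full algorithm.

```python
M = 100                                  # grid points on [0, 1]
h = 1.0 / M
ts = [(j + 0.5) * h for j in range(M)]   # midpoint rule for the L^2 inner product

def ip(f, g):
    """Discrete L^2([0,1]) inner product."""
    return h * sum(fi * gi for fi, gi in zip(f, g))

A = [[t ** (i - 1) for t in ts] for i in range(1, 11)]   # a_i(t) = t^{i-1}
B = [1.0 / (i + 2) for i in range(1, 11)]                # b_i = 1/(i+2)

x = [0.0] * M                            # start from x_1 = 0
for n in range(2000):
    # steps i)-ii): project onto the hyperplane Q_i with the largest residual
    residuals = [abs(ip(a, x) - b) for a, b in zip(A, B)]
    i = max(range(10), key=lambda k: residuals[k])
    lam = (ip(A[i], x) - B[i]) / ip(A[i], A[i])
    x = [xj - lam * aj for xj, aj in zip(x, A[i])]       # x_{n+1} = P_{Q_i} x_n

print(max(abs(ip(a, x) - b) for a, b in zip(A, B)))      # residual becomes small
```

Because the monomial normals are nearly parallel, the iterates approach S slowly, but the constraint residuals shrink quickly, which already illustrates why S ≠ ∅ makes the scheme viable.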
References
Agarwal, R.P., O’Regan, D., Sahu, D.R.: Fixed Point Theory for Lipschitzian-Type Mappings with Applications. Topological Fixed Point Theory and Its Applications, vol. 6. Springer, New York (2009)
Alber, Y.I.: Metric and Generalized Projections in Banach Spaces: Properties and Applications. Theory and Applications of Nonlinear Operators of Accretive and Monotone Type, 15–50. Lecture Notes in Pure and Appl Math., vol. 178. Dekker, New York (1996)
Alber, Y.I., Reich, S.: An iterative method for solving a class of nonlinear operator equations in Banach spaces. Panamer. Math. J. 4(2), 39–54 (1994)
Alvarez, F.: On the minimizing property of a second order dissipative system in Hilbert spaces. SIAM J. Control Optim. 38(4), 1102–1119 (2000)
Bauschke, H.H., Borwein, J.M., Combettes, P.L.: Bregman monotone optimization algorithms. SIAM J. Control Optim. 42(2), 596–636 (2003)
Bauschke, H.H., Combettes, P.L.: Construction of best Bregman approximations in reflexive Banach spaces. Proc. Am. Math. Soc. 131(12), 3757–3766 (2003)
Bauschke, H.H., Matoušková, E., Reich, S.: Projection and proximal point methods: convergence results and counterexamples. Nonlinear Anal. 56(5), 715–738 (2004)
Beer, G.: Topologies on Closed and Closed Convex Sets. Mathematics and Its Applications, vol. 268. Kluwer, Dordrecht (1993)
Browder, F.E.: Nonlinear maximal monotone operators in Banach space. Math. Ann. 175, 89–113 (1968)
Burachik, R.S., Iusem, A.N., Svaiter, B.F.: Enlargement of monotone operators with applications to variational inequalities. Set-Valued Anal. 5(2), 159–180 (1997)
Burachik, R.S., Svaiter, B.F.: ε-enlargements of maximal monotone operators in Banach spaces. Set-Valued Anal. 7(2), 117–132 (1999)
Cioranescu, I.: Geometry of Banach Spaces, Duality Mappings and Nonlinear Problems. Mathematics and Its Applications, vol. 62. Kluwer, Dordrecht (1990)
Dadashi, V.: Shrinking projection algorithms for the split common null point problem. Bull. Aust. Math. Soc. 96(2), 299–306 (2017)
Goebel, K., Kirk, W.A.: Topics in Metric Fixed Point Theory. Cambridge Studies in Advanced Mathematics, vol. 28. Cambridge Univ. Press, Cambridge (1990)
Güler, O.: On the convergence of the proximal point algorithm for convex minimization. SIAM J. Control Optim. 29(2), 403–419 (1991)
Kamimura, S., Takahashi, W.: Strong convergence of a proximal-type algorithm in a Banach space. SIAM J. Optim. 13(3), 938–945 (2002)
Kimura, Y., Takahashi, W.: On a hybrid method for a family of relatively nonexpansive mappings in a Banach space. J. Math. Anal. Appl. 357(2), 356–363 (2009)
Kimura, Y.: A shrinking projection method for nonexpansive mappings with nonsummable errors in a Hadamard space. Ann. Oper. Res. 243(1-2), 89–94 (2016)
Martinet, B.: Régularisation d’inéquations variationnelles par approximations successives. Rev. Française Informat. Recherche Opérationnelle 4(Sér. R-3), 154–158 (1970)
Mosco, U.: Convergence of convex sets and of solutions of variational inequalities. Adv. Math. 3, 510–585 (1969)
Moudafi, A., Elisabeth, E.: An approximate inertial proximal method using the enlargement of a maximal monotone operator. Int. J. Pure Appl. Math. 5(3), 283–299 (2003)
Qin, X., Cho, S.Y., Kang, S.M.: Strong convergence of shrinking projection methods for quasi-nonexpansive mappings and equilibrium problems. J. Comput. Appl. Math. 234(3), 750–760 (2010)
Reich, S.: Book review: Geometry of Banach spaces, duality mappings and nonlinear problems. Bull. Am. Math. Soc. 26(2), 367–370 (1992)
Solodov, M.V., Svaiter, B.F.: A hybrid approximate extragradient-proximal point algorithm using the enlargement of a maximal monotone operator. Set-Valued Anal. 7(4), 323–345 (1999)
Solodov, M.V., Svaiter, B.F.: A hybrid projection-proximal point algorithm. J. Convex Anal. 6(1), 59–70 (1999)
Solodov, M.V., Svaiter, B.F.: Forcing strong convergence of proximal point iterations in Hilbert space. Math. Program. 87(1), 189–202 (2000). Ser. A
Rockafellar, R.T.: On the maximal monotonicity of subdifferential mappings. Pacific. J. Math. 33, 209–216 (1970)
Rockafellar, R.T.: On the maximality of sums of nonlinear monotone operators. Trans. Am. Math. Soc. 149, 75–88 (1970)
Rockafellar, R.T.: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14(5), 887–898 (1976)
Takahashi, S., Takahashi, W.: The split common null point problem and the shrinking projection method in Banach spaces. Optimization 65(2), 1–7 (2015)
Takahashi, W.: Convex Analysis and Approximation of Fixed Points. Yokohama Publishers, Yokohama (2000)
Takahashi, W., Takeuchi, Y., Kubota, R.: Strong convergence theorems by hybrid methods for families of nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 341(1), 276–286 (2008)
Takahashi, W.: The split feasibility problem and the shrinking projection method in Banach spaces. J. Nonlinear Convex Anal. 16(7), 1449–1459 (2015)
Takahashi, W., Wen, C.-F., Yao, J.-C.: Strong convergence theorem by shrinking projection method for new nonlinear mappings in Banach spaces and applications. Optimization 66(4), 609–621 (2017)
Tsukada, M.: Convergence of best approximations in a smooth Banach space. J. Approx. Theory 40(4), 301–309 (1984)
Xu, H.K.: Inequalities in Banach spaces with applications. Nonlinear Anal. 16(12), 1127–1138 (1991)
Saewan, S., Kumam, P.: The shrinking projection method for solving generalized equilibrium problems and common fixed points for asymptotically quasi-nonexpansive mappings. Fixed Point Theory Appl. 2011(9), 25 (2011)
Acknowledgements
The authors would like to thank the referees and the editor for the valuable comments and suggestions, which helped to improve this paper.
Tuyen, T.M., Trang, N.M. Two Strong Convergence Theorems for the Common Null Point Problem in Banach Spaces. Acta Math Vietnam 44, 935–953 (2019). https://doi.org/10.1007/s40306-019-00326-5