Abstract
In this paper, we introduce a self-adaptive inertial subgradient extragradient method for solving pseudomonotone equilibrium problems and common fixed point problems in real Hilbert spaces. The algorithm combines an inertial extrapolation process, which speeds up its convergence, a monotone nonincreasing stepsize rule, and a viscosity approximation method, which guarantees its strong convergence. Moreover, a strong convergence theorem is proved for the sequence generated by the algorithm under some mild conditions and without prior knowledge of the Lipschitz-like constants of the equilibrium bifunction. We further provide some numerical examples to illustrate the performance and accuracy of our method.
1 Introduction
Muu and Oettli [40] introduced the equilibrium problem (shortly, EP) as a generalization of many problems in nonlinear analysis, including variational inequalities, convex minimization, saddle point problems, fixed point problems, and Nash equilibrium problems; see, e.g., [13, 40]. The EP has also had a great impact on the development of many mathematical models arising from several branches of the pure and applied sciences, such as physics, economics, finance, image reconstruction, ecology, transportation, network elasticity, and optimization. Let C be a nonempty, closed, and convex subset of a real Hilbert space H, and let \(f:C \times C \to \mathbb{R}\) be a bifunction such that \(f(x,x) = 0\) for all \(x \in C\). The EP in the sense of Muu and Oettli [40] (see also Blum and Oettli [13]) is the problem of finding a point \(x^{*} \in C\) such that
We denote the set of solutions of the EP by \(\operatorname{EP}(f)\). Due to its importance and applications, many authors have extensively studied several iterative methods for approximating the solutions of the EP, see for example [11, 12, 18, 22, 23, 29–31, 41, 47].
Recently, researchers have considered solving EP (1.1) with a pseudomonotone bifunction f, since many application problems are modeled with pseudomonotone bifunctions. Moreover, it is important to study the approximation of common solutions of the EP and the fixed point problem (i.e., find \(x \in C\) such that \(x \in \operatorname{EP}(f)\mathrel{\cap} F(T)\), where \(F(T) = \{x\in H: x =Tx\}\)) because some mathematical models have constraints that are expressed as fixed point and equilibrium problems. Such models arise in many practical problems such as signal processing, network resource allocation, and image recovery; see for instance [26, 27, 36, 37].
Tada and Takahashi [46] first proposed the following hybrid method for approximating a common solution of EP with monotone bifunction and fixed points of a nonexpansive mapping T in Hilbert spaces:
It should be noted that the implementation of Algorithm (1.2) requires solving a strongly monotone regularized equilibrium problem for the point \(z_{n}\):
Such a method no longer applies if the bifunction is relaxed to a pseudomonotone one, which makes this iterative method unsuitable for solving the pseudomonotone EP (1.1). In 2008, Quoc et al. [43] introduced a new process called the extragradient method (EM) by extending the work of Korpelevich [33] and Antipin [7] to the case of pseudomonotone EPs. The algorithm of Quoc et al. [43] is given as Algorithm 1.1.
Later, in 2013, Anh [1] introduced the following iterative scheme for approximating a common solution of a pseudomonotone EP and a fixed point of a nonexpansive mapping T:
It is evident that in Algorithm 1.1 and (1.4) one needs to solve two strongly convex optimization problems over the feasible set C in each iteration. This task can be very difficult if C is not a simple set, i.e., if its structure is not explicit. Hence, there is a need for an improvement of the extragradient method. Following Censor et al. [14, 15], Hieu [22] introduced a Halpern-type subgradient extragradient method (shortly, HSEM), which replaces the feasible set in the second minimization problem with a half-space, as in Algorithm 1.2.
Note that the half-space \(T_{n}\) is explicit, and thus the HSEM has a competitive advantage over the extragradient method in numerical computations. Also, it should be noted that the convergence of the HSEM depends on the Lipschitz constants \(c_{1}\) and \(c_{2}\), which are not easy to determine. In an attempt to provide an alternative method which does not require prior knowledge of the Lipschitz constants \(c_{1}\) and \(c_{2}\), Yang and Liu [48] proposed the following modified HSEM with a nonincreasing stepsize and proved a strong convergence theorem for finding a common solution of a pseudomonotone EP and a fixed point of a quasi-nonexpansive mapping S in real Hilbert spaces (Algorithm 1.3).
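The computational advantage of the half-space \(T_{n}\) comes from the fact that the metric projection onto a half-space has a closed form, so the second subproblem never requires an inner solver. A minimal sketch (the function name and test data are ours, not the paper's):

```python
import numpy as np

def project_halfspace(x, a, b):
    """Closed-form metric projection of x onto the half-space {v : <a, v> <= b}."""
    slack = a @ x - b
    if slack <= 0:
        return x.copy()               # x already lies in the half-space
    return x - (slack / (a @ a)) * a  # shift back along the normal vector a

x = np.array([3.0, 4.0])
a = np.array([1.0, 0.0])              # half-space {v : v_1 <= 1}
p = project_halfspace(x, a, 1.0)
# p = [1.0, 4.0]
```

Only an inner product and a scalar division are needed per projection, which is why the HSEM scales well when C itself has no explicit projection.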
For more examples of extragradient and subgradient methods for finding a common element in the set of solutions of equilibrium and fixed point problems, see for instance [1–6, 20, 28, 32] and the references therein.
Motivated by the work of Hieu [22] and Yang and Liu [48], in this paper we introduce a new self-adaptive inertial subgradient extragradient method which does not depend on the Lipschitz constants \(c_{1}\) and \(c_{2}\) for finding solutions of a pseudomonotone EP and a common fixed point of a countable family of demicontractive mappings in real Hilbert spaces. The inertial extrapolation step in our algorithm serves as a means of improving the speed of convergence (see for instance [8, 9, 17, 29, 31, 34, 39] on inertial-type algorithms). Our method also incorporates the viscosity approximation method (see [38]), which is a generalization of the Halpern-type algorithm. We prove a strong convergence theorem for the sequence generated by our method under some mild conditions and without prior knowledge of the Lipschitz constants. It is noted in [10] that a strongly convergent method is more desirable than a weakly convergent one because it translates the physically tangible property that the energy \(\Vert x_{n} -x^{\dagger }\Vert ^{2}\) of the error between the iterate \(x_{n}\) and a solution \(x^{\dagger }\) eventually becomes small. The importance of strong convergence was also underlined by Güler [21].
The paper is organized as follows. In Sect. 2, we recall some basic definitions and preliminary results which are necessary for our proof. Section 3 deals with proposing and analyzing the convergence of our iterative method for solving the EP and finding a fixed point of demicontractive mappings in real Hilbert spaces. Finally, in Sect. 4, we present some numerical examples to illustrate the performance of the proposed algorithms in comparison with related algorithms in the literature. Throughout this paper, we denote the strong (resp. weak) convergence of a sequence \(\{x_{n}\} \subseteq H\) to a point \(p \in H\) by \(x_{n} \rightarrow p\) (resp. \(x_{n}\rightharpoonup p\)). We also denote the index set \(\mathcal{N} = \mathbb{N}\backslash \{0\} = \{1,2,\dots \}\).
2 Preliminaries
In this section, we present some basic notions and results that are needed in the sequel.
Definition 2.1
Recall that the bifunction \(f:C \times C \rightarrow \mathbb{R}\) is said to be
- (i)
η-strongly monotone on C if there exists a constant \(\eta >0\) such that
$$ f(u,v) + f(v,u) \leq -\eta \Vert u-v \Vert ^{2},\quad \forall u,v \in C; $$ - (ii)
monotone on C if
$$ f(u,v) + f(v,u) \leq 0,\quad \forall u,v\in C; $$ - (iii)
pseudomonotone on C if
$$ f(u,v) \geq 0\quad \Rightarrow\quad f(v,u) \leq 0,\quad \forall u,v \in C; $$ - (iv)
satisfying a Lipschitz-like condition if there exist constants \(c_{1} >0\) and \(c_{2} >0\) such that
$$ f(u,v)+f(v,w) \geq f(u,w) - c_{1} \Vert u-v \Vert ^{2} - c_{2} \Vert v-w \Vert ^{2},\quad \forall u,v,w \in C. $$(2.1)
We note that (i) ⇒ (ii) ⇒ (iii), but the converse implications do not hold (see, e.g., [44]).
Throughout this paper, we assume that the following assumptions hold on f:
- (A1)
f is pseudomonotone on C and \(f(u,u) = 0\ \forall u \in C\);
- (A2)
\(f(\cdot ,v)\) is continuous on C for every \(v \in C\);
- (A3)
\(f(u,\cdot )\) is convex, lower semicontinuous, and subdifferentiable on C for every \(u \in C\);
- (A4)
f satisfies a Lipschitz-like condition.
For each \(u \in C\), the subgradient of the convex function \(f(u,\cdot )\) at u is denoted by \(\partial _{2} f(u,u)\), i.e.,
The metric projection \(P_{C}:H\to C\) at a point \(x \in H\) is defined by the unique vector \(P_{C}x \in C\) such that
The metric projection satisfies the following identities (see, e.g., [45]):
- (i)
\(\langle x-y, P_{C} x - P_{C} y \rangle \geq \Vert P_{C} x-P_{C} y\Vert ^{2}\) for every \(x,y \in H\);
- (ii)
for \(x \in H\) and \(z\in C\), \(z = P_{C} x \Leftrightarrow \)
$$ \langle x-z, z-y \rangle \geq 0, \quad \forall y \in C; $$(2.2) - (iii)
for \(x \in H\) and \(y \in C\),
$$ \bigl\Vert y-P_{C}(x) \bigr\Vert ^{2} + \bigl\Vert x- P_{C}(x) \bigr\Vert ^{2} \leq \Vert x-y \Vert ^{2}. $$(2.3)
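These identities are easy to verify numerically whenever C has an explicit projection. The sketch below uses the closed unit ball, whose projection is \(P_{C}x = x/\max \{1,\Vert x\Vert \}\) (helper names and test data are ours):

```python
import numpy as np

def proj_ball(x, r=1.0):
    """Metric projection onto the closed ball C = {v : ||v|| <= r}."""
    n = np.linalg.norm(x)
    return x if n <= r else (r / n) * x

rng = np.random.default_rng(0)
x = 3.0 * rng.standard_normal(5)        # a point typically outside C
px = proj_ball(x)
d2 = lambda u, v: (u - v) @ (u - v)
for _ in range(100):
    y = proj_ball(3.0 * rng.standard_normal(5))       # an arbitrary point of C
    assert (x - px) @ (px - y) >= -1e-10              # characterization (2.2)
    assert d2(y, px) + d2(x, px) <= d2(x, y) + 1e-10  # inequality (2.3)
```

Inequality (2.3) follows from (2.2) by expanding \(\Vert x-y\Vert ^{2}\) around \(P_{C}x\), and the check above confirms both on random data.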
The mapping \(\phi : H \to H\) is called a contraction if \(\Vert \phi (x) - \phi (y)\Vert \leq \alpha \Vert x-y\Vert \) for some \(\alpha \in [0,1)\) and all \(x,y \in H\). Note that \(P_{C}\) satisfies this inequality with \(\alpha = 1\), i.e., \(P_{C}\) is nonexpansive, but it is not in general a contraction.
A subset D of H is called proximal if, for each \(x \in H\), there exists \(y \in D\) such that
In what follows, \(P(H)\) denotes the family of all nonempty proximal bounded subsets of H and \(\operatorname{CB}(H)\) denotes the family of nonempty, closed bounded subsets of H. The Hausdorff metric on \(\operatorname{CB}(H)\) is defined as
for all \(A,B \in \operatorname{CB}(H)\). A point \(\bar{x} \in H\) is called a fixed point of a multivalued mapping \(S:H \rightarrow 2^{H}\) if \(\bar{x} \in S\bar{x}\). We say that S satisfies the endpoint condition if \(S\bar{x} = \{\bar{x}\}\) for all \(\bar{x} \in F(S)\). We recall some basic definitions of multivalued mappings.
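For finite subsets of \(\mathbb{R}^{N}\), the Hausdorff metric \(\mathcal{H}(A,B) = \max \{\sup_{a \in A} d(a,B), \sup_{b \in B} d(b,A)\}\) can be computed directly from the pairwise distance matrix. A small sketch (function name and data are ours):

```python
import numpy as np

def hausdorff(A, B):
    """Hausdorff metric between two finite point sets (rows of A and B)."""
    # D[i, j] = distance between the i-th point of A and the j-th point of B
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return max(D.min(axis=1).max(),   # sup over A of d(a, B)
               D.min(axis=0).max())   # sup over B of d(b, A)

A = np.array([[0.0], [1.0]])
B = np.array([[0.0], [3.0]])
print(hausdorff(A, B))   # 2.0: the point 3 of B is at distance 2 from A
```

The two inner minima are the point-to-set distances; taking the outer maximum makes the metric symmetric, as the test below confirms.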
Definition 2.2
A multivalued mapping \(S:H \rightarrow \operatorname{CB}(H)\) is said to be
- (1)
nonexpansive if
$$ \mathcal{H}(Su,Sv) \leq \Vert u-v \Vert ,\quad \forall u,v \in H; $$ - (2)
quasi-nonexpansive if \(F(S) \neq \emptyset \) and
$$ \mathcal{H}(Su,Sp) \leq \Vert u-p \Vert ,\quad \forall u \in H, p \in F(S); $$ - (3)
κ-demicontractive for \(0 \leq \kappa <1\) if \(F(S) \neq \emptyset \), and
$$ \mathcal{H}(Su,Sp)^{2} \leq \Vert u-p \Vert ^{2} + \kappa d(u,Su)^{2},\quad \forall u \in H, p \in F(S). $$
Note that the class of κ-demicontractive mappings includes the classes of nonexpansive and quasi-nonexpansive mappings. Let \(S: H \rightarrow P(H)\) be a multivalued mapping. We define the best approximation operator of S as follows:
It is easy to show that \(F(S) = F(P_{S})\) and that \(P_{S}\) satisfies the endpoint condition. Moreover, \(I-S\) is said to be demiclosed at zero if, for any sequence \(\{x_{n}\} \subset H\) such that \(x_{n} \rightharpoonup \bar{x}\) and \(x_{n} - u_{n} \rightarrow 0\) for some \(u_{n} \in Sx_{n}\), we have \(\bar{x} \in F(S)\).
The following identities hold in any Hilbert space:
and
Lemma 2.3
([16])
Let H be a real Hilbert space, \(x_{i} \in H\) (\(1 \leq i \leq n\)), and \(\{\alpha _{i}\} \subset (0,1)\) with \(\sum_{i=1}^{n}\alpha _{i} =1\). Then
Lemma 2.4
([19])
Let C be a convex subset of a real Hilbert space H and \(\varphi :C \rightarrow \mathbb{R}\) be a convex and subdifferentiable function on C. Then \(x^{*}\) is a solution to the convex problem \(\textit{minimize}\{\varphi (x):x \in C\}\) if and only if \(0 \in \partial \varphi (x^{*}) + N_{C}(x^{*})\), where \(\partial \varphi (x^{*})\) denotes the subdifferential of φ and \(N_{C}(x^{*})\) is the normal cone of C at \(x^{*}\).
Lemma 2.5
([35])
Let \(\{\alpha _{n}\}\) and \(\{\gamma _{n}\}\) be sequences of nonnegative real numbers such that
where \(\{\delta _{n}\}\) is a sequence in \((0,1)\) and \(\{\beta _{n}\}\) is a real sequence. Assume that \(\sum_{n=0}^{\infty }\gamma _{n} < \infty \). Then the following results hold:
- (i)
If \(\beta _{n} \leq \delta _{n} M\) for some \(M \geq 0\), then \(\{\alpha _{n}\}\) is a bounded sequence.
- (ii)
If \(\sum_{n=0}^{\infty }\delta _{n} = \infty \) and \(\limsup_{n\rightarrow \infty } \frac{\beta _{n}}{\delta _{n}} \leq 0\), then \(\lim_{n\rightarrow \infty }\alpha _{n} =0\).
Lemma 2.6
([36])
Let \(\{a_{n}\}\) be a sequence of real numbers that admits a nondecreasing subsequence. Then there exists a nondecreasing sequence \(\{m_{k}\}\subset \mathbb{N}\) such that \(m_{k} \rightarrow \infty \) and, for all sufficiently large \(k\in \mathbb{N}\), \(a_{m_{k}} \leq a_{{m_{k}}+1}\) and \(a_{k} \leq a_{{m_{k}}+1}\); in fact, \(m_{k} = \max \{j \leq k: a_{j} \leq a_{j+1}\}\).
3 Main results
In this section, we present our iterative algorithm and its analysis.
In what follows, let C be a nonempty, closed, and convex subset of a real Hilbert space H, and let \(f:C \times C \to \mathbb{R}\) be a bifunction satisfying (A1)–(A4). For \(i \in \mathcal{N}\), let \(S_{i}: H \to \operatorname{CB}(H)\) be \(\kappa _{i}\)-demicontractive mappings such that \(I-S_{i}\) is demiclosed at zero and \(S_{i}(x^{*}) = \{x^{*}\}\) for all \(x^{*} \in F(S_{i})\). Let ϕ be a fixed contraction on H with coefficient \(\tau \in (0,1)\). Suppose that the solution set
The following conditions are assumed to be satisfied by the control parameters of our algorithm.
- (C1)
\(\theta \in [0,1)\), \(\lambda _{1} >0\);
- (C2)
\(\{\delta _{n}\}\subset (0,1)\) such that \(\sum_{n=1}^{\infty }\delta _{n} = \infty \) and \(\lim_{n \to \infty }\delta _{n} = 0\);
- (C3)
\(\{\epsilon _{n}\}\subset [0,\infty )\) such that \(\epsilon _{n} = o(\delta _{n})\), i.e., \(\lim_{n\to \infty }\frac{\epsilon _{n}}{\delta _{n}} = 0\);
- (C4)
\(\{\beta _{n,i}\}_{i=0}^{\infty }\subset (0,1)\) such that \(\sum_{i=0}^{n} \beta _{n,i}=1\);
- (C5)
\(\liminf_{n\to \infty }(\beta _{n,0}-\kappa )\beta _{n,i}>0\) for all \(i \in \mathcal{N}\), where \(\kappa = \max_{i} \{\kappa _{i}\}\).
Next, we present our algorithm (Algorithm 3.1).
Remark 3.2
It is obvious that \(T_{n}\) is a half-space and \(C \subset T_{n}\) (see [22] for more details). Note that (3.1) can be implemented easily since the value of \(\Vert x_{n} -x_{n-1}\Vert \) is known prior to choosing \(\alpha _{n}\). Also, we deduce from (3.1) that \(\lim_{n\to \infty }\frac{\alpha _{n}}{\delta _{n}}\Vert x_{n} -x_{n-1}\Vert = 0\). Moreover, if \(w_{n} = y_{n}\) and \(w_{n} \in S_{i}(w_{n})\) for all \(i \in \mathcal{N}\), then \(w_{n}\) is a common solution of the equilibrium and fixed point problems.
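Since we do not restate (3.1) here, the sketch below uses one standard inertial-parameter rule that has the properties noted in Remark 3.2; the exact formula is our assumption, not necessarily the paper's (3.1). Taking \(\alpha _{n} \leq \theta \) and \(\alpha _{n}\Vert x_{n}-x_{n-1}\Vert \leq \epsilon _{n}\) forces \(\frac{\alpha _{n}}{\delta _{n}}\Vert x_{n} -x_{n-1}\Vert \to 0\) under condition (C3):

```python
import numpy as np

def inertial_alpha(theta, eps_n, x_n, x_prev):
    """Assumed inertial rule: alpha_n = min{theta, eps_n / ||x_n - x_{n-1}||},
    with alpha_n = theta when the two iterates coincide.  Then
    alpha_n * ||x_n - x_{n-1}|| <= eps_n = o(delta_n) by condition (C3)."""
    diff = np.linalg.norm(np.asarray(x_n) - np.asarray(x_prev))
    return theta if diff == 0 else min(theta, eps_n / diff)

print(inertial_alpha(0.5, 0.01, [1.0, 0.0], [0.0, 0.0]))   # 0.01
```

Note that \(\Vert x_{n}-x_{n-1}\Vert \) is available before \(\alpha _{n}\) is chosen, exactly as Remark 3.2 requires.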
We begin by proving the following necessary results.
Lemma 3.3
The sequence \(\{\lambda _{n}\}\) generated by (3.5) is monotonically nonincreasing and
Proof
Clearly, \(\lambda _{n}\) is nonincreasing. On the other hand, it follows from Assumption (A4) that
So, the sequence \(\{\lambda _{n}\}\) is nonincreasing and bounded below by \(\min \{ \frac{\mu }{\max \{c_{1},c_{2}\}},\lambda _{1} \} \). This implies that the limit \(\lim_{n\rightarrow \infty }\lambda _{n} = \lambda \) exists and \(\lambda \geq \min \{ \frac{\mu }{\max \{c_{1},c_{2}\}},\lambda _{1} \} \). □
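Since (3.5) is not restated here, the sketch below implements a Yang–Liu-type update with the behavior Lemma 3.3 describes; the exact formula is our assumption. The stepsize never increases, stays positive, and under the Lipschitz-like condition (2.1) is bounded away from zero:

```python
import numpy as np

def next_stepsize(lam_n, mu, w, y, z, f):
    """Assumed self-adaptive update: keep lam_n when the denominator is
    nonpositive; otherwise shrink, at most down to a positive constant
    determined by mu and the Lipschitz-like constants c1, c2."""
    gap = f(w, z) - f(w, y) - f(y, z)
    if gap <= 0:
        return lam_n
    num = mu * ((w - y) @ (w - y) + (y - z) @ (y - z))
    return min(lam_n, num / (2.0 * gap))

# f(u, v) = <u, v - u> satisfies (2.1) with c1 = c2 = 1/2
f = lambda u, v: u @ (v - u)
rng = np.random.default_rng(0)
lam = 1.0
for _ in range(50):
    w, y, z = rng.standard_normal((3, 4))
    new = next_stepsize(lam, 0.5, w, y, z, f)
    assert 0 < new <= lam      # monotone nonincreasing and positive
    lam = new
```

For this test bifunction the gap equals \(\langle w-y, z-y\rangle \leq \frac{1}{2}(\Vert w-y\Vert ^{2}+\Vert y-z\Vert ^{2})\), so the stepsize never drops below \(\mu = 0.5\) here.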
Lemma 3.4
For each \(v \in Sol\) and \(n \geq 1\), we have
Proof
By the definition of \(z_{n}\) and Lemma 2.4, we get
Thus, there exist \(q_{n} \in \partial _{2} f(y_{n},z_{n})\) and \(\xi \in N_{T_{n}}(z_{n})\) such that
Since \(\xi \in N_{T_{n}}(z_{n})\), then \(\langle \xi , y -z_{n} \rangle \leq 0\) for all \(y \in T_{n}\). This together with (3.7) implies that
Since \(v \in Sol\), then \(v \in \operatorname{EP}(f)\) and \(v \in \bigcap_{i=1}^{\infty }F(S_{i})\). Since \(\operatorname{EP}(f)\subset C \subseteq T_{n}\), we derive from (3.8) that
By the fact that \(q_{n} \in \partial _{2} f(y_{n},z_{n})\), we get
This together with (3.9) implies that
Since \(v \in \operatorname{EP}(f)\), then \(f(v, y_{n}) \geq 0\). From the pseudomonotonicity of f, we get \(f(y_{n},v) \leq 0\). Hence from (3.10) we get
Since \(z_{n} \in T_{n}\), then
Thus
From the fact that \(\eta _{n} \in \partial _{2} f(w_{n},y_{n})\) and the definition of subdifferential, we get
hence
Combining (3.11) and (3.13), we get
Hence
Using the definition of \(\lambda _{n}\) in the above inequality, we get
Observe that \(\lim_{n \to \infty } \frac{\lambda _{n}\mu }{\lambda _{n+1}} = \mu \). Hence, we obtain from (3.14)
□
Lemma 3.5
The sequence \(\{x_{n}\}\) generated by Algorithm 3.1 is bounded.
Proof
Put \(u_{n} = \beta _{n,0}z_{n} + \sum_{i=1}^{n} \beta _{n,i}\nu _{n,i}\) and let \(v \in Sol\). Then from Lemma 2.3 we have
and by condition (C5), we get
Therefore
Let \(M = \max \{ \frac{\Vert \phi (v)-v\Vert }{1-\tau }, \sup_{n\geq 1} (\frac{1-\delta _{n}}{1-\tau } ) \frac{\alpha _{n}}{\delta _{n}}\Vert x_{n} -x_{n-1}\Vert \} \). Then we have
Hence, by induction and Lemma 2.5(i), we have that \(\{\Vert x_{n} -v\Vert \}\) is bounded. This implies that \(\{x_{n}\}\) is bounded, and consequently \(\{y_{n}\}\), \(\{z_{n}\}\), and \(\{S_{i}z_{n}\}\) are bounded. □
The following lemma will be used in proving the strong convergence of our Algorithm 3.1.
Lemma 3.6
Let \(\{x_{n}\}\) be the sequence generated by Algorithm 3.1. Then
where \(a_{n} = \Vert x_{n} -v\Vert ^{2}\), \(\theta _{n} = \frac{2\delta _{n}(1-\tau )}{1-\delta _{n}\tau }\), \(b_{n} = \frac{\langle \phi (v)-v, x_{n+1} - v \rangle }{1-\tau }\), \(c_{n} = \frac{\delta _{n}^{2}}{1-\delta _{n}\tau }\Vert x_{n} -v\Vert ^{2} + \frac{\alpha _{n}M(1-\delta _{n})^{2}}{1-\delta _{n}\tau }\Vert x_{n} - x_{n-1}\Vert \), \(v \in Sol\), and \(M>0\) is some constant.
Proof
From (3.1), we get
where \(M = \sup_{n\geq 1}(\alpha _{n}\Vert x_{n} -x_{n-1}\Vert +2\Vert x_{n} -v\Vert )\). Moreover,
Thus
Thus, we obtain the desired result. □
Next, we prove our strong convergence theorem.
Theorem 3.7
The sequence \(\{x_{n}\}\) generated by Algorithm 3.1 converges strongly to x̄, where \(\bar{x} = P_{Sol}(\bar{x})\) is the unique solution of the variational inequality
Proof
Let \(v \in Sol\) and put \(\varGamma _{n} = \Vert x_{n} -v\Vert ^{2}\). We divide the proof into two cases.
Case 1: Suppose that there exists \(N \in \mathbb{N}\) such that \(\{\varGamma _{n}\}\) is monotonically decreasing for all \(n \geq N\). This implies that \(\lim_{n\to \infty }\varGamma _{n}\) exists, and since \(\{x_{n}\}\) is bounded, we have
From (3.15), we have
This implies that
Since \(\delta _{n} \to 0\) and \(\alpha _{n}\Vert x_{n} -x_{n-1}\Vert \to 0\), then
Using condition (C5), we obtain
Similarly, from Lemma 3.5, we get
that is,
This implies that
Since \(\mu \in (0,1)\), thus we have
hence
Clearly, from (3.1)
and
Therefore from (3.22)–(3.25), we obtain
Since \(\{x_{n}\}\) is bounded, there exists a subsequence \(\{x_{n_{k}}\}\) of \(\{x_{n}\}\) such that \(x_{n_{k}} \rightharpoonup z \in H\) and
By (2.2), we have
Combining (3.27) and (3.26), it is easy to see that
Since C is closed and convex, \(z \in C\). From (3.21) and (3.23), we get \(y_{n_{k}} \rightharpoonup z\) and \(w_{n_{k}} \rightharpoonup z\), and by the definition of the subdifferential of f and (3.12), we have
Passing to the limit in the last inequality as \(k \to \infty \) and using assumptions (A1) and (A3) together with the fact that \(\lim_{k\to \infty }\lambda _{n_{k}} = \lambda >0\), we get \(f(z,y) \geq 0\) for all \(y \in C\). Hence \(z \in \operatorname{EP}(f)\). Furthermore, from (3.22) and (3.23), we have \(z_{n_{k}} \rightharpoonup z\). Since \(I-S_{i}\) is demiclosed at zero for each \(i \in \mathcal{N}\), it follows from (3.20) that \(z \in F(S_{i})\) for each \(i \in \mathcal{N}\). Therefore \(z \in Sol = \operatorname{EP}(f)\cap \bigcap_{i=1}^{\infty }F(S_{i})\).
Using Lemma 2.5(ii), Lemma 3.6, and (3.28), we arrive at \(\Vert x_{n} - v\Vert \to 0\). This implies that \(\{x_{n}\}\) converges strongly to v.
Case 2: Suppose that \(\{\varGamma _{n}\}\) is not monotonically decreasing, that is, there is a subsequence \(\{\varGamma _{n_{j}}\}\) of \(\{\varGamma _{n}\}\) such that \(\varGamma _{n_{j}}<\varGamma _{n_{j}+1}\) for all \(j \in \mathcal{N}\). Then, by Lemma 2.6, we can define an integer sequence \(\{\tau (n)\}\) for all \(n \geq n_{0}\) by
Moreover, \(\{\tau (n)\}\) is a nondecreasing sequence such that \(\tau (n) \to \infty \) as \(n \to \infty \) and \(\varGamma _{\tau (n)} \leq \varGamma _{{\tau (n)}+1}\) for all \(n \geq n_{0}\). From (3.6) and (3.15), we can show that
By a similar argument as in Case 1, we can also show that
Also from Lemma 3.6, we get
Hence, we have
This implies that \(\lim_{n\rightarrow \infty }\varGamma _{{\tau (n)}+1} = 0\). Thus, we have
Therefore \(\Vert x_{n} -v\Vert \to 0\), and thus \(x_{n} \to v\). This completes the proof. □
Remark 3.8
We highlight our contributions as follows:
- (i)
It is easy to see that if \(\phi (x_{n}) = u\) for an arbitrary \(u \in H\), then Algorithm 3.1 reduces to a modified Halpern-type inertial extragradient method and the result remains valid. This improves the results of Hieu [22] as well as Yang and Liu [48].
- (ii)
In [25], the authors presented a weak convergence theorem for approximating a solution of a pseudomonotone EP, provided the stepsize satisfies a certain mild condition which requires a prior estimate of the Lipschitz-like constants \(c_{1}\) and \(c_{2}\). In this paper, we proved a strong convergence result without a prior estimate of the Lipschitz-like constants. This is very important since there is no best estimate for the Lipschitz-like constants.
- (iii)
Our result also extends the results of [23, 24, 42] to a self-adaptive inertial subgradient extragradient method with the viscosity approximation method.
4 Numerical examples
In this section, we give some numerical examples to support our main result. All the optimization subproblems are solved effectively by the function quadprog in MATLAB. The computations are carried out in MATLAB on a Lenovo X250 with an Intel(R) Core i7 vPro processor and 8.00 GB of RAM. We show the numerical behavior of the sequences generated by Algorithm 3.1 and also compare its performance with Algorithm 1.2 of Hieu [22] and Algorithm 1.3 of Yang and Liu [48].
Example 4.1
We consider the Nash–Cournot equilibrium model in [42] with the bifunction \(f:\mathbb{R}^{N} \times \mathbb{R}^{N} \to \mathbb{R}\) defined by
where \(q \in \mathbb{R}^{N}\) and P, Q are two matrices of order N such that Q is symmetric positive semidefinite and \(Q-P\) is negative semidefinite. In this case, the bifunction f satisfies (A1)–(A4) with \(c_{1} =c_{2} = \frac{\Vert P-Q\Vert }{2}\); see [42], Lemma 6.2. The vector q is generated randomly and uniformly with entries in \([-2,2]\), and the two matrices P, Q are generated randomly such that their required properties are satisfied. For each \(i \in \mathcal{N}\), we define the multivalued mapping \(S_{i} : \mathbb{R}^{N} \to 2^{\mathbb{R}^{N}}\) by
It is easy to show that \(S_{i}\) is a \(\kappa _{i}\)-demicontractive mapping with \(\kappa _{i} = \frac{4i^{2}+8i}{4i^{2} + 12 i + 9}\in (0,1)\) and \(F(S_{i})=\{0\}\) for each \(i \in \mathcal{N}\). Let \(\lambda _{1} = 0.07\), \(\mu = 0.7\), \(\alpha = 3\), \(\phi (x) = \frac{x}{4}\), \(\epsilon _{n} = \frac{1}{(n+1)^{1.5}}\), \(\delta _{n} = \frac{1}{(n+1)^{0.5}}\), and for each \(n \in \mathcal{N}\), \(i \geq 1\), let \(\{\beta _{n,i}\}\) be defined by
It is easy to verify that conditions (C1)–(C5) are satisfied. We implement the experiments for various values of N and m as follows:
- Case I:
\(N = 50\) and \(m = 5\),
- Case II:
\(N = 100\) and \(m = 20\),
- Case III:
\(N= 200\) and \(m = 50\),
- Case IV:
\(N = 500\) and \(m = 100\).
For Algorithm 1.2, we take \(\lambda _{n} = 0.07\), \(\alpha _{n} = \frac{1}{n+1}\), while for Algorithm 1.3, we take \(\lambda _{1} = 0.07 \), \(\mu = 0.7\), and \(Sx = \frac{x}{2}\), which is quasi-nonexpansive. We use \(\Vert x_{n+1}-x_{n}\Vert <10^{-4}\) as the stopping criterion for the numerical computation. Figure 1 describes the behavior of \(\Vert x_{n+1}-x_{n}\Vert \) against the number of iterations for Example 4.1. We also show the CPU time taken and the number of iterations for each algorithm in Table 1.
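The displayed definition of f is not restated above; in [42] the Nash–Cournot bifunction has the form \(f(x,y) = \langle Px + Qy + q, y-x\rangle \), which we assume here. The sketch below generates data with the stated properties and the constants \(c_{1} = c_{2} = \frac{\Vert P-Q\Vert }{2}\); the particular construction of P and Q is one simple choice of ours, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50

B = rng.standard_normal((N, N))
Q = B @ B.T                    # symmetric positive semidefinite
S = rng.standard_normal((N, N))
P = Q + S @ S.T                # then Q - P = -(S @ S.T) is negative semidefinite
q = rng.uniform(-2.0, 2.0, N)  # entries uniform in [-2, 2], as in the text

c1 = c2 = np.linalg.norm(P - Q, 2) / 2.0   # Lipschitz-like constants ([42], Lemma 6.2)

def f(x, y):
    # assumed Nash-Cournot form from [42]
    return (P @ x + Q @ y + q) @ (y - x)

x = rng.standard_normal(N)
print(f(x, x))   # 0.0, so assumption (A1) holds trivially
```

In the experiments each strongly convex subproblem over C is then a quadratic program, which is why quadprog is the natural MATLAB solver.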
Example 4.2
Let \(H_{1}=H_{2}=\ell _{2}(\mathbb{R})\) be the linear spaces whose elements are all 2-summable sequences \(\{x_{j}\}_{j=1}^{\infty }\) of scalars in \(\mathbb{R}\), that is,
with the inner product \(\langle \cdot ,\cdot \rangle : \ell _{2} \times \ell _{2} \to \mathbb{R}\) and \(\Vert \cdot \Vert : \ell _{2} \to \mathbb{R}\) defined by \(\langle x,y \rangle :=\sum_{j=1}^{\infty }x_{j}y_{j}\) and \(\Vert x\Vert = (\sum_{j=1}^{\infty } |x_{j}|^{2} )^{ \frac{1}{2}}\), where \(x=\{x_{j}\}_{j=1}^{\infty }\), \(y=\{y_{j} \}_{j=1}^{\infty }\). Let \(C = \{x\in H : \Vert x\Vert \leq 1\}\). Define the bifunction \(f:C \times C \to \mathbb{R} \) by
Following [44], it is easy to show that f is a pseudomonotone bifunction which is not monotone and that f satisfies the Lipschitz-like condition with constants \(c_{1} = c_{2} = \frac{5}{2}\). Also, f satisfies assumptions (A2) and (A3). For \(i \in \mathcal{N}\), we define \(S_{i} :C \rightarrow \operatorname{CB}(\ell _{2}(\mathbb{R}))\) by
It can easily be seen that \(S_{i}\) is 0-demicontractive for all \(i \in \mathcal{N}\) and \(F(S_{i}) = \{0\}\); also \(Sol = \{0\}\). We define \(\phi : \ell _{2} \to \ell _{2}\) by \(\phi (x) = \frac{x}{2}\), where \(x = (x_{1},x_{2},\dots ,x_{j},\dots )\), \(x_{j} \in \mathbb{R}\), and take \(\lambda _{1} =0.01\), \(\mu = 0.5\), \(\delta _{n} = \frac{1}{10(n+1)} \), \(\epsilon _{n} = \delta _{n}^{2}\), and \(\{\beta _{n,i}\}\) as defined in Example 4.1. For Algorithm 1.2, we take \(\lambda _{n} = 0.01\), \(\alpha _{n} = \frac{1}{10(n+1)}\), and for Algorithm 1.3, we take \(\lambda _{1} = 0.01\) and \(\mu = 0.5\). We test the algorithms for the following initial values:
- Case I:
\(x_{0} = (1,1,0,\dots ,0,\dots )\), \(x_{1} = (1,0,1,\dots ,0,\dots )\),
- Case II:
\(x_{0} = (1,0,0,\dots ,0,\dots )\), \(x_{1} = (0,1,1,\dots ,0,\dots )\),
- Case III:
\(x_{0} = (1,0,1,\dots ,0,\dots )\), \(x_{1} = (1,1,0,\dots ,0,\dots )\),
- Case IV:
\(x_{0} = (1,1,0,\dots ,0,\dots )\), \(x_{1} = (1,0,0,\dots ,0,\dots )\).
We use the stopping criterion \(\Vert x_{n+1}-x_{n}\Vert < 10^{-5}\) for the numerical computation and plot the graphs of \(\Vert x_{n+1} -x_{n}\Vert \) against the number of iterations in each case. The numerical results can be seen in Table 2 and Fig. 2.
Example 4.3
In this example, we take \(H = L_{2}([0,1])\) with the inner product \(\langle x,y \rangle = \int _{0}^{1}x(t)y(t)\,dt\) and the norm \(\Vert x\Vert = (\int _{0}^{1}x^{2}(t)\,dt )^{\frac{1}{2}}\) for all \(x,y \in L_{2}([0,1])\). The set C is defined by \(C = \{x \in H:\int _{0}^{1}(t^{2}+1)x(t)\,dt \leq 1 \}\), and the function \(f:C \times C \to \mathbb{R}\) is given by \(f(x,y) = \langle Ax, y-x \rangle \), where \(Ax(t) = \max \{0, x(t)\}\), \(t \in [0,1]\), for all \(x \in H\). We define the mapping \(S_{i}:H \to 2^{H}\) by \(S_{i}(x)= \{ (-\frac{1+i}{i} )x \} \) for each \(i \in \mathcal{N}\). It is not difficult to show that \(S_{i}\) is demicontractive with \(\kappa _{i} = \frac{1}{1+2i}\), \(F(S_{i}) = \{0\}\), and \(I-S_{i}\) is demiclosed at 0. We take \(\phi (x) = \frac{x(t)}{2}\), \(\mu = 0.05\), \(\lambda _{1} = 0.25\), \(\epsilon _{n} = \frac{1}{(n+1)^{2}}\), \(\delta _{n} = \frac{1}{n+1}\), and \(\{\beta _{n,i}\}\) as defined in Example 4.1. We test Algorithms 3.1, 1.2, and 1.3 for the following initial values:
- Case I:
\(x_{0} = t^{2} + 1\) and \(x_{1} = \sin (2t)\),
- Case II:
\(x_{0} = \frac{\cos (2t)}{2}\) and \(x_{1} = \frac{\sin (3t)}{30}\),
- Case III:
\(x_{0} = \frac{\exp (5t)}{80}\) and \(x_{1} = \frac{\exp (2t)}{40}\),
- Case IV:
\(x_{0} = \cos (2t)\) and \(x_{1} = \exp (-3t^{2})\).
We use \(\Vert x_{n+1}-x_{n}\Vert < 10^{-4}\) as a stopping criterion for the numerical computation and plot the graphs of \(\Vert x_{n+1}-x_{n}\Vert \) against the number of iterations in each case. The numerical results can be seen in Table 3 and Fig. 3.
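The demicontractivity claim in Example 4.3 can be checked directly: for \(S_{i}x = \{-\frac{1+i}{i}x\}\) and \(p = 0\), the inequality \(\Vert S_{i}u - S_{i}p\Vert ^{2} \leq \Vert u-p\Vert ^{2} + \kappa _{i}\Vert u - S_{i}u\Vert ^{2}\) holds with equality when \(\kappa _{i} = \frac{1}{1+2i}\), so the constant is sharp. A discretized check (the grid and test function are ours):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1001)
dt = t[1] - t[0]
u = np.cos(2.0 * t)                              # a test function in L^2([0,1])
sq_norm = lambda x: float(np.sum(x * x) * dt)    # Riemann approximation of ||x||^2

for i in range(1, 6):
    Su = -(1.0 + i) / i * u                      # the single element of S_i u
    kappa = 1.0 / (1.0 + 2.0 * i)
    lhs = sq_norm(Su)                            # ||S_i u - S_i 0||^2
    rhs = sq_norm(u) + kappa * sq_norm(u - Su)   # ||u||^2 + kappa ||u - S_i u||^2
    assert abs(lhs - rhs) < 1e-9                 # equality: kappa_i is sharp
```

The identity is algebraic, so it holds for any inner product; the discretized norm merely makes it concrete.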
5 Conclusion
This paper presents a self-adaptive inertial subgradient extragradient method for finding a solution of a pseudomonotone equilibrium problem and a common fixed point of a countable family of κ-demicontractive mappings in Hilbert spaces. The algorithm consists of an inertial extrapolation step and a viscosity approximation method, while the stepsize is given by a monotone nonincreasing stepsize rule. A strong convergence result is proved without a prior estimate of the Lipschitz-like constants of the pseudomonotone bifunction. Finally, some numerical examples were given to show the efficiency and accuracy of the method relative to related methods in the literature. The result in this paper improves and extends the corresponding results of [1–6, 22, 24, 25, 32, 42, 48].
References
Anh, P.N.: A hybrid extragradient method for pseudomonotone equilibrium problems and fixed point problems. Bull. Malays. Math. Sci. Soc. 36(1), 107–116 (2013)
Anh, P.N., An, L.T.H.: The subgradient extragradient method extended to equilibrium problems. Optimization 64(2), 225–248 (2015)
Anh, P.N., An, L.T.H.: New subgradient extragradient methods for solving monotone bilevel equilibrium problems. Optimization 68(11), 2097–2122 (2019)
Anh, P.N., Muu, L.D.: A hybrid subgradient algorithm for nonexpansive mappings and equilibrium problems. Optim. Lett. 8(2), 727–738 (2014)
Anh, P.N., Quoc, T.D., Son, D.X.: Hybrid extragradient-type methods for finding a common solution of an equilibrium problem and a family of strict pseudo-contraction mappings. Appl. Math. 3, 1357–1367 (2012)
Anh, P.N., Thuy, L.Q., Anh, T.H.: Strong convergence theorem for the lexicographic Ky Fan inequality. Vietnam J. Math. 46(3), 517–530 (2018)
Antipin, A.S.: On a method for convex programs using a symmetrical modification of the Lagrange function. Èkon. Mat. Metody 12, 1164–1173 (1976)
Attouch, H., Goudon, X., Redont, P.: The heavy ball with friction. I. The continuous dynamical system. Commun. Contemp. Math. 2, 1–34 (2000)
Attouch, H., Peypouquet, J., Redont, P.: A dynamical approach to an inertial forward—backward algorithm for convex minimization. SIAM J. Optim. 24(1), 232–256 (2014)
Bauschke, H.H., Combettes, P.L.: A weak-to-strong convergence principle for Féjer-monotone methods in Hilbert spaces. Math. Oper. Res. 26(2), 248–264 (2001)
Bigi, G., Castellani, M., Pappalardo, M., Passacantando, M.: Nonlinear Programming Technique for Equilibria. Springer, Berlin (2019)
Bigi, G., Passacantando, M.: Descent and penalization techniques for equilibrium problems with nonlinear constraints. J. Optim. Theory Appl. 164, 804–818 (2015)
Blum, E., Oettli, W.: From optimization and variational inequalities to equilibrium problems. Math. Stud. 63, 123–145 (1994)
Censor, Y., Gibali, A., Reich, S.: The subgradient extragradient method for solving variational inequalities in Hilbert spaces. J. Optim. Theory Appl. 148, 318–335 (2011)
Censor, Y., Gibali, A., Reich, S.: Extensions of Korpelevich’s extragradient method for variational inequality problems in Euclidean space. Optimization 61, 1119–1132 (2012)
Chidume, C.E., Ezeora, J.N.: Krasnoselskii-type algorithm for family of multivalued strictly pseudocontractive mappings. Fixed Point Theory Appl. 2014, 111 (2014)
Cholamjiak, W., Pholasa, N., Suantai, S.: A modified inertial shrinking projection method for solving inclusion problems and quasi nonexpansive multivalued mappings. Comput. Appl. Math. 37, 1–25 (2018)
Dadashi, V., Iyiola, O.S., Shehu, Y.: The subgradient extragradient method for pseudomonotone equilibrium problem. Optimization (2019). https://doi.org/10.1080/02331934.2019.1625899
Daniele, P., Giannessi, F., Maugeri, A.: Equilibrium Problems and Variational Models. Kluwer Academic, Dordrecht (2003)
Dinh, B.V., Kim, D.S.: Projection algorithms for solving nonmonotone equilibrium problems in Hilbert space. J. Comput. Appl. Math. 302, 106–117 (2016)
Güler, O.: On the convergence of the proximal point algorithms for convex minimization. SIAM J. Control Optim. 29, 403–419 (1991)
Hieu, D.V.: Halpern subgradient extragradient method extended to equilibrium problems. Rev. R. Acad. Cienc. Exactas Fís. Nat., Ser. A Mat. 111, 823–840 (2017)
Hieu, D.V.: Hybrid projection methods for equilibrium problems with non-Lipschitz type bifunctions. Math. Methods Appl. Sci. 40, 4065–4079 (2017)
Hieu, D.V.: New extragradient method for a class of equilibrium problems in Hilbert spaces. Appl. Anal. 97(5), 811–824 (2018). https://doi.org/10.1080/00036811.2017.1292350
Hieu, D.V., Cho, Y.-J., Xiao, Y.-B.: Modified extragradient algorithms for solving equilibrium problems. Optimization (2018). https://doi.org/10.1080/02331934.2018.1505886
Iiduka, H.: A new iterative algorithm for the variational inequality problem over the fixed point set of a firmly nonexpansive mapping. Optimization 59, 873–885 (2010)
Iiduka, H., Yamada, I.: A use of conjugate gradient direction for the convex optimization problem over the fixed point set of a nonexpansive mapping. SIAM J. Optim. 19, 1881–1893 (2009)
Jolaoso, L.O., Alakoya, T.O., Taiwo, A., Mewomo, O.T.: A parallel combination extragradient method with Armijo line searching for finding common solution of finite families of equilibrium and fixed point problems. Rend. Circ. Mat. Palermo (2019). https://doi.org/10.1007/s12215-019-00431-2
Jolaoso, L.O., Alakoya, T.O., Taiwo, A., Mewomo, O.T.: An inertial extragradient method via viscosity approximation approach for solving equilibrium problem in Hilbert spaces. Optimization (2020). https://doi.org/10.1080/02331934.2020.1716752
Jolaoso, L.O., Karahan, I.: A general alternative regularization method with line search technique for solving split equilibrium and fixed point problems in Hilbert spaces. Comput. Appl. Math. 39, 150 (2020). https://doi.org/10.1007/s40314-020-01178-8
Jolaoso, L.O., Oyewole, K.O., Okeke, C.C., Mewomo, O.T.: A unified algorithm for solving split generalized mixed equilibrium problem and fixed point of nonspreading mapping in Hilbert space. Demonstr. Math. 51, 211–232 (2018)
Kim, J.K., Anh, P.N., Nam, Y.-M.: Strong convergence of an extragradient method for equilibrium problems and fixed point problems. J. Korean Math. Soc. 49(1), 187–200 (2012)
Korpelevich, G.M.: The extragradient method for finding saddle points and other problems. Èkon. Mat. Metody 12, 747–756 (1976)
Lorenz, D., Pock, T.: An inertial forward-backward algorithm for monotone inclusions. J. Math. Imaging Vis. 51(2), 311–325 (2015)
Maingé, P.E.: Convergence theorems for inertial KM-type algorithms. J. Comput. Appl. Math. 219(1), 223–236 (2008)
Maingé, P.E.: Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 16, 899–912 (2008)
Maingé, P.E.: Projected subgradient techniques and viscosity methods for optimization with variational inequality constraints. Eur. J. Oper. Res. 205, 501–506 (2010)
Moudafi, A.: Viscosity approximation method for fixed-points problems. J. Math. Anal. Appl. 241(1), 46–55 (2000)
Moudafi, A.: Second-order differential proximal methods for equilibrium problems. J. Inequal. Pure Appl. Math. 4(1), 18 (2003)
Muu, L.D., Oettli, W.: Convergence of an adaptive penalty scheme for finding constrained equilibria. Nonlinear Anal. 18, 1159–1166 (1992)
Okeke, C.C., Jolaoso, L.O., Isiogugu, F.O., Mewomo, O.T.: Solving split equality equilibrium and fixed point problems in Banach spaces without prior knowledge of operator norm. J. Nonlinear Convex Anal. 20(4), 661–683 (2019)
Quoc, T.D., Anh, P.N., Muu, L.D.: Dual extragradient algorithms extended to equilibrium problems. J. Glob. Optim. 52, 139–159 (2012)
Quoc, T.D., Muu, L.D., Nguyen, V.H.: Extragradient algorithms extended to equilibrium problems. Optimization 57, 749–776 (2008)
Rehman, H., Kumam, P., Cho, Y.J., Suleiman, Y.I., Kumam, W.: Popov’s extragradient explicit iterative algorithms for solving pseudomonotone equilibrium problems. Optim. Methods Softw. (2020). https://doi.org/10.1080/10556788.2020.1734805
Rudin, W.: Functional Analysis. McGraw-Hill, New York (1991)
Tada, A., Takahashi, W.: Strong convergence theorem for an equilibrium problem and a nonexpansive mapping. In: Takahashi, W., Tanaka, T. (eds.) Nonlinear Analysis and Convex Analysis. Yokohama Publishers, Yokohama (2006)
Taiwo, A., Jolaoso, L.O., Mewomo, O.T.: Parallel hybrid algorithm for solving pseudomonotone equilibrium and split common fixed point problems. Bull. Malays. Math. Sci. Soc. 43, 1893–1918 (2020). https://doi.org/10.1007/s40840-019-00781-1
Yang, J., Liu, H.: The subgradient extragradient method extended to pseudomonotone equilibrium problems and fixed point problems in Hilbert space. Optim. Lett. (2019). https://doi.org/10.1007/s11590-019-01474-1
Acknowledgements
The authors acknowledge with thanks the Department of Mathematics and Applied Mathematics at the Sefako Makgatho Health Sciences University for making their facilities available for the research.
Availability of data and materials
Not applicable.
Funding
The first author is supported by a postdoctoral research grant from the Sefako Makgatho Health Sciences University, South Africa.
Author information
Contributions
All authors worked equally on the results and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Jolaoso, L.O., Aphane, M. A self-adaptive inertial subgradient extragradient method for pseudomonotone equilibrium and common fixed point problems. Fixed Point Theory Appl 2020, 9 (2020). https://doi.org/10.1186/s13663-020-00676-y